Table of Contents
Welcome Message from the local organizers
What are the Images of Virtuality?
The workshop's Call For Papers & Posters
The workshop's program
Authors & Contributors
Papers presented on the Pre-Workshop Day
Papers presented on the Workshop Day
IFIP WG9.5 "Virtuality and Society" International Workshop on Images of Virtuality: Conceptualizations and Applications in Everyday Life (IoV’09)
WELCOME MESSAGE
Dear colleagues and friends,

It is our great pleasure to welcome you to IoV 2009, the biannual International Workshop of IFIP WG 9.5 on Virtuality & Society, and to beautiful and sunny Athens. Following last year's successful International Working Conference on "Massive Virtual Communities" at Leuphana University Lüneburg, this International Workshop focuses on conceptualizations and applications of virtuality in everyday life.

Our initial aim was to gather together researchers and practitioners on virtuality who come from different disciplines and areas of expertise. As becomes obvious from the workshop's program, we have accomplished this: an exciting program composed of a well-balanced mix of full-paper presentations as well as abstract-driven sessions presenting the latest original research on virtuality. This variety highlights not only the polysemy of the concept of virtuality but also the deep integration and pervasiveness of technologies of virtuality in different practices of our everyday life. To this end, we treat virtuality as a catalyst that (hopefully) will assist us to understand new ideas and reveal the innovative practices that we employ in order to experience urban spaces, information flows and representations, organizational structures, learning dynamics, and socioeconomic distortions.

Of course, none of this would be possible without the contribution of the people who will present their work over the following two days, as well as the support of the Department of Management Science and Technology, the Athens University of Economics and Business, and the IST/OIS lab.

Again, welcome to IoV '09. We hope that you enjoy the program, the colleagues, the city!
Athens, April 2009
Dr. Angeliki Poulymenakou, Anthony Papargyris
IoV '09 local organizers
Images Of Virtuality: a descriptive metaphor*
We use the metaphor of an Image to characterize the perceptual experience of virtuality. Images of Virtuality come in different, unfamiliar forms and shapes: technological, artistic, or metaphysical. At first sight, they are experienced as something new, time-space distanciated, or even as awkward simulacra. They are persistent, or sometimes even 'selfish'. They challenge our familiar, habitualised reality and the taken-for-granted practical knowledge of our everyday life.
Virtuality As Technology
Almost every technological artifact establishes a new, different, virtual way of acting in the world. Information Technology in particular, with its inherent capacity to transform information flows and representations, can supply our everyday life with new trajectories and velocities.
An implementation of the 'Hello World' program using JAVA
The picture above illustrates one of the simplest programs possible in a computer language. Every new student in computing crosses this step: commanding the machine to 'say' the phrase 'Hello World'. A trivia question: who 'says' hello to whom? The programmer to the machine's world, or to the non-virtual world through the machine? Or is it the programmable and ready-to-interact machine that can now say hello to the non-virtual world? Actually, it is a matter of perspective. It is a matter of shifting our experience of that Image of Virtuality, yet from a logical point of view.
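The original figure is not reproduced in this text-only version; a minimal Java sketch of such a program (the class name and the exact greeting string are illustrative assumptions, not the contents of the original figure) could look like this:

    // A minimal 'Hello World' in Java, standing in for the figure discussed above.
    public class HelloWorld {
        public static void main(String[] args) {
            // The machine is instructed to 'say' the phrase to whoever is observing.
            System.out.println("Hello World");
        }
    }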
Virtuality As Art
Beyond the functional dimension of every technological artifact lies the aesthetic dimension: what the artifact itself, as a narrative, reflects to its observers in terms of emotions, imaginations and make-believe fantasies. The technology then becomes a palette of polysemic signs, the very scheme upon which the observer can experience an Image of Virtuality away from the here-and-now.
"Underground Park" by Costas Tsoklis
* The ideas hereafter reflect the thoughts of the workshop's organizers and are not necessarily part of the collective views of IFIP WG 9.5 on virtuality and society. They are briefly presented here in order to stimulate a fruitful discussion on the conceptualization of the term 'Virtuality', but also to contextualize - if not legitimize - the workshop's title.
The picture above was taken at the Ethniki Amyna Metro station in Athens. It shows the artwork named UNDERGROUND PARK by Costas Tsoklis, a synthesis of olive trees and mirrors on the middle level of the station, just below Messogion Avenue. The stairs in the middle lead to another underground level (relative to the park): the platforms.
Virtuality As Metaphysical Singularity
What are the Images of Virtuality? Are they 'real' (enough)? What is their substance? Where do they reside? In a desynchronised and deterritorialized space, or in no space at all? And, above all, do we really need to know the answers to questions like these in order to experience an Image of Virtuality? Or do we just tilt at windmills in a lost dimension?
"Don Quixote" by Pablo Picasso The meaning of the concept of singularity points back to cosmology and the agony for mathematical explanation and representation of the universal laws of the cosmos. It is used here as an oxymoron. The Images of Virtuality do not only lack on descriptive autonomy, but they demand an observer in order to make them 'real', a transcendental, yet a takenfor-granted part of our world. They are meaningful and coherent pieces of human civilization only through acting upon them.
CALL FOR PAPERS
"Images of Virtuality: Conceptualizations and Applications in Everyday Life" An IFIP Working Group (WG) 9.5 "Virtuality and Society" International Workshop April 23-24, 2009, Athens University of Economics and Business, Greece Organisers Angeliki Poulymenakou (akp@aueb.gr) Athens University of Economics and Business
Anthony Papargyris (apaparg@aueb.gr) Athens University of Economics and Business
Workshop Committee
Chrisanthi Avgerou Information Systems and Innovation Group, London School of Economics and Political Science, UK, IFIP TC 9 Chair
Nikolaos Avouris HCI Group, University of Patras, GR
Kevin Crowston Inf. Studies, Syracuse University, USA
Paul C. van Fenema Netherlands Defense Academy, and Tilburg University, NL
The Athens University of Economics and Business (AUEB) and the Department of Management Science and Technology (DMST) are honored to host the IFIP WG 9.5 International Workshop on Images of Virtuality 2009 (IoV'09) in Athens, Greece. The event constitutes an opportunity to bring together theoretical conceptualizations and practical applications of Images of Virtuality manifest in human interaction with information, social, professional and technological systems, art, culture, and nature. Current applications of virtuality make use of various technologies such as Web 2.0 & 3.0, ubiquitous computing with RFID, GIS and GPS, mobile networks, intelligent agents and context-aware systems, to construct a Cyberspace of Virtual Worlds and social networks. Trends in virtuality point towards an integration of such elements at the point where the virtual and the non-virtual meet, into a new digital space augmenting everyday life interactions. The goal of this workshop is to bring together researchers and practitioners interested in presenting and discussing conceptualizations of virtuality and current applications of Information Systems that underpin virtual spaces of interaction.
Dimitris Gouscos New Technologies Laboratory in Communication, Education and the Mass Media, University of Athens, GR
David Kreps Business Management, Salford University, UK
Alexandros-Andreas Kyrtsis Political Science & Public Administration, University of Athens, GR
Niki Panteli School of Management, University of Bath, UK
Dimitris Papalexopoulos School of Architecture, University of Athens, GR
Anthony Papargyris Management Science & Technology, Athens University of Economics and Business, GR
Angeliki Poulymenakou Management Science & Technology, Athens University of Economics and Business, GR
François de Vaujany Institute of Business Administration, Grenoble University, FR
Martin Warnke Computer Science & Culture, Leuphana University Lueneburg, D.
Supported by: www.aueb.gr, www.dmst.aueb.gr

Relevant topics and themes include, but are not limited to:
• Discussing problems of design, construction, adoption, and use of Information Systems in the context of virtuality
• Exploring new (e- or v-) research methodologies and techniques for inquiring into social action in the context of virtuality
• Identifying challenging social, ethical, and political issues of socialization in virtuality
• Discussing the role of digital representation in multi-actor remote collaboration contexts, both professional and social
• Identifying opportunities and challenges for education, governance, and entrepreneurship in Virtual Worlds
• Discussing emerging issues of e-policy and e-quality of life specifically implicated by the Virtual
Full and short research papers are solicited for this event. Both types of contributions will be submitted to a double-blind review process. The workshop will be a full-day event and will be open to a maximum of 50-60 participants. A special session will also be available for virtual presentations (via Skype, Second Life, etc.). On April 23rd, a pre-workshop day is scheduled, with one session of presentations of in-progress research work plus a doctoral research meeting, offering PhD students the opportunity to present and discuss their work in an informal format (posters, case studies and demos are strongly encouraged). Submission instructions and author guidelines are available on the workshop's website.

Proceedings
Both full and in-progress research contributions will be included in a CD-ROM published with an ISBN. The proceedings will be listed in major citation databases such as EBSCO, ABI/Inform, etc. Workshop contributions will also be considered for publication in a Special Issue of a related International Journal.
Sponsored by: istlab.dmst.aueb.gr
Under the patronage of: www.ifip.org

Location
The workshop will be held at AUEB, Evelpidon Building site, in the center of Athens.

Important Dates
• Paper submissions due: February 15, 2009
• Notification to authors: March 8, 2009
• Final paper submissions due: April 5, 2009

For more information send an email to: info@imagesofvirtuality.org
IoV'09 website: http://www.ImagesOfVirtuality.org
Images of Virtuality: Conceptualizations and Applications in Everyday Life
An IFIP WG9.5 "Virtuality and Society" International Workshop
April 23-24, 2009 - Athens University of Economics and Business, Greece
AUEB's Postgraduate Studies (Evelpidon) Building

Thursday 23 April - Pre-workshop Day

08:30 - 09:00, 6th Floor => Workshop Registration & Coffee
09:00 - 09:30, Room 609 => Opening of the pre-workshop day
Welcoming Message from the local organizers. Angeliki Poulymenakou, Anthony Papargyris
Welcoming Message from the Chair of IFIP WG 9.5. Niki Panteli
09:30 - 10:50, Room 609 => Short-Paper Presentations I
Boundary conditions in architecture: the concept of the virtual architectural object. Antonios Moras
From the architect-programmer to the architect-gamer: the videogame as an emergent tool of architectural design. Stylianos Giamarelos
INDEX_MESH: hacking public spaces into common places. Athina Stavridou, Sonia Tzimopoulou, Angela Kouveli, Dimitris Psychogios, Dimitris Papadopoulos, Xenofon Papadopoulos, Michael Georgiou
Aspects of virtuality: a space of conversions. Sonia Tzimopoulou
10:50 - 11:00 => Coffee Break
11:00 - 11:45, Room 609 => Short-Paper Presentations II
Virtual destruction: the erosion of traditional teaching and learning processes? Simran Grewal
Particle Physics and Virtuality: a discussion on researching a virtual distributed community for the LHC at CERN. Will Venters
11:45 - 12:00 => Coffee Break
12:00 - 12:40, Room 609 => Short-Paper Presentations III
Objects in mirror are closer than they appear. Mariella Azzato Sordo, Cristian Alvarez
Virtual Reality Internet Retailing (VRIR): experimental investigation of interactive shopping interface and store atmosphere effects on user-consumer behavior. Ioannis G. Krasonikolakis, Adam P. Vrechopoulos
12:40 - 13:00, Room 607 => Pre-Lunch Presentation
AMBIVALENCE by Chrisdian Wittenburg
13:00 - 14:30, Room 607 => Lunch Break
13:00 - 14:30, Room 607 => Parallel Poster Presentations
Basic analysis for usage of social bookmarking services: distinct platform as a tool for information management. Yoshiaki Fukami, Hideaki Takeda, Ikki Ohmukai, Jiro Kokuryo
Designing online educational games for the next generation. Sofia Mysirlaki, Fotini Paraskeva
New forms of electronic Governance in the Social Web: the Greek case. Eleni-Revekka Staiou
ActClick: bringing together NGOs with active citizens. Elli Kousi
14:30 - 15:30, Room 607 => VIRTUAL CAFÉ: Round table / open discussion on "Conducting research on Virtuality & Society"
15:30 - 16:50, Room 607 => Case Studies
The community of Athens Wireless Metropolitan Network. Kostas Karampelas
Building E-Government platforms through Second Life. William Prensky, Evangelos Syrigos
Virtual Reality: technology framework and case studies. Konstantinos Loupos
16:50 - 17:00, Room 607 => Closing of the pre-workshop day. Angeliki Poulymenakou, Anthony Papargyris

Friday 24 April - Workshop Day

09:00 - 10:00, 6th Floor => Workshop Registration & Coffee
10:00 - 11:00, Room 609 => Paper Presentations I: Concepts of Virtuality
Virtuality: a fertile and prosperous research field. Niki Panteli
Virtual oversight over transnational capitalism. David Kreps, Renfred Wong
11:00 - 11:15 => Coffee Break
11:15 - 12:45, Room 609 => Paper Presentations II: Virtuality & Work
Non-governmental organizations in Greece gone virtual. Symeon Ververidis, Iraklis Varlamis
Cultural and social inhibitors to the adoption of virtuality in the workplace. Elizabeth Duncan
"Clusters of Competence" in the distributed development of grids: a particle physics community's response to the difficulties of global systems development. Avgousta Kyriakidou, Will Venters
12:45 - 14:00 => Lunch Break
14:00 - 15:00, Room 609 => Paper Presentations III: Architecture, Virtuality & Urban Design
E-democracy spaces. Ioannis Orfanos, Dimitris Papadopoulos
Intravein: parametric urbanism. Pavlos Xanthopoulos, Yiannis Orfanos, Brian Dale, Gerard Thomas F. Joson
15:00 - 15:15 => Coffee Break
15:15 - 16:15, Room 609 => Paper Presentations IV: Experiencing Virtuality in Images & Life Events
Virtuality in images. Martin Warnke
Digital death. Stacey Pitsillides, Savvas Katsikides, Martin Conreen
16:15 - 16:30 => Coffee Break
16:30 - 17:30, Room 609 => Paper Presentations V: Virtuality, Law & Social Networking
The legal frame of internet technology of chatting, social networking and virtual worlds. Thomas Papaliagas
E-learning social network analysis for social awareness. Niki Lambropoulos
17:30 - 18:00, Room 609 => Closing of the workshop day: Epilogue. Angeliki Poulymenakou, Anthony Papargyris
18:00 - 18:45, Room 609 => IFIP WG 9.5 Business Meeting
20:30 => SOCIAL EVENT
List of Authors and Contributors [in no particular order]
Mariella Azzato Sordo is Associate Professor in the Department of Design, Architecture and Fine Arts at Simon Bolivar University (U.S.B.), Caracas, Venezuela. Architect (U.S.B., 1991), Specialist in Educational Computing (U.S.B., 1999), and Ph.D. Candidate in Education at Barcelona University, Spain. She has published almost 20 articles on communication philosophies in the design of instructional screens. She is currently Academic Coordinator of Multimedia Services at U.S.B. and is a member of the team of professors designing the Liberal Studies and Arts university degree at U.S.B.

Cristian Alvarez Arocha is Titular Professor in the Department of Language and Literature at Simon Bolivar University (U.S.B.), Caracas, Venezuela. Ph.D. in Literature (U.S.B., 2001). He was Dean of General Studies at U.S.B. (1996-1998). He has published books and almost 40 articles on literary criticism. Some titles of his books are Ramos Sucre y la Edad Media (1990; 1992), Salir a la realidad: un legado quijotesco (1999), La "varia lección" de Mariano Picón-Salas: la conciencia como primera libertad (2003), ¿Repensar (en) la Universidad Simón Bolívar? (2005) and Diálogo y comprensión: textos para la universidad (2006). He currently leads the team of professors designing the Liberal Studies and Arts university degree at U.S.B.

Martin Warnke, born 1955, is head of the computer and media service of the Leuphana University of Lüneburg. He graduated in physics and acquired the status of an associate professor in computer science. Since 1983 he has taught continuously in Lüneburg, Basel and Klagenfurt. In several projects together with artists and students he has developed and produced artistic works on digital media. His interdisciplinary methodology draws on hypermedia, interactive systems, image processing and media technology. He has successfully completed a number of research projects. HyperImage, funded by the German BMBF, is under way (www.hyperimage.eu). During these projects the XML standard PeTAL (Picture Text Annotation Language) has been developed; it is now entering the scientific community. He is second speaker of the IFIP working group "Virtuality and Society".

Niki Panteli is a Senior Lecturer in Information Systems at the University of Bath School of Management, UK, and the Chair of the IFIP (International Federation for Information Processing) WG 9.5 on Virtuality & Society. Her main research interests lie in the area of IT-enabled transformation, virtual teams and virtual collaborations, and computer-mediated communication. She can be contacted at: N.Panteli@bath.ac.uk

Simran Grewal is a Lecturer in Organisational Studies at the University of Bath. Her research interests take an emerging critical stance in exploring the impact of virtual technologies on organizations and society. She has been invited to present her work at international conferences and is currently working on a number of research projects which examine learning, social identity, communication and leadership in virtual settings. Simran is open to the adoption of innovative pedagogical tools and has recently developed a teaching and learning approach which involves the use of Second Life.

Ioannis Orfanos is a PhD Candidate, School of Architecture NTUA / MArch: Design Research Laboratory, AA / MPhil: Architecture - Space Planning, NTUA / Architect NTUA.

Pavlos Xanthopoulos is a PhD Candidate, School of Architecture NTUA / MArch: Design Research Laboratory, AA / Architect NTUA.
Elizabeth Duncan, BSc, MSc (Inf St) is a professional information scientist, with a degree in Physics and Maths from Edinburgh University, an MSc from Sheffield, and a varied career in industry and academia: Edinburgh School of Agriculture, Institute of Occupational Medicine, Commonwealth Bureau of Nutrition, and the University of Aberdeen as research fellow. She set up her own company in 1990, with over 1500 registered remote workers, and currently runs consultancy and seminars on remote work management.
Dr David Kreps is Secretary of the IFIP Working Group 9.5 on Virtuality and Society, and has published widely in Information Systems journals and conferences on eAccessibility and Virtuality. His background in Cultural Studies and Sociology fosters a critical approach to his research in IS. David lectures in Web Development, Emerging Technologies and Technoculture at the University of Salford. d.g.kreps@salford.ac.uk

Dr Sonia Tzimopoulou is an Architect. She graduated in 1996 from the Aristotle University of Thessaloniki with a Diploma in Architecture. In 1998, she received the degree of Master of Science with Distinction in Built Environments: Virtual Environments from the Bartlett School of Graduate Studies, University College London. Her thesis project was entitled "Exploring form creation with evolutionary design". In 2006 she received her PhD degree from the School of Architecture, Faculty of Engineering, Aristotle University of Thessaloniki. She has been working as an architect since 1998. Her research interests are related to the use of IT in architecture. Contact: soni@ath.forthnet.gr

Chrisdian Wittenburg was born in 1965. He holds a Diploma in Psychology (1995) and a Diploma in Fine Arts (2000). He currently works as an artist at U.T.E. e.V. (a charitable foundation which he founded in 1997, www.ute-ev.de).

Sofia Mysirlaki is a postgraduate student at the University of Piraeus. She has a degree in Technology Education & Digital Systems from the University of Piraeus (Greece). She has many years of experience in teaching adolescents and currently works as a computer teacher in primary education. Her research interests include new technologies applied in education. She is currently researching the potential of using digital games and virtual worlds as educational tools.

Fotini Paraskeva is Assistant Professor in the Department of Technology Education & Digital Systems of the University of Piraeus (Greece). She graduated from the Department of Education, Athens University, and holds a Ph.D. in Educational Psychology. She also carries out research on ICT in education with emphasis on the learning environment. She has extensive experience in education, especially teaching adult groups. Her research area covers theories of learning and their applications to the classroom, as well as the fields of ICT in education. Her main interest is in instructional methodologies enhancing the learning environment.

Iraklis Varlamis is a Lecturer at the Department of Informatics and Telematics of Harokopio University of Athens. He is also a Teaching Fellow at the postgraduate programme of Panteion University on Virtual Communities. His research interests range from data mining and knowledge management to virtual communities on the web.

Symeon Ververidis is a teacher at the 3rd Elementary School of Arsakeio Psychikou. He is also a student at the postgraduate programme of Panteion University of Athens, "Virtual Communities: Socio-psychological Approaches and Technological Applications", and an associate of the publications "Kathimerini - Special Editions - The Economist".

Ioannis Krasonikolakis has been a PhD Candidate at the Athens University of Economics and Business (AUEB), Department of Management Science and Technology, since 2008. He investigates virtual reality retailing store atmosphere effects on consumer behavior through an interdisciplinary approach, employing the Information Systems and Marketing disciplines in his research.

Dr. Adam P. Vrechopoulos is an Assistant Professor of Digital Media and Personalized Services at the Athens University of Economics and Business (AUEB), Department of Management Science and Technology. Since 2003, he has been the Scientific Coordinator of the "Interactive Marketing and Electronic Services" (IMES) Research Group of the ELTRUN E-Business Research Center (www.eltrun.gr) at AUEB.
Eleni-Revekka Staiou was born in 1984 in Athens. She has an undergraduate degree in Communication and Mass Media from the University of Athens and an MSc in Communication, Information and Society from the London School of Economics and Political Science (LSE). She is now a Doctoral Candidate in the Faculty of Communication and Mass Media Studies and a Researcher in the Laboratory of New Technologies in Communication, Education and the Mass Media, University of Athens.

Yoshiaki Fukami is a Doctoral student at the Graduate School of Media and Governance, Keio University. His research topics are technologies for handling information that represents context, and the development and use of user-generated metadata. His email address is yofukami@sfc.keio.ac.jp

Hideaki Takeda, Ph.D., is Professor at the National Institute of Informatics and the University of Tokyo. His interests include knowledge sharing systems such as ontologies and community support systems, robotics such as intelligent artifacts, and design theory. Website: http://wwwkasm.nii.ac.jp/~takeda/index.html

Ikki Ohmukai, Ph.D., is Assistant Professor at the Digital Content and Media Sciences Research Division, National Institute of Informatics. His research fields are the Semantic Web, information and knowledge sharing, and community informatics. Website: http://www.nii.ac.jp/index.php?action=pages_view_main&page_id=368&lang=english

Jiro Kokuryo, DBA, is Professor at Keio University's Faculty of Policy Management as well as its Graduate School of Media and Governance. He also serves as the Executive Director of the Keio Research Institute at SFC (Shonan Fujisawa Campus). His research and teaching interests center on the development of business and social models that maximize the benefits of information technologies. Website: http://www.jkokuryo.com/en/index.htm
BOUNDARY CONDITIONS IN ARCHITECTURE: THE CONCEPT OF THE VIRTUAL ARCHITECTURAL OBJECT
Antonios Moras
Architect, MA student, National Technical University of Athens

Abstract
This paper focuses on the ontological conditions necessary for the constitution of an architectural object that can be involved in procedures that unfold in real time. It examines the changes inflicted by the constitution of the virtual object, in opposition to the relations of representation-critique (western metaphysics), and the consequences for the understanding of time, the relation of the fragment to a totality, and the dipoles produced by this correlation. Elements from the theory of systems (informational, biological) and the function of force fields are used as media for understanding the emerging conditions, resulting in the importance of examining systems in conditions of crisis and the necessity of their incorporation as catalysts in the re-radicalization of the concept of the object.
1. THE CONCEPT OF FRAGMENT IN WESTERN METAPHYSICS

Western metaphysics is based on the relation of subject-object, in which the dipoles of representation-reality[i] and criticism-representation are interjected, through which any relationship between separate things can be understood. On the two dipoles of representation-reality and criticism-representation, the relation of possible-real emerges internally. Thereby, representation constitutes a possibility of the real, while criticism constitutes a possibility of representation. Anything we are aware of comprises either a representation or a criticism of representation. Representation constitutes a fragmentary expression of the real, and criticism expresses possibility in representation. Therefore, possibility is invariably placed in opposition to reality. It realizes itself, so it has no reality in itself, as it fragmentarily emerges either as criticism or as a representation of reality. The sense of the possible is related to the sense of the real in accordance with the resultants of similarity and constrainment. The possible expresses a pre-existing image of the real and is realized through its forces of attraction to the real. Therefore, even though the possible[ii] constitutes a phantom[iii] entity, due to its attraction to the real it appears as a truthful and faithful copy of the real. The necessary condition for this to happen is the function of constrainment. The function of constrainment filters the totality of the possibilities of the real. Even though anything can exist as a possibility (of a phantom or an image) of the real, not everything can be realized, because then "…the world would become saturated in a clamoring instant and historical time would be annihilated altogether. Everything would not only happen once, but would indeed already have 'happened'." (Kwinter, 2002, p. 7)
In the same manner, the fragment can be understood as part of a totality, as an imperfect representation of a totality. The fragment can be abstracted from a totality but it can never be produced by means of augmentation. It is taken as a given that the rules that condition the fragment are intrinsic to the order of the totality and cannot differ; for then they could not be perceived as such. If the rules of the fragment differ, then they have to be conceptualized as parts of another totality, because otherwise this difference would be contrary to the basis of the unique hypostasis of the object and the favor of a unique point of surveillance that epitomizes god (in religion), the existence of "grand narratives" of universality (philosophy), or the scheme of logos and logocentrism (philosophy and the sciences), to name but a few. The existence of a unique point of surveillance is made possible by conceptualizing an exteriority that can control the versions of reality through the functions of similarity and constrainment.
The consequences of the existence of a unique point of surveillance in architecture can be synopsized in what we call typology. Typology refers to the incorporation of a type either as it is (function of representation) or as a standard that is modified and reinterpreted (function of critique). The prevailing/dominating tradition is considered a fact that cannot be disputed, a carrier of principles (social, political) and meaning (metaphysics). The underlying scheme of tradition is representation, from which derives the appeal to the ideal (grand narrative, logos) as the validating principle. The constant appeal to types and their modification (critique) improves them by adjusting them to varying social standards, and even though this effort is futile, this repetition is no different from the repetition of its birth, the point of conception of the concept of history and its creator, the history of architecture and the history of the architect. The repetition of type and typology validates the new as a new condition that has to be subsumed in the scheme of the continuity of logos. Logos emerges in the sense of the architectural typology as an interiority of architecture. As a consequence, this condition could be conceptualized as historical formalism, the obsession repeated in the sequence and recitation of historical/architectural events/types-typologies. A reality repeated is realized as an image (of an immune-to-time, timeless ideal world) of that exteriority that can control and oversee. The architectural object as a timeless totality is considered a perfectly organized whole, which has a definite and specific function. The relations between its parts are metric, stable and immutable, and usually can be attributed to an ethical-aesthetic rule. Any change, transformation, deformation or destruction of the architectural object is corruptive, as it deviates from its initial, perfect condition, which is totally attributable to an architect-creator[iv]. Architecture is the image of the architect. Fragments can only be conceived as parts or degenerate/unfinished forms that are obliged to discipline themselves and routinize to the rules of the architectural conception and its creator.
2. EXPANDING THE CONCEPT OF FRAGMENT

Expanding the concept of the fragment, the way according to which we conceptualize anything that is fragmented, presupposes a line of thought that does not affiliate with the dipoles of representation-reality and critique-representation, and consequently a line of thought beyond control and surveillance. Instead of examining what a system is composed of, which results in functions of similarity, we should focus on the interactions that its elements are involved in. Instead of dealing with a fixed object, we are facing a world as an opened-up, expanded body of relations, or as a force field. The body and its parts are not immune to the correlations or forces that exist, and as a result neither the totality nor the fragments ascribed to it are. According to Sanford Kwinter (Kwinter, 2002, p. 7), we can assume that a force field comprises micro and macro architectures. The term micro refers to those architectures-correlations that enclose, while the term macro refers to those architectures-correlations that are enclosed. Micro and macro architectures are variable as force fields. They are constituted not as structures comprised of specific elements, but as fields and subfields of forces that are able to diffuse forces and are consequently intrinsically attributed with the characteristic of reconfiguring into different formations. Force fields vary according to the energy or the amount of information that circulates in and through them. Depending on the amount of information, forces are activated or de-activated, stimulated or calmed; in general, information adjusts the dynamics of the system. The first issue that we are faced with is that of the proximity of the object and the fragment. This issue can be resolved with the range/scale of interactions in which an object is involved, compared to what is conceived as fragmented. In this case, the questions of how, inside a force field, something emerges as fragmented, and how a force field can emerge as fragmented, restore metaphysics, as the function of similarity is applied on the level of interactions. The expanding of the concept of the fragment is possible by expanding the actions that it can incorporate, in other words those interactions that are not intrinsic to the conception of the body/field. The intrinsic interactions of the body/field lead us to a conceptualization of the fragment as part of a totality. As a result, we can conceptualize the fragment either in correlation to the discontinuities that emerge in the body/field or in correlation to an exteriority. Nevertheless, both these possibilities can be examined as one, if an error can be conceived as an expression of exteriority, because on any other occasion the fragment could not be understood as such.
We can conclude that the fragment emerges in and through a body/field during an intensive condition that is closely connected to boundary conditions (exteriorities) or to conditions when the system is in crisis (error). The fragment either emerges via the mutation/transformation of the correlations in the field (error), or it constitutes the field where transformation occurs through the application of correlations that are seemingly conflicting (exteriority). The fragment emerges either as a hybrid (exteriority) or as a metastasis (error). As a result, the fragment can be conceptualized as an intrinsic modality of the expanded object. The problem that concerns the ontology of the fragment can be rephrased as: what are the presupposed conditions for the embodiment of the fragment when it emerges? How can we embody it, and what can we do with it / what can it do?
3. THE RECONFIGURATION OF THE CONCEPT OF THE OBJECT AND THE RE-RADICALIZATION OF THE SENSE OF TIME AS INTERIORITY. INTERACTION AND THE VIRTUAL ARCHITECTURAL OBJECT

In the scheme of a single force that acts on an object, the temporality of transformation is the time during which the force applies to the object, therefore an exteriority to the object. The time of the force field is a field of temporalities, temporalities that are connected with the epimerized forces. Time as an interiority of the field appears with different qualities, as it constitutes the cohesive structure and necessary ontological condition of a system, due to the time-transformation coidentity. The way in which a system transforms is the way time transforms, the same way its configurations/morphemes interact. Therefore we have the transition to a kind of time that is qualitative. Instead of a time that imposes itself as normality, time is regarded as a process of becoming. Time is an affect and not an effect. Intensive and not extensive. Time as an effect affects the object by constituting it as an entity which is, and which as a result bears an identity that can be discovered insofar as the entity's limits are finite-closed, because identity is concrete. As a result, any change is enforced by transforming the way in which the object is, the way the object is inscribed as a totality in its environment. This inscription constitutes the object's devotion to the environment, which presupposes the devotion and adaptation of the object to the events that occur in it; therefore, time is normalized as the object loses its interiority, because the only possible movements are directed from the one to the other, from the environment to the object, or from the object to the environment. Within this movement, the western metaphysical scheme is restored. The necessary condition for the emergence of the affect demands the constitution of the object as a system or a body, which is surrounded by other systems-bodies that ontologically can be affected / can interact; hence the sense of the limits of a system and the sense of the limits of a body need to be opened up and expanded to a scale at which their integrity/identity is questioned. Therefore, a definition of the future is possible. The future is not just what might come and consequently would affect the system. The future is a system that is constituted as an open body; thereby the future is open to the logic of the events that took or will take place. "…duration is what differs from itself. Matter, on the other hand, is what does not differ from itself; it is what repeats itself." (Deleuze, 2002, p. 37) Differentiation, though, should not be thought of as tautological with interaction. In such a case, evolutionary time would be considered an interaction only because of its emergence. The necessary condition is the emergence of time as an otherness to itself through duration. In other words, we should avoid the eidetic logic and evolution, what is generally called "the eidetic path"[v], which encloses the element of pure eminence between pre-configured groups.
Instead of the possible-real dipole for any configuration, the virtual-actual relation that affects every configuration is proposed. Actual is what actualizes the virtual, though the actual never fully realizes what the virtual supposes, in a way that something always remains unrealized. For Bergson, the virtual is a power that comes from the past, which stays with us in the present, until it is actualized. It is not something that is incompletely expressed, because it is not prefigured. It is the dynamics of an element that under certain circumstances is actualized within the present. "Virtuality exists in such a way that it actualizes itself as it dissociates itself; it must
dissociate itself to actualize itself. Differentiation is the movement of a virtuality actualizing itself." (Deleuze, 2002, p. 40) Past and present converge in a way that neither one stays intact. This presupposes that the past is never really gone, but is actualized as a recollection in every moment for the whole temporal duration of the world as an expanded body-system. The transition from possible to real is, according to Bergson, anticipated. The relation of virtual and actual is one of surprise, because virtuality, as a contraction of temporalities, promises something different to actuality. Virtuality produces and always includes in itself the possibility of something different from the actual. (Grosz, 2001, p. 11) Instead of a representation, the virtual is intrinsic and a differentiation (an actualized virtuality) in two ways according to Gilles Deleuze: first because no memory is similar to another, since each is singular, and second because it contributes difference, making every moment a new one. (Deleuze, 2002, p. 45) Nobody can directly define what is virtual, because in the transition from being to existing the virtual is already actualized. In the actualization process of the actual, the virtual abolishes itself so that it can reemerge as actual, which produces its own virtual multiplicities. "The virtual thus is not an abstraction, a generality, or an a priori condition. It doesn't take us from the specific to the generic. It increases possibility in another way; it mobilizes as yet unspecifiable singularities, bringing them together in an indeterminate plan." (Rajchman, 2000, p. 11) Therefore, what constitutes the past (or else the tradition) of a configuration cannot be divided from its present, because that would presuppose the past (tradition) as an exteriority of the same configuration. The past is a modality that is borne as a virtual part of the present, but in a way that it (or tradition) is not alien but intrinsic, in a word inattributable. The inattributable is interwoven in architecture with the movement towards the incorporation of interaction as a medium for design. The approaches to interaction can be synopsized in the way that the object results as the synergy of its environment. How can the object emerge through interaction, and how can an object that does not have a standard form/function transform by interacting with whoever is using it? The emphasis put on the process of the production of the object through interaction appeared with the first efforts to deal with the management of the environment as information (quantification of the parameters of the environment, extensive entities). The management of the environment as information presupposes the conception of the environment as a space of flows, where information can be transmitted smoothly, as well as the design of a protocol of quantification (transforming qualities into information) of qualitative attributes. There are, generally, three types of functions that a system performs with information (Kwinter, 2002, p. 23): (i) it imports information from its environment (this modifies both the system that imports and its environment), (ii) it exports information to its environment, producing this double effect, but this time as an asymmetrical eversion, (iii) it transfers information from certain levels of the system to different levels of the same system, causing events that could be unpredictable with respect to the structure of the system, the process and the scale of the side-effects.
By importing information, a system modifies its own function. As a system functions according to the information that it imports, neither every kind nor every amount of information is valuable, as there might be an amount that is not incorporated, either because of information incompatibility or because of system overload. Most of the time, it is impossible to detect incompatible information with regard to the transformation of a system, but such information affects a system, usually at extreme conditions, by cushioning its transformation. In other words, the amount rather than the type of information is more important to the function of the system. This approach was widely used by architects-urbanists with structuralist backgrounds since the 1960s. The main critique was focused on the behaviorist models used to design the protocols for the quantification/digitalization of qualitative features. The answer to the problem of quantification/digitalization of qualitative features, such as the knowledge that derives from the study of a context, came with the use of the diagram. Between digitalization and knowledge, it would be wrong to define digitalization as one of the many forms that knowledge can take, as if knowledge were an instrument that formulates. On the contrary, digitalization/quantification is a technology of power and has to be understood ontologically, by means of a diagram, in the sense of Deleuze or Foucault, as something that refers to panopticism or the fold, as the activation of a power in unformed matter and the non-finite function, producing formalized matter as visibilities and functions that can be defined as assertions. When Deleuze writes about Foucault's panopticism he refers to it as
a diagram of power (Deleuze, 1986, p. 38). The diagram of relations between forces is a non-unifying immanent cause coextensive with the whole social field. "It is precisely because the immanent cause, in both its matter and its function, disregards form that it is realised on the basis of a central differentiation, which, on the one hand, will form visible matter, on the other will formalise articulable functions." (Deleuze, 1986, p. 38) To diagram a space is to carte this space and not to calque it: a map that does not map anything preexisting but, on the contrary, designates zones of indistinction from which becoming processes might emerge, if they are not already unfolding without being understood. In other words, social space cannot be rendered with Cartesian coordinates, as it always envelops many sub-spaces that insert distances and adjacencies of a kind that is impossible to calque. (Kwinter, 2002, p. 49) A calque is a process that depicts points and should be understood as a correspondence. A carte, on the contrary, designates areas that bear a dynamic, and it is not known whether or not they will have a becoming. Zones of indistinction are epigenetic[vi]. When Deleuze discusses the fold as that which mediates between virtuality and actuality, he uses the term as a diagram … It is not the organisation of matter into some visible form, nor the finalisation of matter into function. Rather it is the virtual relations of force that destabilise the determinable and the articulable into the new. (Jackson, p. 11) The concept of the diagram indicates that what is conceived as the architectural environment/context should interconnect its constituent parts and the interactions between them, and not act as a controlling or hierarchical device. This requirement demands a non-mediating environment, an environment that can be ontologically conceived as a series of rules of interaction. A series differs from a set, a class, a type, or a totality in that it remains open to forces of divergence and deviation. (Rajchman, 2000, p. 62) We could argue that a series consists of intensive instead of extensive (DeLanda, 2005, p. 80) quantities, of singularities instead of particularities, as they interfere synergetically in linearity and render it multiple. (Wagensberg, 2003) Accordingly, the architectural environment/context should enable: (i) the coexistence of configurations, (ii) the potential of reciprocal access to them, (iii) the potential of negotiating the terms according to which interaction can happen, and (iv) the opening-up of the configurations in communication and acceptance of the rules of interaction. In such an architectural environment/context, the condition of negotiating the terms according to which the opening-up of configurations can happen is the event of the emergence of the fragment, the event of the expanding of configurations into inexact interaction. It is the event of expanding interaction between anexact configurations. Many models that aim to satisfy the presupposed conditions of the architectural environment/context have been introduced. From the concept of "animate form" by Greg Lynn and the concept of "forcefields" by Sanford Kwinter (both focusing on topology), to the application of the work of sociologist Manuel Castells on the "space of flows" in architecture and Kas Oosterhuis's concept of "swarm architecture", the focal point revolves around the properties of a non-mediating architectural environment/context.
The interaction between users and architectural objects is encountered as a dimension of the same problem.
4. THE ATTRIBUTES OF THE EXPANDED ARCHITECTURAL OBJECT

The expanded architectural object cannot be isolated from its environment; it does not have an exact form but some kind of virtualized form; it has forms in the singular number. It is part of an unstable field of interactions and it cannot be reduced to an elementary item. Instead, it is organized around a hybrid serial genetic algorithm constituting a quasi-interiority instead of an identity. The hybrid serial genetic algorithm provides the conditions for the emergence of a kind of identity that is based on the virtualization of a series of interactions between the configurations of an expanded architectural object. The constitution of a quasi-interiority is necessary for the preservation of the operativity of the object in relation to its environment (larger-scale configurations). This quasi-interiority does not represent the configurations of an object, but it impels the instantaneous conditions of interactions, which are based on serial logic, and it aims at actualizing the virtualities of divergence and deviation that modify the limits of the expanded architectural object and the modalities of its environment with which it can interact, forming new configurations with emerging quasi-interiorities.
References
Kwinter, S. (2002). Architectures of Time: Toward a Theory of the Event in Modernist Culture. The MIT Press, London, England.
Deleuze, G. (2002). Desert Islands and Other Texts 1953-1974. Semiotext(e) Foreign Agents Series, Paris.
Grosz, E. (2001). Architecture from the Outside: Essays on Virtual and Real Space. The MIT Press, Cambridge, Massachusetts / London, England.
Rajchman, J. (2000). Constructions. The MIT Press, Cambridge, Massachusetts / London, England.
Deleuze, G. (1986). Foucault. University of Minnesota Press, Minneapolis.
Kwinter, S. (1993). Soft systems. In B. Boigon (Ed.), Culture Lab 1 (pp. 210-216). Princeton Architectural Press.
DeLanda, M. (2005). Space: extensive and intensive, actual and virtual. In I. Buchanan & G. Lambert (Eds.), Deleuze Connections: Deleuze and Space. Edinburgh University Press.
Wagensberg, J. (2003). Synergy. In The Metapolis Dictionary of Advanced Architecture. Actar, Barcelona.
Jackson, M. Diagram of the Fold: The Actuality of Virtual Architecture. Retrieved from http://www.ifib.uni-karlsruhe.de/web/ifib_dokumente/downloads/mark_jackson.pdf
i. Reality should be understood as this condition that is also closely connected to the sense of the real.
ii. The sense of the possible is also closely linked to the mathematical expression of the probable. The probable follows the laws and theorems of the theory of probabilities.
iii. The possible is a phantom entity, first because it is not realized and second because it does not possess any finite form but is a totality of possibilities fully attributable to and controlled by the real through the constrainment of similarity. An image, as opposed to a phantom entity, constitutes a possibility of reality as a realized representation of the real, while a phantom entity constitutes the totality, the whole, of these images. Derrida connects phantom and phasm etymologically and claims that it is a pure matter of recurrence (see Jacques Derrida, Specters of Marx). He also connects it to the words revenant (the returning dead), spectacle and fantôme, and all the words mentioned before to the sense of the mystical.
iv. The architect can be thought of metaphysically.
v. Eidetic is an Aristotelian term referring to that which is connected with eidos (kind), with the sense of form, with the ideal sense of beings.
vi. "'Epigenesis' is the term used to describe the relatively mysterious process of how form emerges gradually but dynamically out of a formless or homogenous environment or substrate. In embryology – and in much modern philosophy – it remains a theoretical question of how specific features can emerge from nothing, how a differentiated embryo can emerge from the blastula, a field of identical cells." (Kwinter, 1993, p. 214) The term epigenesis refers to the process that takes place in an epigenetic landscape, devised by Conrad Waddington in order to explain morphogenesis using non-linear soft systems. Spuybroek uses the term epigenesis in the same sense.
FROM THE ARCHITECT-PROGRAMMER TO THE ARCHITECT-GAMER: THE VIDEOGAME AS AN EMERGENT TOOL OF ARCHITECTURAL DESIGN
Stylianos Giamarelos
Architect NTUA, Grad Student, School of Architecture NTUA, MArch "Design-Space-Culture"
Postal address: Grammou 33, 14122 N. Iraklio, Athens, Greece
e-mail: giama_wins@hotmail.com

Abstract: The investigation of the possibility of an architect adopting the videogame as yet another tool available in the design process toolbox forms the core axis of this research in progress. Architect Kas Oosterhuis' text "Swarm Architecture II" (2006) serves as a launch pad for this study, which seeks to investigate interrelations that can be established between architecture and gaming concepts. In spite of what he may initially seem to evangelise, the fact is that Oosterhuis reserves the sole role of the programmer for the architect. The task of programming the structure of a game retains – without actually canceling – the primacy of a top-down viewpoint during the architectural design process. Yet an architect-programmer is not granted access to the design benefits an architect-gamer might be able to extract through his immersion within a virtual environment that would involve him in a bottom-up viewpoint during the design process. MVRDV/DSD recently proposed the Spacefighter platform (2007), a project that seems to point in the direction of the architect-gamer. By actually playing a game that attempts to model the multiple levels of complexity of the contemporary design process, based on a multitude of possible interactions of flows of information stemming from global and local databases, the architect can be educated in accurate decision-making and planning. By playing the design game, he will soon discard mere intuition in favour of developing strategies that produce the desired outcomes. Spacefighter helps in shaping our first visions of the videogame as another tool of the design process for contemporary architects, who could rise as both programmers and gamers of space design. Yet Spacefighter is only a platform at the moment: a kind of docking bay for other software that can cooperate with the platform and focus on a more specific scale or other characteristic of the design process. A further exploration of the innate properties of the act of gaming for the production of such software might well end up in the best possible level of reconciliation of our two main design viewpoints (third-person/top-down and first-person/bottom-up) so far.
I. INTRODUCTION

The investigation of the possibility of an architect adopting the videogame as yet another tool available in the design process toolbox forms the core research axis of this study, whose macroscopic aim goes as far as the development of a new type of software that could serve the architectural design process. At its current stage, the study aims at a primary theoretical investigation of the consequences of an instrumental appropriation of concepts stemming from the field of gaming during the architectural design process; an appropriation that is anticipated to lead to a further specification of our demands for the proposed software, as well as an initial layout of its possible form; in short, an assemblage of the basic theoretical qualifications of an initial experimentation in that direction.
II. KAS OOSTERHUIS AND THE ARCHITECT-PROGRAMMER

Architect Kas Oosterhuis' text "Swarm Architecture II" (2006) is regarded as the ideal launch pad for this study. Entering our main subject through the viewpoint of this particular text sets the ground for the ensuing discussion. "Swarm Architecture II" is interpreted as a point of arrival of a specific
direction within the wider research history of this particular Dutch architect, which revolves around interrelations that can be established between architecture and gaming concepts. This particular direction of Kas Oosterhuis' research, whose first appearance dates back to his 1998 texts, as well as the fact that he has so far been the main organiser of two international conferences (Game, Set and Match, 2001 and 2006) whose purpose was to bring together architects, computer programmers and game designers, enables him to emerge as a unique figure within the domain of international architectural bibliography when considering the main targets of this study. By incorporating particular fragments of many of his previous texts in the main body of "Swarm Architecture II", this text can be read simultaneously as a manifesto and as a retrospective. Meanwhile, an overview of the total body of Oosterhuis' texts concerning gaming since 1998 allows for an investigation and clarification of the way in which he uses the term 'gaming' in relation to his architectural production, both at the later stage of the final built product of architecture and at the earlier stage of design, which is the main focus of this study. Throughout these texts, gaming is not always correlated with a specific stage of the architectural design process, so we cannot discern a relevant distinction in the way Oosterhuis uses the term 'gaming' himself. We are thus obliged to adopt a critical approach to his basic statement concerning an architecture interrelated with gaming, which could be conveniently summarised in his aphorism "architecture in the end is a multi-player game" (Oosterhuis, 2006), by investigating if and how he understands that relationship and which stages of the design process it appears to concern. In relation to the design process, Oosterhuis seems to reject the human-centric approach, as well as notions of representation-simulation of the actual. In addition, he does not accept views that regard the computer as a mere "aid" to the human brain. The computer should not just undertake the task of representing a mental conception already and completely processed by the human brain that resorts to the computer only for purposes of its presentation. On the contrary, he argues that computer and architect are equivalent players in the game of design and that they keep providing feedback to each other, in the form of a constant flow of data, throughout the design process as a whole. The architect thus enters a process of exchanging ideas with the machine, while taking advantage of its superior computational powers (Oosterhuis, 2006). This notion of his seems to be integrated within a web formed by some of his fundamental views on space and design, like the ones that follow, which can be traced scattered around, yet constantly re-appearing, in his theoretical texts of the last decade: space and man are equivalent players; the architect should adopt both top-down and bottom-up viewpoints during the design process; form allows function (Oosterhuis, 2001); the architect sets the rules of the game, i.e. his job is to discover the rules that produce life, which is defined as the interesting result of a process of constant evolution (Oosterhuis, 2006). In Oosterhuis' texts, the concept of gaming is mostly used to define the relation of the built product of architecture with its users.
It is perceived as the form of communication and interaction of the building with its users in real time (that is why architecture is defined as a multi-player game), within a wider perception of architecture, space and design as a constant flow (processing-exchange-output) of new data, which can in the end be reduced to a form of computation. Is the act of gaming relevant to the design process itself, though? Upon closer inspection, the way in which the concept of gaming appears in relation to the design process in Oosterhuis' texts seems a bit opportunistic. At the end of the day, in Oosterhuis' general context, 'gaming' could amount to no more than a term that nowadays conveniently delineates a certain design process he has been researching and experimenting with for a much longer period. It is possible to read his own attitude towards architectural practice as a wider ongoing response to Greg Lynn's approach in Animate Form – a response that does not restrict the animation of form to the domain of the design process, but most importantly extends it to the actual life-cycle of the building; that is why Oosterhuis could in fact be evangelising the transition from the animate form to the animate body.

However, such an approach, combined with his programmatic aim of incorporating the computer as an equivalent player in the design process game (as if building and designing were two processes that unfold in parallel, much in the way the virtual and the real nowadays do), leaves innate properties of the act of gaming outside the focus of our research scope. Thus, they remain practically unexplored exactly at the moment when such properties, which could inform the architectural design process through a different path, can already be discovered in today's videogames. In spite of what he may initially seem to evangelise, then, Oosterhuis is in fact concerned with just one end of a (virtual) spectrum concerning the adoption of gaming notions as tools of the architectural design process. The task of programming the structure of a game retains – without actually cancelling – the
primacy of a top-down viewpoint during the design process. Should the architect be restricted to defining only the limits within which the form can oscillate, instead of 'playing' the game himself? By reserving for the architect only the role of the programmer, Oosterhuis cannot grant him access to the design benefits that an architect-gamer might be able to extract.
III. SPACEFIGHTER: TOWARDS THE ARCHITECT-GAMER

It is quite clear that human beings have the unique ability to attain a detached viewpoint of the world. It is this third-person/top-down viewpoint that seems to be of vital importance when architects have to make design decisions, as it enables them to model and manipulate the complex reality of spatial relations. This is why the architect-programmer enjoys his current status – and not only in Oosterhuis' texts. Yet it is also true that human beings who happen to be architects usually rely on spatial information gained through their irreducible first-person/bottom-up viewpoint as feedback for their third-person models of space. The inescapable importance of the bottom-up perspective in design became emphatically apparent in post-war architectural discourse, especially in the field of urbanism (Jacobs, 1961). This divided nature seems to be the source of many problems that concern our view of the world as a whole and not just architectural issues; yet a contemporary design process should at least aim at the best possible reconciliation or integration of these two seemingly separate standpoints (Nagel, 1986).

Through immersion within the virtual environment, an architect-gamer could be experientially incorporated within a bottom-up viewpoint during the design process. His aim would not then be to confirm the results he already anticipates out of a process he intends to define normatively from his top-down viewpoint of the programmer, but to investigate the possibility of enriching the field of that process in-the-making itself. MVRDV/DSD seem to have already made a step in that direction with their Spacefighter project (2007). Spacefighter is a new kind of software. It forms a platform, constituted by an interconnected web of simpler design games, that attempts to model the multiple levels of complexity of the contemporary design process at various scales, based on a multitude of possible interactions of flows of information stemming from global and local databases. By 'getting his hands dirty' and actually playing the Spacefighter game, the architect can be educated in accurate decision-making and planning. By replaying the design game, he will soon discard mere intuition in favour of the development of strategies that produce the desired outcomes. The immersive act of gaming will have influenced his design methodology. His planning decisions in the virtual design field are in constant interaction with other games that run on parallel levels and influence other factors of the gaming field at a global or local scale, with the relevant information being constantly updated, thus influencing the current situation and consequent outcomes through both top-down and bottom-up information.

However, Spacefighter is only a platform at the moment. Its structure forms a kind of docking bay for other software that can cooperate with (or expand) the existing platform and focus on a more specific scale or another characteristic of the design process. A further exploration of the innate properties of the act of gaming for the production of such software might well result in the best reconciliation of our two main design viewpoints (third-person/top-down and first-person/bottom-up) achieved so far.
Thus, the next step of the study could require us to swing to the other end of the pendulum, contrasting Oosterhuis' anti-representational logic with a more "traditional" logic of simulation, a logic that is in the end both representational and human-centric. By intentionally moving the focus of our study to the agent acting within the space we design, we aim at embedding, in the design process itself, logics that regard space as an object of rivalling practices of habitation under constant negotiation. If space is no longer designed on the basis of the typical building program – which attributes elements of spatial identity through a normative logic of typical categorisation – but on the ground of space as a field of possibilities for diverse actions, we soon witness the emergence of the problem of representing those possible actions within the frame of the proposed return to the logic of simulation. Videogames like Nintendo's Majora's Mask (2000) – whose basic idea stems from the film Groundhog Day (1993), putting its protagonist in the awkward position of re-living a cycle of three days that repeat themselves in exactly the same way time and time again – allow us to investigate ways of simulating and representing actions that evolve in space and time as a structure that hosts the
actions of the avatar-agent. The logic of action of that agent within such a field of space-and-actions can endow a given spatial structure with new meanings, breathe life into its inactivated nodes and in fact lead to its radical re-definition by challenging the way it had originally been 'programmed' – resulting in a re-negotiation and re-programming of the spatial structure itself. It is exactly here that the crucial point lies: it highlights the basic advantage of adopting the bottom-up viewpoint as an equivalent player in the design process, and it shapes our first visions of the videogame as another tool of the design process for contemporary architects, who could rise as both programmers and gamers of space design.
REFERENCES

Actar (2003). A+a: Architecturanimation, Barcelona: Col·legi d'Arquitectes de Catalunya.
Bourdieu, P. (2000). Pascalian Meditations, Stanford: Stanford University Press.
Hubers, J.C., van Veen, M. & Kievid, C. (eds.) (2003). Game, Set and Match: Real-time interactive architecture, Delft: Delft University of Technology, Faculty of Architecture.
Jacobs, J. (1961). The Death and Life of Great American Cities, New York: Vintage.
Juul, J. (2005). Half-Real: Videogames between real rules and fictional worlds, Cambridge, Mass.: MIT Press.
Lynn, G. (1999). Animate Form, New York: Princeton Architectural Press.
MVRDV/DSD (2007). Spacefighter: The Evolutionary City (Game:), New York: Actar.
MVRDV (2005). KM3: Excursions on Capacities, Barcelona: Actar.
Nagel, T. (1986). The View from Nowhere, New York: Oxford University Press.
Oosterhuis, K. & Feireiss, L. (eds.) (2006). Game, Set and Match II: On computer games, advanced geometries and digital technologies, Delft: episode publishers.
Oosterhuis, K. (2003). Hyperbodies: Towards an e-motive architecture, Basel: Birkhäuser.
Oosterhuis, K. (1998-2007). Theory texts retrieved from the ONL official website: http://www.oosterhuis.nl/quickstart/index.php?id=45
Poole, S. (2004). Trigger-happy: Videogames and the Entertainment Revolution, London: Little, Brown & Co.
Salen, K. & Zimmerman, E. (2003). Rules of Play: Game Design Fundamentals, Cambridge, Mass.: MIT Press.
Santorineos, M. (ed.) (2006). Gaming Realities, medi@terra, 7th international art + technology festival, Athens: fournos.
INDEX_MESH
Hacking public spaces into common places

Athina Stavridou
Architect N.T.U.A., MSc on architecture - space planning, School of Architecture N.T.U.A. Teaching building construction as a 407 lecturer since 2005 at the School of Architecture N.T.U.A. Collaborator in the postgraduate course on "Information Technology and Architecture: from total design to global planning", N.T.U.A. School of Architecture, since 1999, Coordinator: Prof. D. Papalexopoulos. Design practice since 1995. http://www.ntua.gr/archtech, http://www.archsign.gr

Sonia Tzimopoulou
PhD in Architecture AUTH, MSc Virtual Environments UCL, Diploma in Architecture AUTH

Angela Kouveli
Architect N.T.U.A., MPhil on architecture - space planning N.T.U.A., MArch IaaC (Institute for Advanced Architecture of Catalonia)

Dimitris Psychogios
PhD student (NTUA), MSc on architecture - space planning (NTUA), Architect (NTUA), Civil engineer (T.E.I. Heraklion, Crete), http://www.laba.gr, laba.psycho@gmail.com

Dimitris Papadopoulos
PhD student, School of Architecture, NTUA / MPhil: Architecture - Space Planning, NTUA / MSc: Adaptive Architecture and Computation, Bartlett, UCL / Architect NTUA, http://www.dixtio.net

Xenofon Papadopoulos
PhD student, Dept. of Products and Systems Design, University of the Aegean; Electrical and Computer Engineering degree, NTUA; Software Development Manager, TEI of Athens

Michael Georgiou
Architect N.T.U.A., MSc: Adaptive Architecture and Computation, Bartlett, UCL
INDEX MESH
www.ntua.gr/archtech/index_mesh

A mesh, made tangible by its located nodes and weaved by their multiple possible interconnections, is superimposed on a city map. This technological platform operates as the real-time interactive medium between the existing reality of the city and ephemeral local narrations by transforming observed information into recorded knowledge and this knowledge into new information. The intention of our proposal is the creation of a constantly evolving space indexer for an alternative study of the city: a study that utilizes the potential of information technology and especially the networked interaction among physical locations. This study of the city leads to the creation of nodes, each one of them digital and physical at the same time, located and dispersed in the urban tissue, that encourage dynamic interactions in order to reconfigure the existing. According to linguistics and the philosophy of language, index refers to "an indexical behaviour" (Wikipedia – index – 2009) or action.
Index demonstrates the now, here and we, that is, the time, the place and the people involved. On the other hand, mesh refers to an ad-hoc network in which "all components can all connect to each other via multiple hops and they generally are not mobile" (Wikipedia – mesh networking – 2009). Index refers to the local/physical and mesh to the digital. Index mesh works like "a social indexicality that points to, and helps create, social identity" (Wikipedia – index – 2009), but only through social action that takes place in the city among different populations for a common purpose. Index mesh operates, one may say, like a density indicator, or even better like a temporal localization of activities.
OPEN SOURCE LOGIC

The suggested network is considered a continually evolving technological platform. Its software and hardware entity-interface is constructed in an open source logic. All the possible risks emerging from such a logic are considered challenges for the design team and the users, proposing the activation of new forms of real-time defence. Instead of accepting the given form and software interface as a completed entity, protected by fitted systems that need periodical regulation, the open logic calls for constant vigilance on the part of the users. It therefore shifts responsibility from a central remote authority to dispersed-collective actions of temporarily common intention. This notion is fundamentally based on the principles of the open source software movements. Open source software offers an alternative to the dominant commercial packages by cancelling the monetary value of a product and by providing free access to products of knowledge and technology. By giving the user the ability to hack the software, the open source logic creates a new kind of exchange value: that of creativity. In such a way the software evolves only if the users want it to do so. The responsibilities and the benefits belong to the participants.

We have therefore chosen the programming language Processing in order to create the initial version of the index mesh software. As it is declared on the website: "Processing is a programming language, development environment, and online community that since 2001 has promoted software literacy within the visual arts. Initially created to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing quickly developed into a tool for creating finished professional work as well." This programming language simplifies a variety of procedures for creating visual forms, motion and interaction. It is addressed to people with little or even no previous programming knowledge, without limiting the creativity of an advanced user. Its open distribution has led to the creation of multiple libraries that enhance the capabilities of the core language, and even to different versions that run on mobile phones or program microcontrollers. An open source programming language was a one-way decision in order not only to create but also to conceive the attributes of a common, open, constantly and unexpectedly evolving intensive mesh.

Index mesh, with its interactive constructions-nodes, forces co-presence in an emergent network and finally forms «the material construction of time simultaneity» (Ince, 2003) within a city. The nodes are "entities with a definite identity and yet not defined by an essence but by a process of emergence" (DeLanda, 2006). As DeLanda states, "cities and nation-states must be viewed as physical locales in which a variety of differently scaled social actors (from individual persons to organizational hierarchies) carry on their day-to-day activities. A city, for example, possesses not only a physical infrastructure and a given geographical setting, but it also houses a diverse population of persons; a population of interpersonal networks, some dense and well localized, others dispersed and even shared with other cities" (DeLanda, 2006, pp. 260-261).
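To give a concrete sense of the kind of code the platform builds on, the sketch below is a minimal, hypothetical Processing example, not part of the index mesh software itself, illustrating the claim above that a few lines of Processing already yield an interactive visual form: a handful of illustrative node positions are scattered on the canvas and links appear around the cursor. The node count, canvas size and distance threshold are assumptions made purely for illustration.

// Hypothetical illustration: nodes of a mesh drawn as circles,
// with links appearing between the cursor and nearby nodes.
int nodeCount = 12;                       // assumed number of nodes
float[] x = new float[nodeCount];
float[] y = new float[nodeCount];

void setup() {
  size(600, 400);
  for (int i = 0; i < nodeCount; i++) {   // scatter illustrative node locations
    x[i] = random(width);
    y[i] = random(height);
  }
}

void draw() {
  background(255);
  for (int i = 0; i < nodeCount; i++) {
    // draw a link when the cursor comes close to a node
    if (dist(mouseX, mouseY, x[i], y[i]) < 120) {
      stroke(150);
      line(mouseX, mouseY, x[i], y[i]);
    }
    noStroke();
    fill(30);
    ellipse(x[i], y[i], 10, 10);          // the node itself
  }
}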
A CASE STUDY
All the above notions were applied in a case study for a workshop that took place at the EASA 2007 meeting¹ in the city of Elefsina in Greece. Elefsina has a population of 25,950 inhabitants and an area of 18.5 square kilometres. The city is composed of multiple layers of information (historical, political, social, etc.). To work in an urban tissue means to deal with the non-predetermined and dynamic behaviour of individuals in space. The physical constructions, each intensively located and of common use, where a human activity is developed, host five people at minimum and are perceived as signs in the urban environment.

Workshop participants are called upon to design the constructions-nodes. Each team needs to present the analysis of the selected city spots and the located interactions, along with possible proposed ones. Each final structure will include an interactive technological platform. Based on the data analysis, each team creates a remote structure, an object of reference that extrudes the perceived reality towards one or multiple enriched possibilities. The overlapping of these structures with the pre-existing, along with the actions arising from their interconnection, deforms the mesh and indexes the city of Elefsina in an unexpected way. The analysis of the selected areas includes any form of data selection that raises the participants' interest: from public actions, everyday activities or local customs to embedded functional networks, historical objects, and visual or subjective perception. An important component of the project is the presentation of the interaction of events in the local area each node occupies and of the possible connections among the nodes themselves.

Each construction will include a series of sensors, actuators, software and materials, for example:
1. A pressure pad (sensor) that triggers a projection of linear or non-linear frames which completes the hard-copy presentation.
2. The material to create the "skin" of the construction. This could vary from plain paper, vinyl, plywood or fabric to any other material.
3. A computer program written in Processing and an electronic Arduino board that function as the interaction medium (a sketch of this interaction follows below).

The duration of the projection, the position, the type of narration (linear or non-linear) and the material projected, along with the placement of the projector, the CPU and the pressure pad, are the participants' responsibilities and decisions.
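As a rough sketch of how the interaction medium of such a node could be wired together, the following hypothetical Processing program reads the value that an Arduino board sends over the serial port from the pressure pad and starts a projected video clip once the reading passes a threshold. The port index, baud rate, clip file name and threshold are assumed values chosen for illustration, not part of the workshop brief.

import processing.serial.*;
import processing.video.*;

Serial arduino;          // serial link to the Arduino board reading the pressure pad
Movie projection;        // video clip projected when the pad is pressed
boolean playing = false;

void setup() {
  size(1024, 768);
  // Assumption: the Arduino is the first available serial port and sends one byte per reading
  arduino = new Serial(this, Serial.list()[0], 9600);
  projection = new Movie(this, "narration.mov");   // hypothetical clip name
}

void draw() {
  background(0);
  if (arduino.available() > 0) {
    int reading = arduino.read();            // 0-255 value from the pressure pad
    if (reading > 100 && !playing) {         // threshold of 100 is an assumed calibration
      projection.play();
      playing = true;
    }
  }
  if (playing) {
    image(projection, 0, 0, width, height);  // full-screen playback for the projector
  }
}

// Called by the video library whenever a new frame of the clip is ready
void movieEvent(Movie m) {
  m.read();
}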
OPEN QUESTION

The case study opens a series of questions that could be the object of further research. How to design an interface that is capable of being transformed according to the desires and needs of the people and communities that use it each time? How to manage possible conflicts among different networks? How to manage the possibility of confronting a new, unexpected situation other than the initial one? Finally, the main question is how to think of an object, of a space, of a construction, of an activity, and at the same time design them, while accepting the redefinition of their limits and constraints.
1 EASA 2007 meeting: http://www.easa.tk/, http://www.easa007.gr/

REFERENCE LIST
Batty, M. (2007, March). Complexity in City Systems: Understanding, Evolution and Design. UCL Working Papers Series, paper 11. Retrieved from http://www.casa.ucl.ac.uk
Castells, M. (2002). The Rise of the Network Society. Blackwell Publishing Ltd.
Castells, M. & Ince, M. (2003). Conversations with Manuel Castells. Polity Press.
DeLanda, M. (2006). Deleuzian Social Ontology and Assemblage Theory. In Fuglsang, M. & Sorensen, B. (eds), Deleuze and the Social. Edinburgh: Edinburgh University Press.
Kolarevic, B. (2003). Architecture in the Digital Age: Design and Manufacturing. New York: Spon Press.
Leach, N. & Turnbull, D. (2004). Digital Tectonics. Academy Press.
Lim, CJ. (2006). Virtually Venice. London: The British Council.
MVRDV (2006). KM3: Excursions on Capacity. Actar.
Oosterhuis, K. (2003). Hyperbodies: Towards an E-motive Architecture. Basel: Birkhäuser.
Papalexopoulos, D. (2006). Design issues for architecture. Intelligent Environments 06, Proceedings. Institution of Engineering and Technology, pp. 251-252.
Picon, A. (2004). Architecture and the virtual: towards a new materiality. Praxis: journal of writing + building, No. 6, pp. 114-121.
Thacker, E. (2004). Networks, Swarms, Multitude. Retrieved from http://www.ctheory.net
http://en.wikipedia.org/wiki/Mesh_networking
http://www.interactivearchitecture.org/
http://www.interaction-design.org/
http://robotecture.com/index.php?option=com_weblinks&catid=2&Itemid=4
http://www.ixdg.org/en/about_ixdg/what_is_interaction_design.shtml
http://www.nafplio-tour.gr/
Archtech: http://www.ntua.gr/archtech/
Materia: http://www.materia.nl/
MVRDV: http://www.mvrdv.nl/_v2/
ONL: http://www.oosterhuis.nl/quickstart/index.php
Processing: http://www.processing.org/
Transstudio: http://www.transstudio.com/
ASPECTS OF VIRTUALITY: A SPACE OF CONVERSIONS

Sonia Tzimopoulou
Dr Architect AUTH, MSc Virtual Environments UCL
soni@ath.forthnet.gr
ABSTRACT

This paper presents a part of the problematic developed in my doctoral dissertation, entitled "The Architectural Syntax of Virtual Space", at the School of Architecture of the Aristotle University of Thessaloniki, supervised by professors A. M. Kotsiopoulos, D. Papalexopoulos and S. Zafiropoulos. An underlying issue addressed in the doctoral thesis was how we could describe the conversions of the notion of virtual space over a relatively short time period. At the beginning of the thesis the transformation of the notion of virtual space is described as a synthetic approach to the literature review. A shift in the perception of virtual space is observed: from its conception as an autonomous reality to the merged reality of the physical-digital.

In the nineties, there was an overwhelming enthusiasm for virtual reality as the materialisation of a distinct autonomous reality. People thought that virtual reality would transform their conception of the world, as well as their conception of the notion of being. The increasing power of information technology promised to create simulation environments, novel worlds that would be neither real nor imaginary but somewhere in-between: virtual. Virtual space was not contained or confined by the physical laws of the real. This view was crucial for the formation of the notion of "cyberspace" in the nineties. It is synthesised from a body of references from Baudrillard, Virilio, Wertheim and Stephenson. The worlds of cinema and literature of this period provided feedback to the new emergent kind of space that promised a new reality, immaterial and parallel to the existent one. An emerging kind of literature called "cyberpunk", through the writings of Gibson and Stephenson, described the new world of virtual reality and how it relates to physical space.

On the opposite side lie the writings of Deleuze, Lévy, Massumi, Rajchman, Picon and Papalexopoulos, which guide us towards the definition of "virtual" as that through which the potential or force becomes effective. This view has its roots in the philosophy of Aristotle, who – unlike Plato – considered "ideas" not as subsistent realities but as internal forms of things. The "virtual" adds to, but does not replace, the real, the possible and the actual. In this paper, aspects of virtual space over the previous two decades are unfolded, as discussed in my doctoral dissertation. As a result, a scheme of the analysis and evolution of virtual space emerges, which will be presented extensively.
1. THE TRANSFORMATION OF THE NOTION OF VIRTUAL SPACE

Architects are familiar with the notion of space as it is originally perceived: as a tactile space. The notion of the "virtual" needs to be clarified, as there are different views about it that are not necessarily similar to each other. This paper discusses a concept presented in my doctoral thesis "The architectural syntax of virtual environments" (2006), referring to the transformation of the notion of virtual space through the last two decades. This transformation appeared to be a guiding issue in the thesis as, through the course of the work, it kept redefining the theoretical preliminaries concerning the principal hypothesis related to the architectural syntax of virtual space.
Early on, virtual space was mainly described in terms of interactive systems of three-dimensional digital representations that allowed movement in real time. This view is widespread among programmers and virtual reality researchers. According to Nicholas Negroponte (1995), virtual reality is "the artificial world of three-dimensional images that the computer creates and in which the user can participate and interact with the elements that constitute that 'world', by using his senses" (pp. 126-127).

In the thesis, through the literature review, it is pointed out that in the nineties there was an overwhelming enthusiasm about virtual reality as the incarnation of a separate, autonomous reality. The basic concept was that virtual reality would reform not only our view of the world, but also our view of people and of the world we live in. It referred to the augmented power of information technology to create simulation environments, new worlds that are neither real nor imaginary but somewhere in between: virtual. In the nineties, cyberspace was perceived as a utopian space where the laws of physics and gravitation are violated, and a space onto which the imaginary can be projected. The word "cyberspace", coined by the science fiction novelist William Gibson and popularised in his novel "Neuromancer" (Gibson, 1984), spread widely and influenced technology related to virtual reality. Cyberspace was presented as "a consensual hallucination experienced daily by billions of legitimate operators, in every nation... A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding..." (p. 67)

The writings of Baudrillard and Virilio are characteristic of this era. Their view expressed the distant desire of Plato for the "transcendent" world of the soul. Baudrillard (1983) refers to the notions of "simulation" and "hyper-reality". For him the simulated reality is a hyper-reality, where it is impossible to detach reality from imagination, the real from the non-real. Simulation is the process where the representation of things replaces the things that are represented; in other words, the representations are more important than the "real". Virilio (1994) argues with Baudrillard on the issue of simulation and, instead of the word simulation, he prefers the word substitution. He believes that there is not one reality but two: the actual and the virtual. He holds that the world will live in two realities and two kinds of time. "Reality has become symmetrical. The splitting of reality in two parts is a considerable event which goes far beyond simulation." (Virilio, 1994)

The worlds of literature and cinema fed the enthusiasm for the new emerging kind of space that promised a new immaterial reality, in parallel with the existing one. The genre of literature called "cyberpunk", with the writers William Gibson (1984) and Neal Stephenson (1992) as characteristic representatives, described the new world of virtual reality and how it is combined with physical space. The research into the properties of the "new" space was described by Michael Benedikt (1991) in his book "Cyberspace: First Steps", where he presented views from diverse scientific backgrounds for the description and comprehension of the cyberspace phenomenon.
In the same period, and in the direction of connecting cyberspace to architecture, the volume "Architects in Cyberspace" (1995) was edited, gathering theories and digital works of architects. Another characteristic book is "The Virtual Dimension: Architecture, Representation and Crash Culture" (1998). Other views that support the direction of transcendence were presented in an article by Margaret Wertheim (1999 and 2001) entitled "The Medieval Return of Cyberspace", and in her book "The Pearly Gates of Cyberspace".

On the opposite side lie the writings of Deleuze, Lévy, Massumi, Rajchman, Picon and Papalexopoulos, which guide us towards the definition of "virtual" as that through which the potential or force becomes effective. This view has its roots in the philosophy of Aristotle, who – unlike Plato – considered "ideas" not as subsistent realities but as internal forms of things. The "virtual" adds to, but does not replace, the real, the possible and the actual. The notion of the virtual that escapes from the idea of representation and guides towards processes of creation of new "objects" is pointed out by D. Papalexopoulos (2002).
The notion of the virtual, not as iconic but as that which contains force, was referred to by Deleuze in "Difference and Repetition" and adopted by P. Lévy (1995). Lévy refers to the idea of the virtual as a distinct way of being and as a process of transformation from one way of existing to another. According to him, the virtual is not akin to the iconic, the false, the imaginary or the illusory. Virtualisation is considered a process of evolving from the actual to the virtual. Close to that direction lies the view of Rajchman (1998), who argues that "the actual is what manifests and effectuates the virtual, but the actual never completely shows or activates all that the virtual implies" (p. 115). According to Rajchman, the "virtual" introduces a new way of thinking: "it mobilises as yet unspecifiable singularities, bringing them together in an indeterminate plan" (p. 115).
2. THE MERGED REALITY OF PHYSICAL AND DIGITAL SPACE

The idea of confronting digital and physical space as one emerges if we follow both theoretical research and more recent examples from the field of architecture. In this paper, we will briefly illustrate this point by unfolding basic references related to this field. Digital space is in close relation to physical space. A key point, also proposed in the thesis "The architectural syntax of virtual environments" (2006), was that the syntax of virtual space influences the syntax of space as we traditionally know it.

If we are to observe the evolution of the view that does not consider the virtual and the physical as two distinct realities, we can refer to Lev Manovich (2002). He argues for the notion of "augmented space", which is defined by multiple layers of information in physical space. He mentions the applications of Ubiquitous Computing, Augmented Reality, Tangible Interfaces, Wearable Computers, Intelligent Buildings, Intelligent Spaces, Context-aware computing, Smart Objects, Wireless Location, Sensor Networks and E-paper that are placed under the "umbrella" of augmented space. He holds that architects have the potential to think of material architecture in combination with the new immaterial architecture of the information flow, as a unified whole. Close to this direction lies the view of Peter Anders (1999), who coined the notion of the "cybrid": a hybrid space where cyberspace co-exists with physical space. Marcos Novak (1998) refers extensively to the process of "eversion" as the opposite process to "immersion". Gerard Schmidt (1999) created the phrase "information architecture"; he presents information as a new material for architects and as an extra dimension of architecture. Massumi (2005) argues that a few years ago virtual reality was identical with an artificial reality; now, when we say virtual reality, the emphasis is given to "reality". As Papalexopoulos pointed out, "We observe that the digital dimension is involved directly with the 'physical' in a series of hybrid conditions" (Papalexopoulos, 2005). A new materiality arises as part of this relationship. "It is observed a constructive-structural turn of the 'digital', that has abandoned the quest for iconic spaces and demands the undisrupted connection with the materiality of objects" (Ibid).

The Postgraduate Course on Architecture and Information Technology of the School of Architecture, NTUA, co-ordinated by D. Papalexopoulos (1998-now), is extensively involved with research on the design of space related to the notion of the virtual. A characteristic example was the 3rd prize in the international ACADIA competition for "The Library in the Information Age" (Papalexopoulos, D., Stavridou, A. et al., 1998). This example is crucial as, early on (1998), it first referred to the hybrid condition between virtual and actual in terms of space and proposed a system for the design of a library where physical and digital merge. The issue of the merging of the physical-digital appeared again, that time as a main issue and question in the theme of the ACADIA competition of 2002. Another awarded example in the field of information architecture referred to the interaction in a hybrid space of an information environment, as presented in a private home environment (Sinas, G., Tzimopoulou, S., 2002).

What is more recently adopted is the integration of ambient intelligence into architectural objects. "The autonomy of digital space is fading out, permitting a novel conceptualisation of the familiar to the
architects, "hybrid"", as stated by Papalexopoulos (2006, p.251). As it is further pointed out AmI components, distributed and located are defined as discrete elements. "Interaction design is thus a strong issue for architectural thinking, research and practice, linked to the design of physical/digital dispositifs, prompting but defining in details the activities sheltered"(Ibid). This new perspective leads to a new materiality as it integrates AmI technology in the materials of the construction. "Far from being geopardised by the generalisation of the computer and the development of virtual worlds, materiality will probably remain a fundamental feature of architectural production. ...Beyond perception, our everyday gestures and movements are indebted to our machines and their specific requirements. In such a perspective the impact of the computer may more accurately be described as a reshaping of, rather than an estrangement from physical experience and materiality".(Picon, 2004). To make the correlation of the above with the syntax of architecture a diagram is used. The attempt to use notions from the syntax of space to digital space is strongly influenced by the transformation of the notion of virtual as described in the previous section. The relation between these key concepts is illustrated in the diagram below. The syntax of architecture traverses both physical and digital space, indicating bridges between the syntax of both spaces. "Virtual" appears to go through both physical and digital space. The common ground between the two evolved from the notion of "hybrid" space to a "new materiality". This notion arises as fields of research related to the design of seamless interfaces acquire a density, and support the exchange of information between people, digital information and physical environments.
1. Diagram produced by the thesis
REFERENCE LIST

Anders, P. (1999). Envisioning Cyberspace: Designing 3D Electronic Spaces. McGraw-Hill, pp. 193-209.
Baudrillard, J. (1983). Simulations. New York: Semiotext(e).
Beckmann, J. (ed.) (1998). The Virtual Dimension: Architecture, Representation and Crash Culture. New York: Princeton Architectural Press.
Benedikt, M. (1991). Cyberspace: First Steps. MIT Press (1992).
Deleuze, G. (1968). Difference and Repetition. New York: Columbia University Press (1994).
Gibson, W. (1984). Neuromancer. London: Voyager.
Lévy, P. (1995). Réalité Virtuelle (in Greek). Athens: Editions Kritiki (1999).
Manovich, L. (2002). The Poetics of Augmented Space: Learning from Prada. Retrieved from http://www.manovich.net/
Massumi, B. (2005). Transforming Digital Architecture from Virtual to Neuro. Interview by Markussen, Thomas & Birch, Thomas. Retrieved from http://www.intelligentagent.com
Negroponte, N. (1995). Being Digital (transl. in Greek). Athens: Editions Kastanioti.
Novak, M. (1998). Transarchitectures and Hypersurfaces: Operations of Transmodernity. Architectural Design: Hypersurface Architecture, Vol. 133. London: Academy Group, p. 86.
Papalexopoulos, D. (2002). The re-definition of the architectural reference to technology: "Virtual" and "actual" (in Greek). Postgraduate course on architecture and information technology, School of Architecture, NTUA. Retrieved from http://www.ntua.gr/archtech/lessons/dc/virtualactual-zenetos_files/virtual-actual-zenetos_files/frame.htm
Papalexopoulos, D. (2002). Informal city (in Greek). Architects: www.arch.art.gr, 35/2002. Athens: Ekdotiki 3D, p. 57.
Papalexopoulos, D., Stavridou, A. et al. (1998). Library for the Information Age. 3rd prize, ACADIA Competition. Retrieved from http://www.acadia.org/competition-98/winners.html
Papalexopoulos, D. (2005). The representation of the continuum: Design - Construction - Use. The representation as a vehicle of architectural thought. Volos, pp. 95-102.
Papalexopoulos, D. (2006). Design issues for architecture. Intelligent Environments, IE 06. Institution of Engineering and Technology.
Pearce, M. & Spiller, N. (eds) (1995). Architectural Design: Architects in Cyberspace, Vol. 118. London: Academy Group Ltd.
Picon, A. (2004). Architecture and the virtual: towards a new materiality. Praxis: journal of writing + building, No. 6, pp. 114-121.
Rajchman, J. (1998). Constructions. MIT Press (2000).
Sinas, G. & Tzimopoulou, S. (2002). My_Space_Invaders. Honorable Mention in Professional Category/Information Architecture. In Proctor, G. (ed.) (2002), Thresholds: Design, Research, Education and Practice, in the Space Between the Physical and the Virtual, Proceedings of the 2002 Annual Conference of ACADIA, US. Retrieved from http://www.acadia.org/dde/links/launch/D_207.html
Stephenson, N. (1992). Snow Crash. New York: Bantam.
Tzimopoulou, S. (2006). The Architectural Syntax of Virtual Space. PhD thesis, AUTH. Thessaloniki.
Wertheim, M. (1999). The Pearly Gates of Cyberspace: A History of Space from Dante to the Internet. New York: W.W. Norton & Company.
Wertheim, M. (2001). The Medieval Return of Cyberspace (transl. in Greek). Futura 7: Digital World. Athens: Futura, 223-236.
Wilson, L. (1994). Cyberwar, God And Television: Interview with Paul Virilio. Retrieved from http://www.ctheory.net/articles.aspx?id=62
http://www.media.mit.edu/research/
VIRTUAL DESTRUCTION: THE EROSION OF TRADITIONAL TEACHING AND LEARNING PROCESSES?

Dr. Simran Grewal
University of Bath School of Management
Bath, BA2 7AY, UK

Abstract

Statistics show that the current generation of students is increasingly reliant on Internet technologies as a learning resource, which raises some important questions. If students are increasingly being exposed to Internet-based technologies, how will these technologies affect the processes through which students learn, and as instructors how will they affect the methods that we use to teach them? This work-in-progress paper begins to address these questions by exploring the impact of emerging Internet technologies on traditional teaching and learning practices, and draws upon a practical case study conducted in Second Life to illustrate the ways in which these technologies are influencing pedagogical practice.

Introduction

All around us we are beginning to see evidence of the way in which emerging Internet technologies are eroding more traditional ways of teaching and learning. For instance, many university classrooms are now equipped with network facilities providing course instructors with live access to Internet-based resources. Live network access offers an opportunity to access a wider range of resources in a more efficient way. One such tool which is increasingly used in management education is YouTube. This website provides links to video clips which serve as a useful aid in illustrating contemporary case study examples of work-based issues and scenarios, and helps students to make sense of abstract theoretical constructs. Consequently, less reference is being made to traditional textbook resources, which date very quickly. Another example of how emerging Internet technologies are eroding traditional teaching and learning practices is the use of virtual learning environments such as Moodle, WebCT and Blackboard. Many courses across the higher education sector are now supported with e-learning resources, which provide students with additional support material including lecture slides, links to readings, communication forums and assessment tools such as quizzes and assignment submission functions. University libraries are providing wider access to electronic journals, which reduces the need to stock hard copies of journals whilst allowing more efficient remote access to academic resources. Increasingly, students are turning towards the Internet as a resource for information. Google and Wikipedia have now become powerful first points of reference for students seeking information about concepts and issues related to course topics.

Despite such advances, Prensky (2001) alerts us to the disruptive potential of emerging Internet technologies on the current generation of students. He argues that this group constitutes the first generation of students to grow up with the Internet, and they will have spent their entire lives exposed to computers, videogames, digital music players, video cams and mobile phones. For instance, he notes that the current generation of university students in the USA have spent less than 5,000 hours of their lives reading, yet in excess of 10,000 hours playing video games. Of course, there may now be an even greater divide in these statistics given that the research was conducted 8 years ago. The way in which the Internet is eroding traditional learning methods is not country specific.
In the last few years, research conducted by the European Interactive Advertising Association (EIAA, 2005) highlights that the current generation of European students is dedicating a greater percentage of their time to Internet-related activities, including information gathering, online gaming and online chats, as opposed to watching TV, talking on the phone and reading newspapers and books. The data show that 46% of 15-24 year olds are watching less TV, preferring instead to browse the Internet. It is also increasingly common for people to 'watch' TV while simultaneously using their laptops to carry out multiple IM conversations and surf the Internet. A literal interpretation of these statistics shows that this generation
of students is increasingly reliant on Internet technologies as a learning resource, which raises some important questions: if students are increasingly being exposed to Internet-based technologies, how will these technologies affect the processes through which students learn, and as instructors how will they affect the methods that we use to teach them? This work-in-progress paper begins to address these questions by exploring the impact of emerging Internet technologies on traditional teaching and learning practices and by drawing upon a practical case study conducted in Second Life to illustrate the ways in which these technologies are influencing pedagogical practice.

Prensky (2001) begins to address the first part of this question by making a useful distinction between the current generation of students, whom he describes as 'the digital natives', and their predecessors, 'the digital immigrants'. He suggests that, like natives, the current generation of students have grown up with and been exposed to all things digital and can therefore speak the 'digital language', whereas the digital immigrants were not born into the digital world but were exposed to it at a later stage of their lives. Therefore the ways in which they think and process information will be significantly different. For instance, digital immigrants will probably have spent many hours locked away in a bedroom revising for exams from textbooks and classroom notes with minimal distraction, whereas in stark contrast the digital native will probably supplement the textbook and class notes with a laptop screen displaying a number of tabbed pages providing multiple sources of information. This is often carried out in tandem with checking emails and updates of friends' profiles on Facebook whilst downloading the latest tracks from iTunes onto an iPod: the epitome of multi-tasking. The outcome is that the current generation of students are able to absorb information quickly and from multiple sources, adapt more easily to changes and have remarkably flexible minds. This means that they assume a 'process' rather than a 'content' view of problem solving and searching for information. Accordingly, there is a greater focus on the development of problem-solving skills in a world where information is abundant, rather than on memorising a contained amount of tutor-directed content (Tapscott, 1998). The following table outlines the key distinctions between the digital natives and the digital immigrants.
Table 1: Natives versus Immigrants

DIGITAL NATIVES | DIGITAL IMMIGRANTS
Like receiving information quickly from multiple sources. | Like slow and controlled release of information from multiple media sources.
Like parallel processing and multi-tasking. | Like singular processing and single or limited tasking.
Like processing pictures, sounds and video before text. | Like processing text before pictures, sounds and video.
Like random access to hyperlinked multimedia information. | Like to receive information linearly, logically and sequentially.
Like to network with others. | Like to work independently.
Like to learn 'just in time'. | Like to learn 'just in case'.
Source: Adapted from Times Online (2008)

Multiple Processing of Information

What is the impact of the type of multiple processing of information identified above on the learning process? Mounting evidence suggests that although these virtual technologies are making the information gathering process more efficient by making it simpler to collate information, there is a real danger that processing information in this way can lead to 'skimming' or a surface approach to learning. Consequently, there is a real need to re-visit our current teaching styles and make sure that they are aligned with changes in the ways that students are processing information, or we are in danger of creating a significant divide between teaching methods and learning styles. In particular, we need to
think about developing more specific ways of encouraging a deeper approach towards learning. Proserpio and Gioia (2007) have identified the danger of this mismatch between the ways in which information is processed across generations in the context of the teaching and learning process. What they highlight is the evident risk of creating a divide between learning styles and teaching practices. They suggest that technological developments based on 'out of classroom' media have meant that 15-24 year olds are (typically) more familiar with Internet-based technologies than earlier generations. They term this demographic group the Virtual Generation [or V-Gen], as their increased exposure to 'out of class' virtual media is likely to influence their learning style. As noted earlier, Prensky (2001) also alerts us to the widening gap between today's university students and their teachers, the so-called 'Digital Immigrants'. He argues that instructors need to tailor their teaching to match the skills, experiences and expectations of their 'digital native' students. All too often teaching styles tend to lag a generation behind, as lecturers tend to adopt styles that are consistent with their own learning styles, based on the familiarity of the media they were exposed to in their formative years. Therefore, when teaching the current generation of students we need to be sensitive to changing learning styles, while at the same time remembering that such changes are not distributed uniformly across all members of this generation, and that students will exhibit a range of aptitudes for new technologies.

The Pedagogical Potential of Second Life

A good example of a Web 2.0 technology which has the potential to encourage a richer approach to learning, supports the development of process-based skills, and facilitates productive enquiry is Second Life. Media hype suggests this virtual parallel universe has the capability to transform social, organisational and pedagogical practices as they exist in real life. However, Second Life is not without controversy. In the last year a number of unsavoury incidents have raised the question of the appropriateness of adopting Second Life as an educational platform. For instance, media coverage recently reported Second Life as a breeding ground for terrorist activity, with militants using this virtual world to hunt for recruits and to mimic real-life terrorist attacks against avatars and buildings, incidents known as 'griefings'. These virtual atrocities in Second Life are being used to facilitate the formation of a community of extremists (Guest, 2007). The ability of avatars to engage in promiscuous and sexual activity raises further questions about the appropriateness of using this virtual platform for educational purposes. Yet the presence in Second Life of leading academic institutions such as Stanford, Harvard and Princeton implies a degree of educational merit. If we want to engage with the current generation of students there is a need to design teaching activities through media that these students are familiar with. Second Life offers the potential to develop problem-solving skills through simulation activities and provides an environment where students can engage with other learners through interaction in a 3D environment. If an activity is designed constructively this can provide a rich learning experience for the student. In management education one of the key topic areas often studied is group decision making (Kolb, 1999).
Second Life offers an ideal simulation environment in which students can experience the group decision-making process via a dynamic medium that they are familiar with. For instance, in 2008 a pilot group decision-making activity was designed and implemented in Second Life for 200 first-year undergraduate students on an organisational behaviour course at the University of Bath, to assess the pedagogical suitability of this platform for management education. The rationale for designing this activity was for students to experience conflict and negotiation in the context of group decision making. The pedagogical activity was structured around a role-play scenario in a hypothetical organisation in which students were to participate in a virtual group meeting. Students were allocated pre-defined conflicting roles on an executive board consisting of 5-6 members and were expected to negotiate a change proposal. To enhance the experience of conflict, non-verbal communication channels were restricted so that only chat tools could be used, obstructing the flow of conversation, and the meeting time was limited to 20 minutes. Forty virtual meetings were conducted over a 2-week period. This activity served as a useful exercise in how, as instructors, we can harness the capabilities of Web 2.0 technologies to constructively align our teaching methods with students' learning practices and provide students with a dynamic and rich learning experience based on the type of media they have been exposed to. Although the activity was successful in simulating the issues involved in the group decision-making process, feedback revealed that some of the issues involved in the design and
implementation of the project influenced student opinion of the learning value of such an activity, as the majority of students commented on the practicality of engaging in the project. However, the data generated from the study reveal a significant display of conflict and negotiation behaviours during the course of the virtual meetings, something that was not immediately apparent to the students. A further issue that emerged was language barriers, which affected the participation levels of certain cohorts of students. The international student cohort constituted 25% of the course, and of this percentage approximately 13% were students whose first language was not English. Given that the primary language for communication was English, this group of students was slightly slower to respond and contribute towards the discussion, which led to lower levels of chat contribution during the virtual meetings. Maintaining the flow of conversation was difficult, which resulted in students having to type extremely fast or simultaneously. Often the flow of conversation was difficult to follow due to the pace at which the meeting progressed; this often led to instances of miscommunication. Nevertheless, this very experience brought to the surface a very useful example of intra-group dynamics and highlighted contemporary issues in working with diverse groups.

Conclusion

Emerging Internet technologies are permeating our social and organisational lives, and a consequence of these technological developments is the way in which they are eroding traditional learning styles, as evidenced by changes in the ways that the current generation of students learn. Web 2.0 technologies, though they appear disruptive to traditional learning approaches, offer the potential for students to develop process-based skills through media that they are familiar with. This allows them to engage with the technology and can lead to a richer learning experience. Nevertheless, the level of integration and the degree to which the use of the technology is aligned with the learning objectives of the course will be critical to its success. As such there is a need to reassess our teaching methods in order to engage with the current generation of students, and in practice design our teaching to match students' learning styles so that there is a greater level of constructive alignment between the two. At a more intrinsic level we need to consider the consequences of such developments. The technology facilitates multiple and dynamic processing of information and, consequently, this may be causing a reduction in learners' attention spans, resulting in a surface approach to learning, particularly if there is increased exposure to the technology. Clearly further research is required to ascertain whether these technologies do indeed contribute towards a surface approach to learning.

References

Appleyard, B. (2008). Stooopid… why the Google Generation isn't as smart as it thinks. Times Online, July 20th 2008, available at http://www.timesonline.co.uk, accessed on 30 July 2008.
European Interactive Advertising Association (2005). Available at http://www.eiaa.net/news/eiaaarticles-details.asp?lang=1&id=66, accessed on March 27 2008.
Guest, T. (2007). Second Lives: A Journey Through Virtual Worlds. Hutchinson.
Kolb, J. (1999). A Project in Small Group Decision Making. Journal of Management Education, Vol. 23, No. 1, 71-79.
Prensky, M. (2001). Digital Natives, Digital Immigrants. On the Horizon, MCB University Press, Volume 9, No. 5, October.
Prensky, M. (2001). Digital Natives, Digital Immigrants: Do They Really Think Differently? On the Horizon, MCB University Press, Volume 9, No. 6, October.
Proserpio, L. and Gioia, D.A. (2007). Teaching the Virtual Generation. Academy of Management Learning and Education, Vol. 6, No. 1, 69-80.
Second Life: http://secondlife.com/whatis/economy_stats.php
Tapscott, D. (1998). Growing Up Digital: The Rise of the Net Generation. New York: McGraw-Hill.
OBJECTS IN MIRROR ARE CLOSER THAN THEY APPEAR: A BRIEF REFLECTION ABOUT THE SCREENS AND AN INSTRUCTIONAL METHODOLOGY FOR WRITING DIGITAL IMAGES

Mariella Azzato Sordo
Simon Bolivar University
mazzato@usb.ve
Venezuela
Cristian Alvarez
Simon Bolivar University
cristian.alvarez86@gmail.com
Venezuela
1. ABSTRACT
An object is anything that focuses our attention at a particular time. Our study is concerned with all those objects that form the image placed in front of us to be seen and read on a screen or digital presentation. We start from the assumption that the image redefines its nature when it is adapted to a technological support in order to be shown, and that it requires and admits the formal changes that allow it to be adapted to the new digital medium. In this sense, it is logical to think that if forms are reconfigured in a process of "remediation", as described by Bolter & Grusin (1999), the action of how the image is composed (writing) and how the image is perceived (reading) also changes to adapt to the evolution of digital technologies. Then we as educators ask a question: how can this reflection about the developments the image has undergone in the digital world be of interest to us, if our educational culture has been based primarily on the oral presentation of content and has not exploited the opportunities that graphic codes and digital displays may offer for the reading and writing of the instructional message, beyond viewing the image as a simple illustration? The reason could be linked to the results of the works proposed in the last thirty years by authors such as Busquets, Ll. (1977), Rodríguez Diéguez, J.L. (1978), Aparici, R. y García-Matilla, A. (1978), Costa, J. y Moles, A. (1991), Vilches, L. (1995), Costa, J. (1998), and more recently the work developed by Clark, R.C. & Lyons, Ch. (2004). In these works the authors emphasize, of course, the importance of the image in the processing and understanding of educational content, but they do not take the image beyond the visual medium that accompanies or replaces information. Even though these investigations describe the function of various classifications of the image, they concentrate mainly on its use and study, and do not consider the relationships images have with their support of presentation. This study shows some examples and analyses of instructional screens that give rise to new ways of reading and learning.
2. Analysis of reading and writing the digital image

Traditionally the image appears only functionally related to the basis of some content, and so the use of the medium invites us to see the image within its context. For instance, according to Clark & Lyons (2004) the image can be considered, in their listing of the functions of communication (table 1), as: decorative, representational, mnemonic, organizational, relational, transformational and interpretative. In this traditional vision, we may say, the only relationship between the image-object and its support is one in which the medium is seen as a mere container, a receptacle, offering no options for learning to read beyond the text that accompanies the image. Such is the case of the example given by these authors to show an image whose use is mnemonic (figure 1).
Table 1. Communicative functions of the images. Extracted from Clark & Lyons (2004).
This figure shows various graphics and texts that seek to relate the classification of the capitals of Greek columns to other objects that are easy to recall. As we know, the mnemonic function tries to mimic the real object as closely as possible with an element or word; therefore the context and direct references should appear in the proposed image. However, from our point of view, if we look over the details of these authors' work, we see that the image consists of elements of poor representational quality; direct references to the subject and the contextual elements are absent. The result is a reduced and limited vision that frames the instructional visual problem as one of reading the image as a simple addition to or replacement of content. Thus, when we come across an instructional screen, and this is the reason that justifies the title of our discussion, the reading of the digital image cannot be reduced to the mere perception of objects on the screen or in the mirror, because we run the risk of obtaining a wrong reading, in the same way as happens with educational materials whose contents are read incompletely. The graphics and the screen become a complex image (Catalá, 2005) in which we read not only the graphic codes but also the visual architecture that shows the relationship of all these elements with the support that displays them: the screen. The form in which the image is written in the digital area of the screen tells us things, and that significant form is based on the architecture that results from the digital presentation. This is the approach that should interest educators. So, how can we foster and make better use of the forms offered by the digital display to compose messages that promote new readings and hence new learning?
Figure 1. Mnemonic image from the book Graphics for Learning, page 21 (Clark & Lyons, 2004).
We begin by describing the example shown in figure 1, which, as we have said, poses a simplistic formula for the intended graphic communication. This screen displays six graphic and six textual elements arranged sequentially and divided into two areas. The characteristics of stroke, texture, color, scale, and positioning of each element allow us to analyze and assess the quality of the visual message. This is not an aesthetic problem: it is a matter of putting ourselves in the place of the person receiving the image. Firstly, returning to the example, we can see that the absence of a real object, and of the context in which the subject develops, hinders the use of previous knowledge that could foster, enhance and promote meaningful learning. What hope is there, then, of achieving the long-awaited transfer if the components used and the visual approach limit its opportunities? Consider then a contrasting example based on figure 1. The screen shown in figure 2 takes up again all the elements of the conceptual graphic above, including the size of the screen presentation. However, new considerations have been incorporated into the formal programming and configuration of the display, based on our proposal for reading and writing digital images, which we elaborate in more detail below. The display that we offer is an alternative that presents, for instance, the topic in its real context. The topic is Greek culture; therefore elements related directly to the historical and spatial moment of the content that we wish to explore must be present. In figure 1 there is no element that refers to the general topic in which each element participates, and this forgoes any possibility of establishing relationships that could support other readings beyond the mnemonic function that is the main aim of this screen. Furthermore, in our alternative proposal we looked for real images of each of the Greek capitals and placed them in spatial relationship with the figurative image shown. This allows a direct link between the real and the figurative, so from this point on any figurative picture that refers to one of the capitals will have its memory reference in the real object. Similarly, on each of the figurative images in orange, formal references that define each capital have been placed, together with the text of the common mnemonic standard: door, horn and crown. Analyzed in this way, the reading of this screen offers not only a real opportunity to establish the mnemonic relationship of the displayed content or to obtain simple information, but also the opportunity to reflect on the ways in which the visual composition is built up: to interpret, for example, the use of transparencies, or of the colour and texture absent in those elements in which the warm tone combines with the detail of each capital, or the different scales of each item placed against the real object and the represented object. Finally, there are many other readings that can arise from the proposed image.
Figure 2. Example of the image based on the mnemonic display of Figure 1. Azzato (2009).
It is important to say that our goal is not to decide which screen is better or worse. Both are valid; both place us in different educational contexts. The reason for raising the issue is that the function the image had a few years ago is not the same as the one it has in current digital media presentation. We said at the beginning that, hand in hand with new technological developments, new visual configurations have arrived: new ways of seeing the world through complex images that are shown on digital displays. This means that we have new opportunities for the instructional writing and reading of the image. We must therefore be sensitive to this fact and be aware of the new design considerations that we suggest should be pondered at the moment of writing the digital image. That is why in the next section we sketch a methodology for writing and reading the digital image.
3. Methodology for writing the digital image

We consider the digital image as a microworld in which different levels of information coexist (figure 3). The digital screen ceases to be a simple representational medium for displaying the image, which becomes instead the product of the significant configuration of each of the levels that contain the graphic elements. The digital image is then the result of multidimensional specificities that go beyond placing graphic elements in a presentation space for the subject to be treated. On the contrary, it seeks to give a meaning to each of the separate levels of information, setting the position, scale, shape, texture and colour of each graphic element, and the relationships of proportion and distance between the elements and the limits of the representational space. Each element thus carries a programmatic load, framed at its own level, that allows it to coexist with other graphic elements and to shape the visual communicative configuration.
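To make the idea of coexisting levels more concrete, the following is a minimal sketch, in Java, of a digital image as an ordered stack of levels, each holding one graphic element with the attributes enumerated above (position, scale, shape, texture, colour) inside the limits of the representational space. This sketch is our own illustration rather than part of the authors' methodology; the class and field names, and the values used in the small example, are assumptions chosen only for readability.

// Minimal sketch (our illustration, not the authors' code): a digital image as
// a stack of significant levels, each holding one configured graphic element.
import java.util.ArrayList;
import java.util.List;

public class DigitalImageSketch {

    // One graphic element with the formal attributes named in the text.
    static class GraphicElement {
        final String name;                  // e.g. a chosen graphic object
        final double x, y, scale;           // position and relative size in the representational space
        final String shape, texture, color; // formal attributes of the element

        GraphicElement(String name, double x, double y, double scale,
                       String shape, String texture, String color) {
            this.name = name; this.x = x; this.y = y; this.scale = scale;
            this.shape = shape; this.texture = texture; this.color = color;
        }
    }

    // One significant level: a keyword and the graphic object that gives it meaning.
    static class Level {
        final String keyword;
        final GraphicElement element;
        Level(String keyword, GraphicElement element) { this.keyword = keyword; this.element = element; }
    }

    // The digital image: the limits of the representational space plus the stack of levels.
    final int screenWidth, screenHeight;
    final List<Level> levels = new ArrayList<>();

    DigitalImageSketch(int screenWidth, int screenHeight) {
        this.screenWidth = screenWidth; this.screenHeight = screenHeight;
    }

    void addLevel(Level level) { levels.add(level); }

    public static void main(String[] args) {
        // Hypothetical 1024x768 screen with a single illustrative level; all values are made up.
        DigitalImageSketch image = new DigitalImageSketch(1024, 768);
        image.addLevel(new Level("mathematics",
                new GraphicElement("partial image of an abacus", 80, 120, 0.6,
                                   "cropped rectangle", "wood grain", "warm ochre")));
        System.out.println("Levels in the composition: " + image.levels.size());
    }
}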
Figure 3. Conceptual approach of the digital image. Azzato (2008).
But how do we write this image? We will begin by illustrating the methodology through the example above. The image shown in figure 3 is part of an educational project that aims to develop digital content for mathematics, specifically the topics of factoring, logarithms, vectors, geometry, equations, proportionality, etc. We were therefore asked to write the digital image that would serve to introduce each programme. The example shown is the image that we proposed for the topic of trigonometry. But what was the process that we followed to reach this result? The following chart shows the suggested sequence for writing the digital image:
Figure 4. Methodology for writing the digital image. Azzato (2009).
Firstly we ask: what is the topic, and which elements constitute its context? In our case, as we explained previously, the topic of figure 3 is trigonometry. After defining the item to be represented, we must look for its definition. We searched for a definition of elementary trigonometry such as: mathematical equations that reflect the relationships between the angles and sides of a triangle. With this brief definition we have sufficient information to work with. We highlight in this definition the words that we consider key for talking about trigonometry, for instance: mathematics, triangle, angles, sides, equations. However, once the keywords are selected, they should be analysed etymologically to obtain other threads of information that allow us to develop the concept further; for example, mathematics, a word that refers directly to numbers. It would be very easy to choose an image that contains numbers; that would suffice to make up the desired visual composition. This, however, is not the right path. The reason is that traditionally we seek a direct reference to the text in the image and thus forfeit the opportunity to use other graphic elements that allow us to refer to something beyond the literal reading. What, then, should we do? Where do we get started? First, we must understand that our job as writers of the digital image is to create the significant levels needed for the visual composition of the resulting screen, and that this can promote a cognitive change. If so, each of the graphic selections that we make must refer not only to the item itself, but also to other areas that, even though not directly linked, may extend visual understanding.
In our case we talked about the word mathematics, and we see in figure 3 that the first level contains the partial image of an abacus. Why an abacus? The abacus is considered the oldest instrument of calculation. This image refers directly to primary counting: adding, subtracting, multiplying, which recalls the origin of mathematical calculation. The selection of the abacus is the product of the relationships we established among the various words after the analysis, as stated in point 4 of the methodology. We could, of course, have found another graphic resource. However, as the first significant level, we are sure that the image of the abacus provides at least all the readings above and offers further meanings to the visual proposal for trigonometry. So far we have obtained the graphic object of the first significant level. But selecting the image is not enough. As we explained above, the reading and writing of digital images also depend on the spatial configuration, expressed in positioning, scale, texture and colour, and especially on the relationship of the graphic object with the other elements and with the screen itself. Considering each of these issues allows us to structure the formal meaning that the image takes on at each level. To write the image that speaks to us about trigonometry, we must analyse and obtain as many levels as keywords selected from the definition. Each level will have its own independent meaning, which is not lost when the entire screen brings all its elements together. The digital display is then the resulting complex image that proposes new readings on the topic of trigonometry.
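As a companion to the sketch given earlier, and again only as our own illustration, the keyword-to-level step for the trigonometry example could be written as follows. The abacus for "mathematics" is taken from the text; the graphic objects for the remaining keywords are hypothetical placeholders that the image writer would still have to choose and configure.

// Sketch of the keyword-to-level step for the trigonometry image (our illustration).
import java.util.LinkedHashMap;
import java.util.Map;

public class TrigonometryImageSketch {
    public static void main(String[] args) {
        // Keywords highlighted in the definition of elementary trigonometry.
        String[] keywords = {"mathematics", "triangle", "angles", "sides", "equations"};

        // One significant level per keyword; only the first graphic object is taken
        // from the text (the abacus), the others are hypothetical placeholders.
        Map<String, String> graphicObjectPerLevel = new LinkedHashMap<>();
        graphicObjectPerLevel.put("mathematics", "partial image of an abacus");
        for (int i = 1; i < keywords.length; i++) {
            graphicObjectPerLevel.put(keywords[i], "graphic object to be chosen by the image writer");
        }

        // Each level would then be configured (position, scale, texture, colour)
        // in relation to the other levels and to the limits of the screen.
        graphicObjectPerLevel.forEach((keyword, object) ->
                System.out.println("Level for '" + keyword + "': " + object));
    }
}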
4. Conclusions

We have reflected briefly on the importance of writing and reading digital images on instructional screens. The examples presented have allowed us to understand that, just as the digital display has evolved with technological development, the image too has changed its formal codification to adapt to the various digital media. That is why we have come to think that our readings of the digital image may be insufficient if we consider it only as a simple collection of objects on the screen or in the mirror. As described above, this could well produce incomplete and erroneous readings, leading to a deterioration of the message of any instructional material. The digital image must be written and read from new perspectives, so we propose a methodology for writing the digital image: thirteen steps to obtain the configuration of any visual theme and to focus the instructional delivery of content from new teaching positions. This research is part of the experiences and reflections that have been articulated in the development of educational materials. This is only the beginning. We hope that future work will allow us to expand the picture and to systematize the process of writing and reading digital images.
5. References

Aparici, R., García-Matilla, A. (1987). Lectura de imágenes. Madrid: Ediciones de la Torre.
Bolter, J.D., Grusin, R. (1999). Remediation: Understanding New Media. Cambridge: MIT Press.
Busquets, Ll. (1977). Para leer la imagen. Madrid: Publicaciones ICCE.
Catalá, J.M. (2005). La imagen compleja: La fenomenología de las imágenes en la era de la cultura visual. Barcelona: Universitat Autònoma de Barcelona.
Clark, R.C., Lyons, Ch. (2004). Graphics for Learning. San Francisco: Pfeiffer.
Costa, J. (1998). La esquemática. Visualizar la información. Barcelona: Paidós.
Costa, J., Moles, A. (1991). Imagen y didáctica. Barcelona: Enciclopedia de Diseño.
Rodríguez Diéguez, J.L. (1978). Funciones de la imagen en la enseñanza. Barcelona: Editorial Gustavo Gili, S.A.
VIRTUAL REALITY INTERNET RETAILING (VRIR): EXPERIMENTAL INVESTIGATION OF INTERACTIVE SHOPPING INTERFACE – STORE ATMOSPHERE EFFECTS ON USER-CONSUMER BEHAVIOR Ioannis G. Krasonikolakis krasos@aueb.gr
Adam P. Vrechopoulos avrehop@aueb.gr
ELTRUN – The E-Business Center, Department of Management Science and Technology, Athens University of Economics and Business, 76 Patission Str., 104 34 Athens, Greece. Tel: +30-210-8203731, Fax: +30-210-8203730

Abstract

While store atmosphere constitutes an important store selection criterion in both conventional and electronic retailing, corresponding research in the context of virtual reality retailing (VRR) through the Internet is in its infancy. Similarly, while the effects of web-based retail stores' interactive shopping interfaces on consumer behavior have been investigated extensively to date, relevant research in the context of virtual reality (e.g. Second Life) is generally lacking. The present study explores VRR through the Internet, placing particular emphasis on the underlying literature and on the resulting key research questions that should be addressed. Furthermore, it investigates the determinants of VRR store atmosphere (VRRSA) and their potential effects on consumer behavior. The initial findings imply that VRR possesses characteristics and functionalities that are unique as well as others similar to those of other retailing channels (e.g. conventional and "traditional" web retailing). Finally, the paper sets the future research agenda and highlights emerging research and managerial challenges.
Virtual Reality Dynamics and Indicative Studies

According to The Economist (2006), Second Life counted $1bn USD in 2005 and will grow to over $7bn USD by 2009, managing over $400,000 virtual currency transactions per day and supporting more than 7,000 profitable businesses. Similarly, O'Reilly (2006) reports that many people today interact within Virtual Retailing Environments (VREs) for real economic purposes, apart from conventional and Internet retailing. However, current consumers increasingly expect more entertaining experiences, and not just a simple process for purchasing goods and services, through VRR (Pine and Gilmore, 1999; Postrel, 2003). This implies a growing importance of web site features that not only facilitate the purchase decision process but also provide an enjoyable shopping experience and arouse certain emotions. The following table summarizes relevant conceptual and empirical research on the online store atmosphere and shopping graphical user interface and their effects on online consumer behavior.
Authors and Findings
Burke (1996): Stated that 3D effectiveness in e-commerce lies in its ability to generate a virtual environment for the end-user in which his/her experiences will affect patronizing the physical environment.
Dailey (1999): First discussed web atmospherics.
Klein (2003); Then and DeLong (1999): Reported that product presentation features positively influence consumer responses, such as attitude towards, willingness to purchase from, and willingness to return to the online store.
Pine and Gilmore (1999); Postrel (2003): Current consumers increasingly expect engaging experiences and not just a process to purchase goods and services.
Babin et al. (1994); Childers et al. (2001): Hedonic and utilitarian factors influence consumers' patronage intention.
Eroglu et al. (2000): Online retailers could provide an atmosphere via their website which can affect shoppers' image of and experience with the online store.
Vrechopoulos, O'Keefe and Doukidis (2000): Introduced the term Virtual Store Atmosphere.
Internet Retailer (2005): Product presentation features result in shoppers staying longer and spending more by piquing curiosity about products and by offering experiential value.
Hess (2005); Lohse et al. (2000): Retailers continue to offer improved web site features to enhance the shopping experience, differentiate between shopping sites, and further increase online sales.
Constantinides (2004): Web site design factors, such as layout and product presentation, have the potential to engage consumers in unique and enjoyable experiences.
Mazursky & Vinitzky (2005): Highly vivid interfaces such as 3D virtual stores provide motives, emotions, meanings and communication which are represented objectively.
Papadopoulou (2007): The use of virtual reality for online shopping environments provides a superior customer experience in comparison with conventional web stores.
Table 1: Indicative Relevant Conceptual and Empirical Research

Key Research Questions and Expected Contribution

The key research questions that arose from the literature review are:
• What are the components of the Virtual Reality Retailing Store Atmosphere (VRRSA)?
• Do the VRRSA components affect consumer behavior?
• How do the VRRSA components affect consumer behavior?
In sum, the following research hypothesis could be formulated:
H1: Virtual Reality Retailing Store Atmosphere (VRRSA) affects consumer behavior.
The present study aims to define the VRRSA determinants and develop a corresponding components framework. It also aims to provide evidence regarding causal relationships between VRRSA components and consumer behavior. In terms of managerial implications, it aims to provide design guidelines for implementing effective VRR shopping environments (i.e. interfaces) that meet customer needs and preferences.
References:
Babin, B.J., Darden, W.R. and Griffin, M. (1994). Work and/or fun: measuring hedonic and utilitarian shopping value. Journal of Consumer Research, 20 (4), 644-56.
Boyd, G., & Moersfelder, M. (2007). Global business in the metaverse: money laundering and securities fraud. The SciTech Lawyer, 3 (3), 4-7.
Burke, R.R. (1996). Virtual shopping: breakthrough in marketing research. Harvard Business Review, 74 (2), 120-31.
Childers, T.L., Carr, C.L., Peck, J. and Carson, S. (2001). Hedonic and utilitarian motivations for online retail shopping behaviour. Journal of Retailing, 77 (4), 511-35.
Constantinides, E. (2004). Influencing the online consumer's behavior: the web experience. Internet Research, 14 (2), 111-26.
Dailey, L.C. (1999). Designing the World We Surf In: A Conceptual Model of Web Atmospherics. Chicago: American Marketing Association.
Eroglu, S.A., Machleit, K.A., & Davis, L.M. (2000). Online Retail Atmospherics: Empirical Test of a Cue Typology. In J.R. Evans & B. Berman (Eds), Retailing 2000: launching the new millennium. Proceedings of the Sixth Triennial National Retailing Conference presented by the Academy of Marketing Science and the American Collegiate Retailing Association, 144-150.
Hess, D. (2005). Evolution: online retailing and the ascent of the precision shopping machine. Internet Retailer, August, available at: www.internetretailer.com/article.asp?id=15638
Internet Retailer (2005). Anthropologie uses rich media to build customer relationships. June 9, available at: www.internetretailer.com/dailynews.asp?id=15194 (accessed March 12, 2007).
Klein, L.J. (2003). Creating virtual product experiences: the role of telepresence. Journal of Interactive Marketing, 17 (1), 41-55.
Lohse, G.L., Bellman, S. and Johnson, E.J. (2000). Consumer buying behavior on the internet: findings from panel data. Journal of Interactive Marketing, 14 (1), 15-29.
Mazursky, D., & Vinitzky, G. (2005). Modifying consumer search processes in enhanced on-line interfaces. Journal of Business Research, 58, 1299-1309.
O'Reilly, T. (2006). What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. O'Reilly website, 30th September 2005. O'Reilly Media Inc. Available at: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-Web-20.html
Papadopoulou, P. (2007). Applying virtual reality for trust-building ecommerce environments. Virtual Reality, 11 (2), 107-127.
Pine, B.J. II and Gilmore, J.H. (1999). The Experience Economy. Boston, MA: Harvard Business School Press.
Postrel, V. (2003). The Substance of Style. New York: HarperCollins.
The Economist (2006). Living a Second Life: Virtual economy. The Economist, 28 April, 2006.
Then, N.K. and DeLong, M.R. (1999). Apparel shopping on the web. Journal of Family and Consumer Sciences, 91 (3), 55-68.
BASIC ANALYSIS FOR USAGE OF SOCIAL BOOKMARKING SERVICES: DISTINCT PLATFORM AS A TOOL FOR INFORMATION MANAGEMENT
Yoshiaki FUKAMI, Graduate School of Media and Governance, Keio University, yofukami@sfc.keio.ac.jp
Hideaki TAKEDA, Ikki OHMUKAI, National Institute of Informatics, {takeda | i2k}@nii.ac.jp
Jiro KOKURYO, Faculty of Policy Management, Keio University, jiro@sfc.keio.ac.jp

Abstract

While most social computing platforms, such as blogs and photo/video sharing sites, are designed for uploading and sharing content generated by the users themselves, a social bookmarking service (SBM) is a tool for storing website information for further reference. The information itself is not newly generated but rather scooped off the web by users. In this paper, we provide a basic analysis of how users in fact utilize SBM. Data gathered from a quantitative survey show that SBM is used as a tool for information management, akin to the bookmarking function of browsers installed on local computers. Most users make use of stored data generated by others. In order to effectively utilize the functions of platforms such as SBM and to advance from activities reflecting self-interest to the actual creation of commons, we need to research the specific design of these platforms.
1. INTRODUCTION

Among the array of current social computing platforms (Parameswaran and Whinston, 2007), the social bookmarking service (SBM) occupies a unique position. While blogs and photo/video-sharing sites are designed for uploading and sharing content generated by the users themselves, SBM is a tool that allows users to store website information on the Internet. There have been many studies dealing with what motivates users and makes them participate in user communities on various social computing platforms (e.g. Boyd and Ellison, 2007; Nov and Ye, 2008; Beenen et al., 2004). However, most of these platforms have been designed with a view to supporting communication among users. While content on most social computing services is created by users, SBM provides a means whereby users can store collections of links to web pages which they wish to remember and share. As users can thus share pointers to various categories of websites, SBM is regarded not only as residing in the communications domain, but also as a knowledge sharing platform per se.
2. RESEARCH METHOD

To determine the extent to which SBM is put to use, we conducted a quantitative survey of users of the Japanese social network service (SNS) Buzzurl in May 2007. We invited 'active users' of Buzzurl via email to fill in and return a specific web survey form. 'Active users' were defined by the following criteria (sketched in code below):
• having registered a total of more than ten URLs with more than ten tags attached
• having registered more than one URL in the month leading up to the survey
While 258 users qualified by these criteria, 78 of them (30.2%) returned the completed web questionnaire.
Buzzurl: http://buzzurl.jp/
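For illustration only, the inclusion filter implied by these criteria can be written as a small predicate. The field names, and the reading of the first criterion as per-user totals of registered URLs and attached tags, are our assumptions rather than Buzzurl's actual data model.

// Minimal sketch (our assumptions, not Buzzurl's data model) of the 'active user' filter.
public class ActiveUserFilter {

    static boolean isActiveUser(int totalRegisteredUrls, int totalTagsAttached,
                                int urlsRegisteredInLastMonth) {
        boolean enoughHistory = totalRegisteredUrls > 10 && totalTagsAttached > 10;
        boolean recentlyActive = urlsRegisteredInLastMonth > 1; // "more than one URL" in the last month
        return enoughHistory && recentlyActive;
    }

    public static void main(String[] args) {
        // Hypothetical examples: the counts below are made up for illustration.
        System.out.println(isActiveUser(25, 40, 3)); // true
        System.out.println(isActiveUser(8, 40, 3));  // false: too few registered URLs
    }
}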
3. RESULTS OF ANALYSIS

3.1. REASONS FOR USING SBM

Table 1 shows reasons for using the social bookmarking service. Nearly 90% of subjects answered they use SBM for personal information retrieval. Choices relating to communication with others came to less than 20%. Few users are motivated to share or diffuse contents generated by others.

Table 1. Reasons for using SBM
Reason                                              N    %
total                                               78   100
retrieving personal information                     70   89.7
sharing information with friends and relatives      12   15.4
attracting other users with posted URLs             15   19.2
other reasons                                        4    5.1
3.2. REASONS FOR ATTACHING TAGS

Table 2 shows reasons for attaching tags. Nearly 70% of the subjects stated as their reason "to classify and order URLs" or "to make it easier to search the sites later". While individuals themselves generate tags for information management, the intention of attaching tags is not primarily related to communication among users.

Table 2. Reasons for attaching tags
Reason                                              %     N
total                                               100   78
classifying and ordering URLs                       71.8  56
making it easier to search same sites later         69.2  54
keeping site evaluations                            33.3  26
maintaining tasks or plans related to sites         15.4  12
leaving messages for authors of sites               14.1  11
other purposes                                       0.0   0

These results indicate that SBM may be regarded as a tool for information retrieval. Most users appear to be motivated by their own personal utility, using SBM mostly as a tool for information management and not for communication purposes in the sense of other social computing platforms. Table 3 provides answers to the question of how information generated by other users is in fact put to use. We find that 80% of the subjects make use of accumulated information on SBM in a variety of ways, while the remaining 20% never utilize information registered by others. In other words, most users make mutual use of information generated by others.

Table 3. Uses of information generated by others
Use                                                 %     N
total                                               100   78
accessing popular bookmark lists                    35.9  28
accessing whole user tag clouds                     23.1  18
searching URLs                                      33.3  26
searching tags                                      28.2  22
attaching tags of other users to own sites          33.3  26
attaching comments of other users to own sites      28.2  22
accessing URL lists with postings similar to own    24.4  19
browsing URL lists of others                        11.5   9
others                                               0.0   0
never referring to URLs and annotations of others   19.2  15
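To make the arithmetic behind Tables 1 to 3 explicit: each percentage is simply the count of respondents choosing an option out of the 78 who returned the questionnaire, and since multiple answers were allowed the percentages need not sum to 100. The following is a minimal sketch (our own illustration) reproducing the Table 3 percentage column from the raw counts.

// Sketch reproducing the percentage column of Table 3 from the raw counts (n out of 78).
public class SurveyPercentages {
    public static void main(String[] args) {
        int total = 78;
        String[] options = {
            "accessing popular bookmark lists", "accessing whole user tag clouds",
            "searching URLs", "searching tags",
            "attaching tags of other users to own sites",
            "attaching comments of other users to own sites",
            "accessing URL lists with postings similar to own",
            "browsing URL lists of others", "others",
            "never referring to URLs and annotations of others"
        };
        int[] counts = {28, 18, 26, 22, 26, 22, 19, 9, 0, 15};

        for (int i = 0; i < options.length; i++) {
            double pct = 100.0 * counts[i] / total;   // multiple answers allowed, so columns exceed 100%
            System.out.printf("%-50s %5.1f%% (n=%d)%n", options[i], pct, counts[i]);
        }
    }
}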
4. CONCLUSION

The data gathered in the quantitative survey of users of Buzzurl allows us to make certain assumptions as to the motivation and attitudes of the subjects toward SBM.
• Users treat SBM as a tool for personal information retrieval and management. While some users are interested in the responses of other users to existing URL lists, SBM is primarily regarded as a repository of information gathered on the web. This aspect makes SBM different from other social computing platforms.
• Few users communicate with others on SBM, which works as a tool for individuals. Even though some users are interested in the activities of others, the majority utilizes the annotating function to increase the efficiency of managing their own information. So SBM users rarely tend to accumulate URL lists and annotations for others. It can therefore be said that the utilization of SBM is not directly related to actual communication.
• Users make use of metadata generated by others. Nearly 80% of subjects using SBM answered that they refer to annotations and URL lists posted by others. In short, data accumulated on SBM helps individuals with information retrieval and knowledge management.
Our survey shows that SBM is regarded more as a tool than a communications platform. Even though the function of SBM is to rationalize management of one’s own personal information, many users benefit from information made available thanks to the activities of others. In this regard SBM differs from other social computing platforms such as SNS and photo sharing sites. The particular features of SBM may open up distinctive mechanisms of diffusing benefits among users. The analysis of what exactly triggers specific patterns of use and benefit on SBM while building loyalty to such platforms ought to reveal important elements of architecture where users come to form communities and collaborate to build commons.
REFERENCES
Beenen, G., Ling, K., Wang, X., Chang, K., Frankowski, D., Resnick, P., and Kraut, R. (2004). Using Social Psychology to Motivate Contributions to Online Communities. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, pp. 212-221.
Boyd, D. M. and Ellison, N. B. (2007). Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication, 13(1), 210-230.
Nov, O. and Ye, C. (2008). Community Photo Sharing: Motivational and Structural Antecedents. In Proceedings of the 29th International Conference on Information Systems (ICIS 2008), Paper 91. http://aisel.aisnet.org/icis2008/91.
Parameswaran, M. and Whinston, A. (2007). Social Computing: An Overview. Communications of the Association for Information Systems, 19, 762-780.
DESIGNING ONLINE EDUCATIONAL GAMES FOR THE NE(X)T GENERATION
Sofia Mysirlaki, Fotini Paraskeva
University of Piraeus, Department of Technology Education and Digital Systems, 80, Karaoli & Dimitriou Str., 185 34 Piraeus, Greece
smyrsila@unipi.gr, fparaske@unipi.gr
Extended Abstract

It is common ground that people in the 21st century require a different set of skills to cope with the complexity and faster pace of life than people did in the past. These are known as "21st century skills" or "next generation skills", and they are all fundamental to the success of knowledge workers (Galarneau & Zibit, 2006). Along these lines, Dede (2000) has identified three specific abilities that are of growing importance:
• Collaborate with diverse teams of people—face-to-face or at a distance—to accomplish a task.
• Create, share, and master knowledge by assessing and filtering quasi-accurate information.
• Thrive on chaos, that is, be able to make rapid decisions based on incomplete information in order to resolve novel dilemmas, and have the "ability to learn from unforeseen situations and circumstances" (Canto-Sperber & Dupuy, 2001).
"Ne(x)t generation" is a term used to describe people who grew up with ICT and who have a whole different set of needs and skills than older generations had. The term stems from "Net Generation", coined by Don Tapscott (1998) to describe the generation that grew up immersed in a digital- and Internet-driven world. Since then, different terms have been used to describe this group, such as "digital natives" (Prensky, 2001), "millennials" (Howe & Strauss, 2000) or the "Google generation" (JISC, 2008). This new generation, such as MMOG users, has through access to the Internet amassed thousands of hours of rapidly analyzing new situations, interacting with characters they do not really know, and solving problems quickly and independently (Beck & Wade, 2004). We argue that educational environments should aim at addressing these needs in order to educate people skilled for the 21st century. Nevertheless, it is striking that many people today are not acquiring these 21st century skills through structured learning environments that anticipate these needs, but rather through various "cognitively-demanding leisure" activities they choose to engage with, including, to a larger and larger extent, videogames (Johnson, 2005) and virtual worlds. On the other hand, it is claimed that educational games do not receive the expected acceptance from students, mainly because the "fun" factor appears to be absent from these applications (Leddo, 1996). Most of them focus on individual offline gaming and bear little resemblance to the games that students play in their free time (such as MMOGs). Behaviouristic educational games (drill-and-practice games) allow the student to interact only with a personal computer, isolating the learner and not allowing interaction with other students. Games based on a cognitive approach (simulations) allow interaction with the classroom; those based on constructivism allow interaction with the constructed world (microworld), while those based on a socio-cultural approach (open-ended environments) allow interaction among groups. At the same time, behaviouristic educational games are applied in the context of a computer, cognitive educational games in a classroom, games based on constructivism within the limits of a constructed world, and socio-cultural games in the wider frame of an online community. We therefore observe an enlargement of the educational context of educational games (Figure 1).
[Figure 1 maps learning theories to the educational contexts of digital educational games: behaviourism and drill-and-practice games on the PC, cognitive theory and simulations in the classroom, constructivism and microworlds in a constructed environment, and socio-cultural theory and open-ended environments in online communities.]
Figure 1. Educational Context of Digital Educational Games
Thus, we underline the need to design educational games as open learning environments (open-ended environments), using elements of commercial online games that seem to interest students, such as MMOGs. However, most educational games are based on behaviouristic models rather than on socio-cultural perspectives that focus on the creation of communities. By contrast, commercial games, and particularly MMOGs, are built on the strength of their online communities and seem to foster next generation skills, such as:
- Collaboration with different teams of people.
- Creation, sharing, and the conquest of knowledge through the evaluation and filtering of information.
- Thriving in chaos (Dede, 2000).
- Critical thinking and problem solving (Burkhardt et al., 2003, p. 49).
Based on this approach, we argue that MMOGs can be an excellent case study in our attempt to understand what features we should include in the design process of an educational game. Thus, we are currently studying the factors of engagement (Brockmyer et al., 2009) and the sense of belonging in a group (McMillan & Chavis, 1986, p. 9) as motivating factors (Malone & Lepper, 1987) of MMOGs, and especially of WoW (World of Warcraft), in order to understand good game design principles that could be used in the design of good educational online games. Moreover, we are developing educational scenarios of well designed educational activities that aim at developing 21st century skills, exploiting the interaction that these games provide, in order to foster collaboration and develop high order thinking (HOT) (Bloom et al., 1956), social, and affective skills for the ne(x)t generation.
References
Beck, J. & Wade, M. (2004). Got game: how the gamer generation is reshaping business forever. Boston, MA: Harvard Business School Press.
Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., & Krathwohl, D.R. (Eds.) (1956). Taxonomy of educational objectives, the classification of educational goals, Handbook I: cognitive domain. New York: Longmans.
Brockmyer, J.H., Fox, C.M., Curtiss, K.A., McBroom, E., Burkhart, K.M., Pidruzny, J.N. (2009). The Development of the Game Engagement Questionnaire: A Measure of Engagement in Video Game-Playing. Journal of Experimental Social Psychology.
Burkhardt, G., Monsour, M., Valdez, G., Gunn, C., Dawson, M., Lemke, C., Coughlin, E., Thadani, V., Martin, C. (2003). enGauge 21st century skills. North Central Regional Educational Laboratory and the Metiri Group.
Canto-Sperber, M. & Dupuy, J.P. (2001). Competences for the good life and the good society. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competences (pp. 67-92). Seattle, Toronto, Bern, Göttingen: Hogrefe and Huber Publishers.
Dede, C. (2000). A new century demands new ways of learning: an excerpt from The Digital Classroom. In D. T. E. Gordon (Ed.), The Digital Classroom. Cambridge: Harvard Education Letter.
Galarneau, L. & Zibit, M. (2006). Online games for 21st century skills. In D. Gibson, C. Aldrich and M. Prensky (Eds.), Games and simulations in online learning: Research and development frameworks.
Howe, N. & Strauss, W. (2000). Millennials Rising: The Next Great Generation. New York: Vintage Books.
JISC (2008). Google Generation. Retrieved 22 February 2008 from http://www.jisc.ac.uk/whatwedo/programmes/resourcediscovery/googlegen.aspx
Johnson, S. (2005). Everything bad is good for you. New York: Touchstone.
Leddo, J. (1996). An intelligent tutoring game to teach scientific reasoning. Journal of Instruction Delivery Systems, 10(4), 22-25.
Malone, T.W., & Lepper, M.R. (1987). Making Learning Fun: A Taxonomy of Intrinsic Motivations for Learning. In R. E. Snow & M. J. Farr (Eds.), Aptitude, Learning and Instruction: III. Conative and affective process analyses (pp. 223-253). Hillsdale, NJ: Erlbaum.
McMillan, D.W., & Chavis, D.M. (1986). Sense of community: A definition and theory. Journal of Community Psychology, 14 (1), 6-23.
Prensky, M. (2001). Digital Natives, Digital Immigrants, Part II: Do They Really Think Differently? On the Horizon, 9(6).
Tapscott, D. (1998). Growing Up Digital: The Rise of the Net Generation. New York: McGraw Hill.
DESIGNING ONLINE EDUCATIONAL GAMES FOR THE NE(X)T GENERATION
Sofia Mysirlaki, Fotini Paraskeva

Ne(x)t Generation Skills
• Collaborate with diverse teams of people—face-to-face or at a distance—to accomplish a task.
• Create, share, and master knowledge by assessing and filtering quasi-accurate information.
• Thrive on chaos, that is, be able to make rapid decisions based on incomplete information in order to resolve novel dilemmas, and have the "ability to learn from unforeseen situations and circumstances" (Canto-Sperber & Dupuy, 2001).
Many people today are not acquiring these 21st century skills through structured learning environments that anticipate these needs, but rather through various "cognitively-demanding leisure" activities they choose to engage with, including, to a larger and larger extent, videogames (Johnson, 2005) and virtual worlds.

Educational Games
Moving from behavioural educational games to more socio-cultural perspectives, educational games need to be designed as open-ended environments where students can interact with social groups.
[Figure: the educational context of digital educational games, mapping behaviourism and drill-and-practice to the PC, cognitive theory and simulations to the classroom, constructivism and microworlds to a constructed environment, and socio-cultural theory and open-ended environments to online communities.]
MMOGs can be an excellent field of research concerning the features that we should include in the design process of educational games that would enhance next generation skills.

Proposition
The design of educational games based on MMOG principles, such as increasing levels of achievement and collaboration and teaming with others, in order to develop high order thinking (HOT) (Bloom et al., 1956), social, and affective skills for the ne(x)t generation.

Research Questions
What are the important factors in players' enjoyment of MMOGs, such as WoW (World of Warcraft)?
Sub-questions:
• What motivates MMOG players?
• Is group participation an important factor in achieving higher levels?
• Is group persistence a critical factor in game engagement?

Current Research
Method: Quantitative analysis
Instrument: questionnaire developed on the basis of:
• Intrinsic motivation factors (Malone & Lepper, 1987)
• Game engagement questionnaire (Brockmyer et al., 2009)
• Sense of belonging in a group questionnaire (McMillan & Chavis, 1986)

Email: missofia1@yahoo.gr
University of Piraeus, Department of Technology Education and Digital Systems, 80, Karaoli & Dimitriou Str., 185 34 Piraeus, Greece
NEW FORMS OF ELECTRONIC GOVERNANCE IN THE SOCIAL WEB: THE GREEK CASE. Eleni-Revekka Staiou Department of Communication and Mass Media, University of Athens, Laboratory for New Technologies in Communication, Education and the Mass Media erstaiou@media.uoa.gr
The purpose of this research is to examine the relationship between the social web and electronic governance, and to investigate the possibility of new forms of electronic governance emerging in the social web, especially in Greece. The research will focus on the influence of the social web on electronic governance. Is this influence positive or negative? Does this new form of web enrich electronic governance, including more possibilities and more people? Or can it become dangerous for citizens and the state if control passes out of the experts' hands? The Greek case is very interesting because, although there is a crisis of institutions in Greek society and people are searching for new ways to organize themselves, they are not looking to the social web for help, even though they use it a great deal in their personal lives. In the study we refer to social networks in general as a way for people to organize themselves, whether to fight for their rights or just for fun. We also review the literature on the social web and electronic governance, especially in Greece, with statistics and examples.

In recent years many websites have appeared with the purpose of "waking up" citizens and giving them the opportunity to express their opinion without needing access to the old media to do so. Only a minority has access to the old media, and as a result only a certain point of view is on air. With the social web this is starting to change: the old media remain, but now there is an alternative. Some examples of such websites follow.

The "Pin Project" (http://www.msfree.gr/pin/): The Pin Project was started by and for citizens, but it is also very useful for the state. There is a map on the site, and anyone can place an electronic pin on a spot of the virtual road where the real road has problems. Other people can thus avoid this road, accidents may be prevented, and the state can more easily find damaged spots in order to fix them.

The website http://www.edopolytexneio.org: This website has the form of a wiki, and anyone with a simple account is able to write, correct or even delete anything on it. The wiki was created by students during the crisis in Greek higher education in order to inform students of the latest news and to help organize actions.

The website ActClick (http://www.actclick.com): ActClick is a Greek social network created to become a central meeting point for all the Non-Governmental Organizations (NGOs) of Greece and the whole world. It offers links to other social websites and space for advertising, and the money from the advertisements goes to the NGOs.

The website Patient Opinion (http://www.patientopinion.org.uk): This website is from the United Kingdom and is a great example of what citizens can do when they are able to organize themselves. On Patient Opinion, all citizens can write comments about the services they receive from the National Health Service (NHS) of England. They comment on hospitals, doctors and facilities, and they offer very important information on such a serious topic.

In the research, apart from the literature review and the field study, examples like those above will be studied in depth, and experiments with similar websites will be attempted in several social settings such as universities, online communities, smaller towns or even villages.
PRESENTATION TITLE: “E-GOVERNMENT PLATFORMS IN SECOND LIFE”
William Prensky, Future Work Institute Inc., New York, New York, USA
Evangelos Syrigos Magister Artium Group Athens, Attiki, Greece
There are five main approaches to e-government platforms:
• Traditional Web-Based (Web 1.0) Platform with Simple Text/Graphics
• Advanced Web-Based (Web 1.0) Platform with Text/Graphics/Video/Social Network features
• Virtual Web-Based (Web 1.0) Platform with Real-time/Conferences/Networking/Exhibits features
• Virtual 3-D Closed System (Web 2.0) Platform, which creates a closed virtual 3D Immersive Environment (example: ProtoSphere)
• Virtual 3-D World Open System (Web 2.0) Platform, which creates an open virtual 3D Immersive Environment (example: FutureWork Island in Second Life)
First, we present the main advantages of the Second Life (SL) platform. These advantages are presented in Table 1.
Table 1: SL advantages for E-Learning and E-Government Applications

In-world Creation
  Allows: Content created by using the tools provided within the SL Grid
  Plus: Many classes available to teach the building and scripting techniques

Built and Owned by Residents
  Allows: Residents retain rights to their digital creations
  Plus: A massive existing base of end users and user-created content that can be leveraged in your own virtual world offering

Social Networking Communities
  Allows: Identifies and links in people with common interests; join "groups" and meet regularly; fosters collaboration within a company
  Plus: Book clubs, concerts, learning communities, new media communities, developer communities, etc.
Second, we present below some photos of how an e-learning or e-government platform works in SL.
[Photos: Amphitheatre for Large Group Meetings; Classroom Exercise on Gender Communications; Library and Media Lab; World Café for Small Group Meetings; Welcome to 3D Virtual Worlds!]
VIRTUALITY: A FERTILE AND PROSPEROUS RESEARCH FIELD Niki Panteli University of Bath School of Management Bath, BA2 7AY, UK N.Panteli@bath.ac.uk
Abstract

Virtual social networks have recently gained overwhelming attention among both academics and practitioners. In this paper, we aim to position research on virtual social networks within the wider virtuality literature and show how it can benefit from earlier research on virtuality. A four-level model is developed for this purpose, showing the different strands of research in the area of virtuality. It is argued, using existing research studies as illustrations, that each level informs and is informed by the others, and thus all levels collectively are vital to our understanding of virtuality. The paper concludes with the identification of emerging research themes in the area of virtual social networks in particular and virtuality in general.
1. INTRODUCTION

Virtuality has been undergoing rapid and fundamental changes. As technology changes, so too do its applications, and our uses and experiences of them change as well. The emergence of new technologies such as Web 2.0 offers individuals opportunities for new ways of interacting, playing, working and learning, and offers companies new ways of promoting and advertising their products and services and of interacting with their customers in general. The new possibilities are exciting, but there are uncertainties and anxieties too, which affect individuals and organizations as well as societies in general. It is within this context, one of simultaneous excitement and anxiety, that this paper aims to discuss virtual social networks within the context of virtuality research. Current public discourse refers to social networking sites and online games, showing the popularity of and fascination with these emerging, multiplayer and interactive virtual networks. However, though there has been exponential interest in virtual social networks within the media, the business community and the academic community, our understanding of these networks remains very limited. In this paper, the primary aim is to position the current enthusiasm for, and research interest in, virtual social networks within the wider virtuality literature and to identify the inter-dependencies between the different streams of research in the field. There is currently overwhelming attention on virtuality and the dramatic effect it is having on an increasingly vast number of people, organizations, communities and societies. For the first time, contemporary societies see the massive attachment of individuals, regardless of location, language, education, gender and age, and their involvement in online social spaces. These current trends have taken research in virtuality to a new level; needless to say, the speed of this change has taken many of us by surprise. We discuss these trends in virtuality research in the section that follows.
2. LEVELS OF VIRTUALITY RESEARCH

It has been posited that with virtuality individual choices are expanded to include numerous possibilities, unrestricted by local practices, with significant effects within and beyond organizations (Panteli and Chiasson, 2008). The literature guides us to the identification of different categories of research on virtuality. In this section, we position these in terms of a hierarchy and therefore call them 'levels of virtuality research'. These levels do not represent stages of growth of virtuality; rather, they are distinct levels of virtuality research that have a presence and sustainability of their own. Some researchers may choose to align themselves with one of these levels and develop an expertise in one category of virtuality research (e.g. virtual teams), but others may prefer to follow the growth of virtuality and study certain topics at different virtuality levels. Figure 1 shows this representation of the four levels along with the two basic dimensions. The first dimension refers to the degree of technology availability that the research takes account of. For example, a study may concentrate on exploring the interactions of a specific type of technology in a virtual environment (e.g. the use of email in conflict escalation), or it may study an environment where several technologies (e.g. email, instant messaging, videoconferencing, chat rooms) are available to a certain group of users. Thus, technology availability within single studies may be either limited or widespread. The second dimension is the nature of the interaction that is experienced among users in the specific virtual environment under consideration. In this way, interaction with the technology could be a single-player or a multi-player one.
Figure 1: Levels of Virtuality Research
[Figure 1 arranges the four levels of virtuality research (CMC, Virtual Teams & Organizations, Online Communities, and Virtual Social Networks) along two dimensions: Technology Availability, from limited to widespread, and User Interactions, from single-player to multiplayer; the first two levels sit within organizations and the latter two beyond them.]
While Levels 1 and 2 include research that primarily takes place within an organizational context, at both intra- and inter-organizational levels, Levels 3 and 4 go beyond the organization to explore wider virtual spaces, communities and networks in the broader sense.

2.1. Level 1: Computer-mediated Communication
Level 1 of virtuality research focuses on specific computer-mediated communication (CMC) technologies and their use and/or impact on single players. CMC is communication that takes place between human beings via the instrumentality of computers (Herring, 2004). It has also been defined as any human symbolic text-based interaction conducted or facilitated through digitally-based technologies (Spitzberg, 2006). This definition includes a variety of technologies such as the Internet; mobile phone text; instant messaging (IM); multi-user interaction (MUDs & MOOs); email and listserv interactions; and text-supplemented videoconferencing (e.g. decision support systems). It requires people to be engaged in a process of message interchange in which the medium of exchange at some point is computerised. The impact of these studies has been vital in developing an appreciation of the social construction of communication media. Over the years, this knowledge has been used, adapted and enhanced in studies of other types of communication media, including videoconferencing, instant messaging and, more recently, blogging, where the role of the user within the mediated environment is investigated.
2.2. Level 2: Virtual Teams and Virtual Organizations
Level 2 is concerned not just with the use of different technologies to enable virtual interactions, but rather looks into how individuals, groups and organizations can work collaboratively in a virtual environment where communication is primarily computer-mediated. This level therefore focuses on group interactions and assumes that in a virtual environment there are various players/members who need to get together and work on a specific project. From the wide variety of definitions of virtual teams, perhaps the most quoted is that of Lipnack and Stamps (1997): "a group of people who interact through interdependent tasks, guided by a common purpose… with links strengthened by webs of communication technologies". In a globally distributed environment, collaboration between dispersed (virtual) teams and team members relies heavily on communication and groupware technology to achieve common outputs (Majchrzak et al., 2000; Panyasorn, Panteli, & Powell, 2008). Studies at this level increase our understanding of the challenge involved in building and sustaining effective virtual collaborations at both the group and organizational levels. Further, such studies warrant further exploration of how soft human factors can be integrated into a better understanding of behaviour in virtual work arrangements and, ultimately, into developing strategies for managing individuals, teams and organizations virtually.

2.3. Level 3: Online Communities
Level 3 is Online Communities, concerned with the formation, development and sustainability of communities within the virtual environment. Fernback & Thompson (1995, p. 8) define an online community as "social relationships forged in cyberspace through repeated contact within a specified boundary or place that is symbolically delineated by topic of interest". The words online and virtual have similar meanings and have been used interchangeably in the literature. Studies at this level take our understanding of virtuality further, into the wider 'space', and give insights into the development of communities beyond organizations. Nevertheless, organizations can still benefit from these insights, as they show the factors that draw individuals into unknown virtual spaces and how they manage to develop social as well as professional relationships using computer-mediated communication (Guru and Keng, 2008). Moreover, by exploring the opportunities and challenges that virtual communities provide for self-presentation and impression formation in human communication, we are able to develop richer views of the different contexts and audiences for identity performance (Merchant, 2006).

2.4. Level 4: Virtual Social Networks
Finally, Level 4 is Virtual Social Networks, concerned with the sudden but rapid emergence of massively multiplayer virtual sites. Virtual social networks – networks used for creating and maintaining social interactions among geographically and globally dispersed individuals – have become an important way of interacting in today's world. The emergence of web 2.0, user-oriented technologies allows massive and mediated networking opportunities to develop online. Miller (2005), for example, refers to the enhanced opportunities for user participation, as opposed to the one-sidedness of web 1.0, where content simply flowed from provider to viewer.
Accordingly, at this level of virtuality research, a massive number of users interact using a widespread range of different technological tools and capabilities. It is therefore not surprising that, in the current age of the global and digital economy and of virtuality, there has been overwhelming interest in online social networks. Bringing people together in this way calls for a change to traditional social interactions, particularly where multiple players are involved. These types of networks show that having presence and being present online matter. When online, people want to be in the same space as other individuals; they like interacting on issues they have a personal interest in and thus enjoy spending time on. There is a fascination with the unknown, but also a fascination with being able to extend one's reach beyond the traditional boundaries of one's physical world. Presence also indicates a sense of adventure, which emerges from the opportunities to create and enact roles that are different from one's own identity.
3. VIRTUAL SOCIAL NETWORKS: EMERGING THEMES
Within only a short period of time, virtuality has rapidly become more prevalent, unveiling modern internet-based societies. This is now leading to a significant change in the manner in which everyday life is conducted and in how our personal and organisational lives are shaped. Tapscott and Williams (2006), for example, argue that the emergence of web 2.0 technologies, along with a new generation of internet users born in the 80’s who are likely to take an active role in creating and editing online content, has created a ‘perfect storm’ that is revolutionizing the internet and changing the way business is done. The implications are that mass collaboration changes the rules of the game, enabling cheaper and easier communication for individuals and organizations alike. These changes will have a definite and long-standing effect on the individuals and other entities involved. For this reason, we do not consider virtual social networks to be a passing fad; technologies may change but our dependence on them will remain. Existing research on virtual social networks provides some clues about the role that these online networks increasingly play in our lives. But many questions still remain. What are the motivations, expectations and preferences of users/players, and how do these change over time? Rheingold (1993) posited that though CMC technology offers a new capability of "many to many" communication, the way and extent to which this capability would be used would depend “on the way we, the first people who are using it, succeed or fail in applying it to our lives”. Based on the increasing number of reports on the uptake of these sites, there is little doubt that we have indeed succeeded in applying such massively mediated and multiplayer capabilities to our lives. We can therefore anticipate that this many-to-many communication is here to stay, as its uptake and use have been overwhelming. Despite this, the speed of change has made it difficult for academics to follow the pace of growth with a sufficient provision of empirical and conceptual studies that would give us insights into these emerging virtual networks and explanations of their growth, operations and impact. These issues are still largely under-researched. Understanding virtual social networks, especially given the increasing commercialization and commodification that surrounds them (Kreps and Pearson, 2008), discloses issues of privacy and data protection. These issues are in further need of debate and discussion. The language we use to talk about different phenomena matters, as it affects the way we think about them and our engagement with them. The same sites we refer to as ‘social networking’ sites (e.g. Facebook) also have different roles; they are companies too (Kreps and Pearson, 2008), with their adopted revenue model being that of ‘advertising’ (Enders et al, 2008). They have directors and employees and, like all other companies, they are profit-making entities. It is not surprising, therefore, that it has been predicted that by 2010 marketers will spend £1,860 billion on online advertising, and venture capitalists have been quizzing web entrepreneurs about their ‘Facebook strategy’ (Reuters 2007). Promoting these sites as ‘networking’ sites but using them for advertising and profit-making raises many ethical issues that require attention from researchers and policymakers alike.
The existing literature on social networking sites places an overemphasis on the active agency of individual users; several studies, for example, discuss the demographic characteristics of users and their behaviour in these online spaces. However, contextual factors also need to be considered. The significance of virtual networks, as Castells (2001) very thoughtfully acknowledged, goes beyond the number of users: “Core economic, social, political, and cultural activities throughout the planet are being structured by and around the Internet, and other computer networks. In fact, exclusion from these networks is one of the most damaging forms of exclusion in our economy and in our culture” (p.3). It follows that social inclusion in, and social exclusion from, virtual social networks is a matter that needs exploration. What are the short-term and long-term implications if members of society passively or actively exclude themselves from these networks? Are they winners or losers? And in what ways? Last but not least, there are different types of virtual social networks. Therefore, these should not be treated as homogenous; some networks may be more collaborative than others, creating a strong sense of identity and belongingness. Other networks may be by-passed, or just used to ‘kill time’. Their heterogeneity should therefore be used as an explanatory factor for participants’ diverse experiences and different degrees of
exposure to the virtual world. Overall, however, each of them deserves particular attention from researchers examining issues of trust, collaboration, identity, power and other relationship dynamics.
CONCLUSIONS
The aim of this paper has been to position research on virtual social networks within the wider virtuality literature and to identify new areas of research. The four-level model is based on the nature and characteristics of virtuality research as it has evolved in the literature over recent years. It should not be seen as limiting the scope of virtuality. The model allows us to pay sufficient attention to the different levels of virtuality research and provides coherence and synergy in our treatment of the topic. It is therefore important to emphasise that though there has recently been an overwhelming number of studies at level 4, the other levels remain important and continue to require attention from researchers. Each level is informed by and informs the others. Accordingly, all levels collectively are vital to our understanding of virtuality. With new technologies emerging all the time, research is required to explore their uses and the challenges and opportunities they bring to our lives. For this reason, virtuality remains a fertile and prosperous research field.
REFERENCES:
Castells, M. (2001), The Internet Galaxy: Reflections on the Internet, Business, and Society, Oxford University Press, Oxford
Enders, A., Hungenberg, H., Denker, H-P. and Mauch, S. (2008), The long tail of social networking: Revenue models of social networking sites, European Management Journal, 26, 199-211
Fernback, J. E. and Thompson, B. (1995), Virtual Communities: Abort, Retry, Failure?, USA: Rheingold
Guru, A. and Keng, S. (2008), Developing the IBM iVirtuality Community: isociety, Journal of Database Management, Oct-Dec, 19, 4, pp. i-xiii
Herring, S. C. (2004), “Slouching Toward the Ordinary: Current Trends in Computer-Mediated Communication”, New Media and Society, 6 (1), 26-36
Kreps, D. and Pearson (2008), Community as Commodity, IFIP WG 9.5 International Conference on Massive Virtual Communities, Lueneburg, Germany, July 1-2, 2008
Lipnack, J. & Stamps, J. (1997), Virtual Teams: Reaching Across Space, Time and Organizations with Technology, Wiley, New York
Majchrzak, A., Rice, R. E., King, N., Malhotra, A. & Ba, S. (2000), Technology adaptation: The case of a computer-supported inter-organizational virtual team, MIS Quarterly, 24, 4, 569-600
Merchant, G. (2006), Identity, Social Networks and Online Communication, E-Learning, 3, 2, pp. 235-244
Miller, P. (2005), Web 2.0: Building the New Library, Ariadne, issue 45, October, http://www.ariadne.ac.uk/miller
Panteli, N. and Chiasson, M. (2008), Rethinking Virtuality. In Panteli, N. and Chiasson, M. (Eds), Exploring Virtuality within and beyond Organizations: Social, Global and Local Dimensions, Palgrave, Hampshire, UK
Panyasorn, J., Panteli, N. and Powell, P. (2008), An Interaction Model in Groupware Use for Knowledge Management, Encyclopedia of E-Collaboration, IGI Global, London, pp. 398-404
Reuters (2007), July 6, VCs Factor in Facebook, Red Herring: The Business of Technology
Spitzberg, B. H. (2006), “Preliminary Development of a Model and Measure of Computer-Mediated Communication (CMC) Competence”, Journal of Computer-Mediated Communication, 11 (2), article 12, http://jcmc.indiana.edu/vol11/issue2/spitzberg.html
Tapscott, D. and Williams, A. (2006), Wikinomics: How Mass Collaboration Changes Everything, NY: Portfolio
VIRTUAL OVERSIGHT OVER TRANSNATIONAL CAPITALISM XBRL metadata and renewed government control of the global economy Dr David Kreps, Information Systems, Organisations and Society Research Centre, University of Salford, UK; d.g.kreps@salford.ac.uk Renfred Wong, Centre for Business, Organisations and Society, University of Bath, UK; rw241@bath.ac.uk
Abstract
As the recent and ongoing crisis in the world financial system shows, unregulated markets are dangerous for the world economy. Through the news media we have all become much more familiar with the virtual world of financial transactions in today’s global economy than ever before. In this paper we address the difference between the ‘strong’ virtual reality of immersive technological systems and the ‘weak’ virtual reality of digital culture in the information age. We examine how what Gramsci might have termed a ‘hegemonic’ shift in the control of the financial markets is unfolding through the mechanism of what Latour would term ‘inscription’ – using the mandatory introduction of XBRL to bring far greater transparency to the accounting of transnational corporations and a greater ability on the part of regulators to know precisely what is going on.
1. VIRTUALITY
Michael Heim defined ‘virtual’ as: "A philosophical term meaning 'not actually but just as if'." (Heim 1998). In that sense virtuality is akin to simulation – defined as a sham or counterfeit of the ‘real’ thing (Merriam-Webster). The word ‘virtuality’ was probably first used in the context of interactive computer systems by Theodore Nelson (coiner of the term ‘hypertext’) back in 1980 (Skagestad 1998), and since that time – especially over the course of the 1990s – the word has been used a great deal. Virtuality on interactive computer systems relies on two new technologies: digital telecommunications, and the graphical user interface (GUI) – be it 2D or 3D. However, it is not just this technological virtuality that is so often talked about in academic and news articles. The more cultural meaning of the term, derived perhaps from 1990s cyberpunk authors such as William Gibson, Michael Marshall Smith, and others, has brought us an understanding of our behaviour in the information age as being engaged in virtual activities almost constantly. Thus cultural virtuality might be characterized as a state of mind peculiar to those for whom various digital virtualities are a day-to-day normality. Heim differentiated between two kinds of virtuality, characterizing one as ‘strong’, the other as ‘weak’. Strong virtuality Heim described as a “technologically determined version of virtuality, where virtual reality is an emerging field of applied science.” (Horrocks p34) Thus Heim’s strong virtuality refers specifically to technology, and a particular kind of technology. This technological virtuality exhibits the typology of ‘immersion’, ‘interaction’ and information ‘intensity’. Immersion “comes from devices that isolate the senses sufficiently to make a person feel transported to another place” (Heim p6-7). Interaction refers to a computer’s ability to change a virtual scene, in which the user is immersed, in synchronisation with the user’s own movement and point-of-view – for example, when moving around in SecondLife or World of Warcraft the backdrop also moves and changes. Information ‘intensity’ defines the degree to which a virtual world can offer users information about their environment. In the ‘strong’ technological virtuality described by Heim, an example of high intensity of information would be robotic telepresence such as surgery-at-a-distance, or “controlling a real robot on Mars within a VR environment which sends a high degree of data from Mars, converted into a high intensity of information for the user.” (Horrocks p34-35)
‘Weak’ virtuality is everything else about the culture of the information age that comprises some kind of ‘as if’, and almost everything – “from automated teller machines which fulfill the function of a bank teller in a virtual (an ‘as if’) mode, to phone sex, e-mail and supposed ‘real-life’ experiences such as window-shopping – has now been dubbed ‘virtual’.” (Horrocks p35) In this paper we will be examining a form of what might be called ‘strong’ or technological virtuality, one which is not as immersive or intense as Heim’s definition would imply, but certainly not as loose as the ‘weak’ definition would imply either. Somewhere in between the strong and the weak, perhaps, lies the world of code through which the web-based systems that we use daily are programmed, and which underpins and interlinks the financial centres of the world.
2. DIGITAL MARKETS, CRASHES, & TRANSPARENCY
One area of our society that has enjoyed a great deal of computerisation in the last thirty years, experienced the forces of globalisation, and arguably been an engine of the forces of globalisation in every other area of society, is our financial markets. Capital, as Castells describes it, “is managed around the clock in globally integrated financial markets working in real time for the first time in history: billion dollars worth of transactions take place in seconds in the electronic circuits throughout the globe.” (Castells p102) This has had a profound effect on what is, as a result, a newly global economy, with each individual country no longer able to escape the effects of the global financial weather. “Global financial flows have increased dramatically in their volume, in their velocity, in their complexity, and in their connectedness.” (Castells p102) This picture of the financial markets as a “global, electronic, out-of-control money-seeking monster… that can suddenly, without warning, be turned into a stampeding electronic herd of a gigantic size, causing companies to go bust, central banks to quiver, national currencies to devalue, people to lose their jobs and money to mysteriously disappear into thin air” (Hasselstrom p69) is one that is very familiar to all of us. Albeit a reasonably accurate caricature, however, this picture tells only half the story – it omits the reality of the “work environment and practices of financial traders, brokers and analysts… set in specific physical, temporal and social contexts… [that are] disconnected from the world outside the financial arena” (Hasselstrom p69) and therefore desperately in need of reconnecting to reality. The obscurity and obfuscation of the complex financial liability-spreading at the sub-prime root of what has become known as the “Credit Crunch” is a clear example of this disconnection. Although the authors of this paper do not put themselves forward as any kind of experts on international banking and finance, the recent events in the financial sector and their interpretation in the media have made lay financiers of us all, expanding our vocabularies and introducing us to whole new taxonomies of trading behaviour. Toxic lending by sub-prime mortgage lenders of 110% of property values to people who were unemployed, as seems to have been undertaken in the United States, was foolhardy enough, but the mortgage securities packages which put eight prime mortgages and two toxic sub-prime mortgages into one package, and which were sold to hedge funds and pension funds as secure investments with the promise of a good return, serve to underline the disconnection from reality these lenders were suffering from. (BBC 2009) Whilst such a disconnection from reality does not in itself imply virtuality, the virtual world within which global financial organisations have come to operate certainly contributes to the obscurity of their operations. Investment banks, hedge funds, and the like, completely unregulated, and yet dealing in numbers and in volumes of deals as large as the banks, have been, according to some, the root cause of the current crisis (Fison 2009) and need to be reined in, and subjected to the same stringent regulation as the banks who have suffered due to their shady practices.
In essence, in the eyes of the Chairman of the UK Financial Services Authority, Lord Turner, it is in no small measure the fact that so little of what has been going on has had to be reported in full to central regulators that has led to the global economic downturn. (Fison 2009) The answer, of course, is that regulation must be extended to cover non-bank financial institutions, and that transparency must be brought to the formerly dark corners of their practices. The financial markets have indeed been rocked not only by the shady practices that brought the credit crunch, but by such scandals as the collapse of Enron in 2001, following which Jeffrey Skilling, Enron's former chief executive, “faced 28 counts of fraud, conspiracy, insider trading and lying to auditors for allegedly trying to fool investors into believing Enron was healthy before the firm crashed” (BBC News 2006). More recently we have seen the Bernard Madoff scandal, with a pyramid scheme that has defrauded the world’s banks of some £33bn (BBC News 2008), and the
collapse in India of Satyam Computer Services, whose “balance sheets were riddled with ‘fictitious’ assets and ‘non-existent’ cash” (Al Jazeera 2009) and whose 53,000 employees may soon need to find alternative employment. The authors of this paper would argue that in fact the technological virtuality of the global financial markets could learn a lesson or two from the roll-out of eGovernment across the world. In parts of the developing world, where a great deal of public sector bureaucracy is subject to corruption, each government form needing a cash incentive to help it through on its way to the next bureaucrat in the chain of command, eGovernment is lauded as a weapon against such corruption, promoting transparency in all transactions between citizen and government, and ensuring that bribery of government officers intervening in the process is reduced to a minimum. “If the right procedures are in place,” Bhatnagar argues, “e-government can make financial or administrative transactions traceable and open to challenge by citizenry,” and indeed a number of “case studies of e-government applications from developing countries report some impact on reducing corruption” (Bhatnagar 2004 p40). Transparency and regulation clearly go hand in hand, as do web technologies and eGovernment. Across the world eGovernment has adopted the international languages of web coding as the language of transaction, e.g. XHTML, XML and other W3C languages aimed at presenting information on the world wide web. In the struggle for transparency, there is a new coding language that looks set to be wielded as an important weapon in the fight – XBRL, or eXtensible Business Reporting Language.
3. XBRL
3.1. FIRSTLY, WHAT IS XBRL?
XBRL is essentially about metadata. Metadata is data about data. It is defined as “the means by which the structure and behaviour of data is recorded, controlled, and published across an organization” (Tozer, 1999 pxix), or as “data that describes the content, format or attributes of a data record or information resource. It can be used to describe highly structured resources or unstructured information such as text documents. Metadata can be embedded within the information resource or it can be held separately in a database.” (Haynes p8). Unlike other web code languages, XBRL can be traced to the efforts of one person: Charlie Hoffman. Mr Hoffman is currently Director of Industry Solutions-Financial Reporting at UBMatrix, a Silicon Valley based company that specialises in providing XBRL-based information exchange solutions. He started investigating how eXtensible Markup Language (XML) could be used for electronic reporting of financial information in 1998, while working as a Certified Public Accountant (CPA) (AICPA 2006). XML is a simple, flexible text format derived from Standard Generalized Markup Language (SGML), originally designed for large-scale electronic publishing (http://www.w3.org/XML/). SGML is an international standard for the definition of device-independent, system-independent methods of representing texts in electronic form, or for describing marked-up electronic text. In other words, SGML is a metalanguage, that is, a means of formally describing a language. This makes SGML a language that embodies encoding conventions for making explicit an interpretation of a text (Sperberg-McQueen and Burnard, 1994). XML is a general-purpose specification for creating custom markup languages. It is classified as an extensible language because it allows the user to define the mark-up elements. XML's purpose is to aid information systems in sharing structured data, especially via the Internet, and to encode documents (W3C, 2006). By leaving the names, allowable hierarchy, and meanings (semantics) of the elements and attributes open and definable by a customizable schema or Document Type Declaration, XML provides a syntactic foundation for creating XML-based markup languages. In the case of XBRL, the semantics are defined by the taxonomies, which are collections of XML schema documents. Mr Hoffman proposed the idea to the High Tech Task Force of the American Institute of Certified Public Accountants (AICPA) and XBRL was born. He and other accounting and technical experts worked together on initial prototypes and business plans for XBRL under the auspices of the AICPA (AICPA 2006). The XBRL International (XII) consortium has since been created to further the development of XBRL. XBRL International is a not-for-profit consortium of approximately 550 companies and agencies worldwide working together to build the XBRL language and promote and support its adoption. This
collaborative effort began in 1998 and has produced a variety of specifications and taxonomies (XBRL.ORG, 2009).
3.2. SO WHAT DOES XBRL DO AND HOW WILL IT HELP BRING ABOUT TRANSPARENCY?
XBRL is an information technology / knowledge management taxonomy (similar to a dictionary) for financial information, not unlike bar-coding. It is a method by which companies will take financial information now reported in a static format and make it interactive. For external users, XBRL will allow financial information to be extracted from the financials and compared instantaneously, potentially saving analysts, regulators and other reporting authorities considerable time while creating a level playing field in terms of information access (Dzinkowski 2008). As such it takes its place amongst the many and varied international standards and technical regulations that have sprouted as part of the significant acceleration of the globalization process over the last decade. This process has not been without challenges, however, including the delays inherent in global standardisation processes, the ways in which certification business models often take precedence over standardisation itself, and the administrative load on companies of dealing with successive waves of regulation. (Ghiladi, 2003) On 14 May 2008 the US Securities and Exchange Commission (SEC) announced its plan to propose a rule that would require both US and foreign filers to provide their financial statements using an XBRL electronic data tagging taxonomy. Mindful of the need to stagger the mandatory requirement for use of the taxonomy, only large filers – about 500 companies with a market capitalisation greater than $5bn (£2.5bn) – are expected to make disclosures in the XBRL format, beginning with fiscal periods ending in late 2008, while foreign filers using International Financial Reporting Standards will follow suit in 2011. Other equity regulators, in China, Japan, Korea, Singapore and Spain, have already mandated XBRL as the primary filing format. In Japan earlier this year, all public companies were required to provide their company reports in XBRL. In Canada and other countries around the world, voluntary filing programs are in place, but their securities regulators are taking a 'wait and see' approach to mandating them (Dzinkowski 2008). The International Accounting Standards Committee (IASC) Foundation has issued the IFRS Taxonomy 2008 – a complete translation of International Financial Reporting Standards into XBRL. This is an extensible business reporting language that facilitates the filing of, access to, and comparison of financial data. The 2008 taxonomy is the first to undergo an extensive external review by a team of experts, including preparers of financial reports and representatives of securities regulators, central banks, financial institutions and software companies (IASB 2008). The Chartered Financial Analyst (CFA) Institute, the global association of investment professionals, has said it will focus on six initiatives from the final report of the Securities and Exchange Commission's Advisory Committee on Improvements to Financial Reporting. These include the joint financial statement presentation project (which aims to separate operating, financing and investing results), international financial reporting standard convergence, and the development of XBRL (CFA Institute 2008).
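To make the idea of ‘interactive’, machine-readable financial data more concrete, the sketch below shows a minimal, hypothetical XBRL-style fragment in which a single reported figure carries metadata identifying the reporting entity, period and currency, and is then located programmatically. The element names, identifiers and figure are invented for illustration and are greatly simplified relative to real XBRL instances and the IFRS taxonomy; the parsing relies only on the standard Java XML APIs.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XbrlSketch {

    // A made-up, heavily simplified XBRL-style fragment: the numeric fact carries
    // metadata (concept name, reporting context, unit, precision) instead of sitting
    // in an unstructured report. A real instance would also reference a published
    // taxonomy schema, omitted here for brevity.
    static final String INSTANCE =
        "<xbrl>"
        + "<context id=\"FY2008\">"
        + "<entity>Example Corp</entity>"
        + "<period><startDate>2008-01-01</startDate><endDate>2008-12-31</endDate></period>"
        + "</context>"
        + "<unit id=\"GBP\"><measure>iso4217:GBP</measure></unit>"
        + "<ProfitLoss contextRef=\"FY2008\" unitRef=\"GBP\" decimals=\"0\">1250000</ProfitLoss>"
        + "</xbrl>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(INSTANCE.getBytes(StandardCharsets.UTF_8)));

        // Because the fact is tagged, software can locate it directly rather than
        // relying on a human reader scanning a static filing.
        XPath xpath = XPathFactory.newInstance().newXPath();
        String profit = xpath.evaluate("/xbrl/ProfitLoss", doc);
        String unit = xpath.evaluate("/xbrl/ProfitLoss/@unitRef", doc);
        String period = xpath.evaluate("/xbrl/ProfitLoss/@contextRef", doc);

        System.out.println("ProfitLoss for " + period + ": " + profit + " " + unit);
    }
}

The particular figure is irrelevant; the point is that once every number in a filing carries such machine-readable metadata, extraction, aggregation and cross-company comparison become routine software queries rather than the manual reading of static reports.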
The Research Information EXchange Mark-up Language (RIXML) Consortium and XBRL International have signed a Memorandum of Understanding outlining their intent for mutual collaboration and cooperation. RIXML.org is a consortium of buy-side firms, sell-side firms, and vendors that have joined together to define an open standard for categorizing, tagging, and distributing global investment research information. The RIXML standard provides capabilities to tag research content so that end users are able to search, sort and filter aggregated research. RIXML utilizes XML. Both parties have identified common ground and opportunities for both organizations: XBRL can provide the financial data structure and definition used by research analysts, and RIXML can provide the container and distribution vehicle for the research information product (XBRL.ORG 2007). While on the one hand a lot of attention has been given to XBRL, a number of the major economies are still inclined to “wait and see”. It seems that the current priority for them is the harmonisation of domestic ‘generally accepted accounting principles’ (GAAP) with International Financial Reporting Standards (IFRS) and/or International Accounting Standards (IAS). Many businesses in the US have expressed concerns that requiring SEC filers to adopt XBRL now will result in increased costs with no improvements to internal processes. The adoption of any new technology almost always takes time. Skeptics need some proof in order to be convinced that XBRL is not just another gimmick that costs money but hardly brings in any benefits.
XBRL, then, in the context of global financial systems, can be seen to be displaying some characteristics of Heim’s ‘strong’ virtuality, admittedly in a ‘weaker’ form than perhaps Heim envisaged, but certainly ‘stronger’ than the ‘weak’ virtuality which he described. The triple typology of immersion, interaction and intensity is revealing here. Immersion, the reader will recall, “comes from devices that isolate the senses sufficiently to make a person feel transported to another place” (Heim p6-7). Interaction refers to a computer’s ability to change a virtual scene, in which the user is immersed, in synchronisation with the user’s own movement and point-of-view – for example, when moving around in SecondLife or World of Warcraft the backdrop also moves and changes. Information ‘intensity’ defines the degree to which a virtual world can offer users information about their environment. XBRL, as a tool of financial regulation, is located within the virtual realm of transnational financial flows, marking up with transparent metadata every detail of each transaction and thereby making each transaction accessible to government software. This is immersive, in that financial regulation, through such metadata, becomes intrinsic to the mechanism of the transaction itself, from which accountancy derives its own data. It is interactive, in that the fact of such immersion immediately changes the global financial scene into one no longer capable of harbouring the kind of obscurity, obfuscation and misleading complexity that brought about the credit crunch; and its information intensity is global and intricate in scale. But there is more to this story, in the very special manner in which XBRL achieves this transparency, through its immersion in the world of virtual finance.
4. INSCRIPTION
Two concepts from Bruno Latour are of particular relevance for this story: inscription (Akrich and Latour 1992) and translation (Callon 1991; Latour 1987). Inscription refers to the way artefacts (including code languages) embody patterns or scenarios of use. This is not to suggest that action is hard-wired into an artefact. Halfway between a perspective that would suggest artefacts determine their use and, contrastingly, a perspective suggesting an artefact is always interpreted and used flexibly, inscription can be used to describe how “concrete anticipations and restrictions of future patterns of use are involved in the development and use of a technology” (Hanseth & Monteiro 1998). According to Latour, there is a process in society of continual negotiation, a social process of aligning multiple and disparate interests. Stability therefore rests on the ability to translate, “that is, reinterpret, re-present or appropriate, others' interests to one's own.” (Hanseth & Monteiro 1998) In this sense, all design is translation. Latour (1991) provides an excellent explanatory example of this aspect of his theory: getting guests to leave their keys behind when leaving a hotel. This is a ‘desired pattern of behaviour’ and the problem is how to inscribe this pattern into the network of hotel guests, keys, staff, and so on. This network is what is termed the ‘actor-network’ in actor network theory, and includes both human and non-human actors. The question is how to inscribe a desired pattern of action, which makes sense to all, and into what? Hanseth and Monteiro (1998) take up the story: “This is impossible to know for sure before hand, so management had to make a sequence of trials to test the strength of different inscriptions. In Latour's story, management first tried to inscribe it into an artefact in the form of a sign behind the counter requesting all guests to return the key when leaving. This inscription, however, was not strong enough. Then they tried having a human door-keeper -- with the same result. Management then inscribed it into a key with a metal knob of some weight. By stepwise increasing the weight of the knob, the desired behaviour was finally achieved. Hence, through a succession of translations, the hotels' interests were finally inscribed into a network strong enough – and sensible enough - to impose the desired behaviour on the guests.” (Hanseth and Monteiro 1998). Thus XBRL is – potentially – a heavy key fob in the processes of the financial markets, a tool mandated by central governments that can inscribe transparency and regulation of financial transactions into the global markets and help us prevent the kinds of scandals that we have seen with Enron, Madoff and Satyam, and the kinds of obscurity and obfuscation of complex financial liability-spreading that brought about the collapse of the sub-prime mortgage market and the subsequent near collapse of the banking system that had bought into it, let alone the global economic downturn that has followed on the heels of the banking crisis.
5. HEGEMONIC SHIFT
This process of inscribing transparency in the virtual world of global financial markets through a global virtual standardization tool – XBRL – constitutes, moreover, a cultural as much as an economic shift. This process, the authors argue, is part of a wider political and social change in society as a whole. One philosopher whose writings have had a profound effect on our understanding of power in society is Antonio Gramsci, who “recognised that social power is not a simple matter of domination on the one hand and subordination or resistance on the other.” Gramsci thus re-evaluated traditional Marxist understandings of modern capitalist societies by arguing that, rather than being determined by underlying economic necessities, culture and politics form a web of relations with the economy in which there is a continual shift of emphasis and influence. For this process he coined the term hegemony. “Rather than imposing their will,” Gramsci maintained, “‘dominant’ groups (or, more precisely, dominant alliances, coalitions or blocs) within democratic societies generally govern with a good degree of consent from the people they rule,” – they achieve hegemony – “and the maintenance of that consent is dependent upon an incessant repositioning of the relationship between rulers and ruled.” (Jones 2006:3). Insidiously, a dominant bloc, in order to maintain its dominance, must be able to “reach into the minds and lives of its subordinates, exercising its power as what appears to be a free expression of their own interests and desires.” (Jones 2006:4). This aspect of unwitting collusion on the part of the ruled with the strategies and tactics of their rulers is perhaps the best known feature of Gramsci’s concept of cultural hegemony; that those strategies and tactics must constantly adapt to the shifting needs of the ruled is perhaps less appreciated. Dick Hebdige’s work on subcultural groups perhaps expresses this dynamic best. A simple example of this approach is that of Punk. In the late 1970s, the wearing of safety pins in one’s ear and of torn fabrics loosely arranged as clothing was a statement of rebellion, of rejection of fashion – similar to Dada earlier in the century (Hebdige 1979). By the early 1980s this ‘look’, however, had become a fashion in itself. What was revolutionary had been absorbed, packaged, and sold back to the revolutionaries. It is in this dominance of the market economy – exercised in today’s globalised world by a supra-national class – that we are perhaps witnessing, in the current crisis, a subtle but profound shift, and XBRL may indeed be one small sign of how this shift is taking place. XBRL, in short, the authors surmise, may indeed be part of an inscription that seeks to return hegemonic control of transnational capitalism to democratically elected governments, after a period in which it has been left very much to its own devices. By encouraging the financial classes to adopt a more efficient and global accounting language, one that will speed up acquisitions and mergers whilst increasing transparency, regulators may indeed be reining in the markets and making traders more accountable. Here are echoes of Roosevelt and the New Deal, which represented at the time a significant shift in political and domestic policy in the U.S., with its most lasting changes being an increased government control over the economy and money supply; intervention to control prices and agricultural production; and the beginning of the federal welfare state.
President Bush and, here in the UK, Prime Minister Brown were forced into the part-nationalisation of many financial institutions in the autumn of 2008. It is no secret that President Obama is extending many of the government controls over the economy that were part of the New Deal.
6. CONCLUSION
In this paper we have discussed the new XML-based eXtensible Business Reporting Language (XBRL) as a form of virtuality, neither strong nor weak in Heim’s terms, which nonetheless carries a Latourian inscription of the transparency in accounting and financial information that US and other government regulators are seeking to impose on transnational corporations in the wake of the “credit crunch” financial crisis of 2008/09. This has been seen, moreover, as potentially the tipping point of a hegemonic shift in power between transnational capitalists and government regulators, drawing accountants and financiers into the grip of regulators by consent, as well as by law. This image of virtuality, as both a medium of transnational capitalism through digital markets and a tool of social policy through financial regulation, opens up a middle path between Heim’s strong virtuality, with its strictly technological and immersive virtual reality, and his weak virtuality, with its loose and associative clustering around all things computerised and digitally mediated. In the realm where XBRL is set to translate its inscription, technology and social policy are united and interlinked, at the nub of a geopolitical struggle between transnational capitalists and government regulators for supremacy over the market.
References AICPA News Releases December, 2006. Retrieved from http://www.aicpa.org/download/news/2006/Charles_Hoffman_Recives_AICPA_Special_Re cognition_Award_12-11-06.pdf Akrich and Latour, (1992) A Summary of a Convenient Vocabulary for the Semiotics of Human and Nonhuman Assemblies, in Bijker & Law (1992) - Shaping Technology / Building Society: Studies in Sociotechnical Change. MIT Press Al Jazeera News Feature (2009) Retrieved from http://english.aljazeera.net/business/2009/01/20091881246504212.html BBC News feature (2006) Retrieved from http://news.bbc.co.uk/1/hi/business/3398913.stm BBC News feature (2008) Retrieved from http://news.bbc.co.uk/1/hi/business/7783236.stm BBC News feature (2009) Retrieved from http://news.bbc.co.uk/1/hi/business/7521250.stm Bhatnagar, S. (2004). eGovernment: From Vision to Implementation. London: Sage Callon, Michel (1991) Techno-economic networks and irreversibility. In Law, J (ed) A Sociology of Monsters: Essays on Power, Technology and Domination. (Pp. 132-165) London: Routledge. Castells, M. (2000). Rise of the Network Society. (2nd ed.) London: Blackwell
CFA Institute Press Releases August 2008. Retrieved from http://www.cfainstitute.org/aboutus/press/release/08releases/20080819_01.html Commens Retrieved from http://www.helsinki.fi/science/commens/terms/virtual.html
Dzinkowski, R. (2008) Accountancy Magazine. (July) Fison, M. (2009) World needs better, global financial regulation, says FSA chief http://www.citywire.co.uk/Adviser/-/news/adviser-news/content.aspx?ID=326935 Ghiladi, V. (2003) The Importance of International Standards for Globally Operating Businesses, J. of IT Standards & Standardization Research, 1(1), 54-56, Idea Group, Hershey PA Hanseth, O. & Monteiro, E. (1998) Understanding Information Infrastructure. Retrieved from http://heim.ifi.uio.no/~oleha/Publications/bok.html Hasselstrom, A. (2003). Real-time, Real-place Market: Transnational Connections and Disconnections in Financial Markets, in Garsten, C. and Wulff, H. (eds.) New Technologies at Work. (pp 69-89). Oxford, England: Berg. Haynes, D. (2004). Metadata for information management and retrieval. London: Facet Hebdige, D. (1979). Subculture: The Meaning of Style. London: Routledge Heim, M. (1998). Virtual Realism. Oxford, England: Oxford University Press Horrocks, C. (2000). Marshall McLuhan and Virtuality. Cambridge, England: Icon Books
IASB Press Releases August, 2008. Retrieved from http://www.iasb.org/News/XBRL/The+IASC+Foundation+publishes+the+IFRS+Taxonomy+Guide.htm Jones, S. (2006). Antonio Gramsci. London: Routledge Latour, B. (1987). Science in Action. Cambridge, Mass: Harvard University Press. Latour, B. (1991). Technology is society made durable. In Law, J. (Ed). A Sociology of Monsters: Essays on Power, Technology and Domination. (pp 103-131). London: Routledge Schiller, D. (2000). Digital Capitalism. Massachusetts, US: MIT Press Skagestad, P. Peirce, Virtuality, and Semiotic. University of Massachusetts – Lowell. Retrieved from http://www.bu.edu/wcp/Papers/Cogn/CognSkag.htm
Sperberg-McQueen, C. M. and Burnard, L. (1994). A Gentle Introduction to SGML. Retrieved from http://www.isgmlug.org/sgmlhelp/g-sg.htm Tozer, G. (1999) Metadata Management for Information Control and Business Success. Boston, MA: Artech House UK Office of the e-Envoy, (2003) eGovernment Metadata Standards
W3C (2006). eXtensible Markup Language (XML) 1.0 (Fourth Edition). W3C Recommendation. 16 August 2006. Retrieved from http://www.w3.org/TR/2006/REC-xml-20060816/#sec-origin-goals Webster, F (2003) Theories of the Information Society, 2nd ed, Routledge, London
XBRL.ORG. (2009). Retrieved 29 January 2009, from http://www.xbrl.org/ XBRL.ORG. (2007) Latest News June 1 2007. Retrieved from http://www.xbrl.org/RIXML/XBRLRIXML-MoU.htm
NON GOVERNMENTAL ORGANIZATIONS IN GREECE GONE VIRTUAL
Symeon Ververidis, Panteio University of Athens, Dept. of Psychology, ver.sym@gmail.com
Iraklis Varlamis, Harokopio University of Athens, Dept. of Informatics and Telematics, varlamis@hua.gr
Abstract
The aim of this research work is to evaluate the penetration of Web 2.0 technologies into Non Governmental Organizations in Greece. For this, we have chosen 56 NGOs and evaluated them using a set of 81 criteria divided into subsets. Each subset of criteria evaluates a different aspect of an NGO (e.g. dissemination of ideas, interconnection with other NGOs, members’ interaction), always through the prism of the effective use of Web technologies. The selected sample comprises mainly Civil Society Organizations (CSOs) and Environmental NGOs that act at a national level and have a web presence. The details of the sample are also discussed in this work. The aim of this work is twofold: first, to present the current state of NGOs in Greece as far as the exploitation of Web and Web 2.0 technologies is concerned and, second, to provide an extended set of criteria that can be used for evaluating the effective use of Web technologies and can be utilized by NGOs to improve their Web presence and support their aims.
INTRODUCTION
The Globalisation trend that dominated the second part of the 20th century gave rise to the importance of NGOs, especially during the last two decades. As a counterbalance to international treaties and organizations, such as the World Trade Organization, that focused on the interests of enterprises, Non Governmental Organizations (NGOs) emphasised humanitarian issues, developmental aid and sustainable development. Gerard Clark (Clark, 1998) defines NGOs as “…private, non profit, professional organizations with a distinctive legal character, concerned with public welfare goals.” NGOs have not been established by governments or agreements among governments and comprise individuals and private associations, rather than states. They develop a wide range of initiatives and actions at local, national, or international level. Especially when governments are not able to provide fundamental social services, such as health or education, due to economic, social, geographical or political reasons, NGOs attempt to fill these gaps. They increase public awareness of a particular cause or set of causes and contribute to democracy by challenging governments and promoting social interests. In several cases, they supplement governmental efforts and seek assistance from government or private organizations. They respond with greater ease and flexibility to local needs, since they are close to grass-root community structures and their participation in the decision-making process is necessary. It is obvious, from the above, that they play a substantial role in the democratization of society. Apart from recreational associations, philanthropic groups or relief organizations that address emergencies, an important category of NGOs are the “transformational NGOs” (Stromquist, 2008) that aim at advancing democracy, providing social justice and constituting a better world in general. Transformational NGOs, which are the subject of this research, have shown the ability to set agendas, negotiate outcomes, confer legitimacy, and implement solutions (Simmons, 1998). Non Governmental Organizations have found in the Web a powerful new tool for recruiting volunteers, disseminating information and building awareness. Thanks to the advances in Information and Communication Technology, NGOs face fewer place and time barriers and are able to expand their activities worldwide and increase their impact on the population. By going virtual, NGOs can enhance and improve their activities and formulate their networks of collaboration at local, regional and international levels. The low cost and world-wide coverage of the Web, together with the collaborative nature of Web 2.0 technologies, make them a perfect solution for the dissemination of ideas and the promotion of NGOs’ activities. Using the new social media, they are in a position to provide a new civil agenda. More and more NGOs’ websites comprise: news updates concerning activities taken by individuals or assemblies, forums and chat rooms for real-time discussions, articles written by
known journalists and political analysts that contribute to public opinion, and information about current debates and social conflicts. NGOs in Greece comprise a dynamic and considerable part of the Civil Society. Their presence is notable and their activities are intense, due to the large financing from both the Greek state and the European Union and to their constitutional consolidation. Nevertheless, cooperation and communication between NGOs and among members are still problematic. Greek NGOs do not have any tradition of collaboration and working collectively. They have rarely fashioned any form of civic networks and it is only recently that they began to exchange information and resources. According to Grigoriou (Grigoriou, 2007), Greek NGOs, almost in their entirety, lack satisfactory bonding mechanisms. Despite the huge number of NGOs in Greece and world-wide and the great publicity of Web and Web 2.0 services, there are currently no studies on the web presence of NGOs. The current research examines whether Greek NGOs make effective use of the new media. Specifically, it goes one step beyond simple online presence and investigates to what degree NGOs establish web-based communication with their members. Apart from the typical accessibility and usability tests, we measure the effectiveness of NGOs’ services and the members’ satisfaction and contribution. The critical question to be answered by this thorough examination is whether Greek NGOs use the internet and the web only as a passive tool for the promotion and dissemination of information, or as an active instrument for interactive communication. The contributions of this work include, among others: a) a detailed set of criteria for evaluating the web presence of NGOs and other organizations that require members’ participation, and b) an evaluation of a subset of Greek NGOs that can be characterized as “transformational”. Such NGOs participate in everyday local life and contribute to the ways in which individuals produce and improve civil society (Hauss, 2003). The following section gives a short introduction to NGOs and an overview of related work in evaluating the web presence of organizations. Section 3 summarizes the state of NGOs in Greece, the criteria we employed in our evaluation and the specific features of our sample dataset. Section 4 illustrates and discusses the outcomes of this research work and section 5 concludes and gives useful suggestions for future work in this area.
RELATED WORK
The term “non governmental organizations” first appeared in article 71 of the UN Charter (1945), which recognized the “consultative status” of national and international NGOs. Recently it also appeared in Greek legislation: in laws N. 2731/1999 on international aid (articles 10 to 17) and N. 2646/1998 on the development of a national system for social welfare (article 12).
NGO TYPES
According to Stromquist (Stromquist, 1998), NGOs are non governmental, volunteer organizations of civil society, which provide services to weak social groups or individuals, develop support programs for local societies and mediate for sustainable development in co-operation with the state or other agents. The NGO global network (www.ngo.org) defines NGOs as volunteer non profit associations, independent from the state and political parties, which work for the public good and allow the participation of all individuals regardless of race, ethnicity, religion and gender. Thus, any independent, non politically affiliated, non profit organization, characterized by its volunteer participation, which aims at providing information services, social support and knowledge, with a well defined social orientation, can be characterized as an NGO. It is important to note that the term “non governmental” refers to all possible forms of governance and not just the state government: local, regional and national government, but also transnational governance (such as the EU institutions) and international governance through regional and international organizations. Several research works have attempted to classify NGOs using different criteria. For example, Michael O’Neill (O’Neill, 1990) organized NGOs in 9 groups according to their thematic orientation: religion, research, pharmaceutics, culture, society, law, world aid, medical prevention, and promotion. He suggested further classification schemes based on the geographical field, the legal status and the type of membership. William Cousins (Cousins, 1991) organized NGOs based on their orientation (charity, service providing, cooperation, public awareness), or their level of impact (from community to international NGOs). A review work by Giannis (Giannis, 2004) resulted in 4 quantitative criteria (geographic area, legal status, membership type, work field) and 5 qualitative criteria (impact, openness, organization and resources, subject and intervention role).
Due to the multitude of NGOs and the intrinsic differences in their aims and operation, it is infeasible to perform an evaluation of all NGOs. For this reason, our research focuses on a specific category of NGOs. The NGOs of our sample operate at a national level, are self funded, open to volunteers and mid-sized in terms of financial and human resources. Their main orientation is to increase public awareness and they intervene in the operation of society. According to Stromquist (2008), these NGOs are called transformational NGOs and are central to everyday local and global life, for they contribute to the ways in which individuals produce and improve civil society. A proper sample must be evaluated using a proper and complete set of criteria. In order to justify the selection of criteria used in our evaluation, we present evaluation works in related fields.
EVALUATION OF WEB PRESENCE
Broadly speaking, the NGO evaluation criteria were adapted from those used to evaluate general internet resources, in much the same way as criteria used to evaluate internet resources had been adapted from those used for print material. The more quantitative studies undertaken by Gibson and Ward (2000) and Norris (2001) provide a method for making comparisons between NGOs and have therefore been used, with minor adaptation, in this study. Apart from Gibson and Ward, there are also other ways to categorize and evaluate a political site, such as the Conway & Dorner (2004) study or the seven general criteria of the Hiser Group. Gibson and Ward's study was designed to evaluate two central aspects of political party websites: their purpose and function, and their effectiveness in delivering these functions. These researchers used a coding scheme that measured fifty different criteria numerically, thereby providing for objective evaluation and comparison. There are numerous works that evaluate web presence for educational (Jin and Yoo, 2004), governmental, healthcare (Zaphiris and Kurniawan, 2001), non-profit or professional organizations (Loiacono and McCoy, 2006). Works of this type present useful tools and metrics for the evaluation of accessibility and usability issues. However, these methodologies are not sufficient to evaluate the social part of web sites, which is defined by publicity and participation (Christou, 2007). The works of Gibson and Ward (2002), Gibson et al. (2003) and Korsten and Bothma (2005), or that of Stieglitz et al. (2008), which compares the impact of social software on NGOs’ members, can be more useful in this direction.
METHODOLOGY
SAMPLE SELECTION
Although the exact number of Greek NGOs is not officially known, they are estimated to be in the thousands. Their number and the publicity they enjoy have significantly increased over the last decade; however, the absence of a transparent institutional framework is obvious. A full mapping of the NGO landscape is imperative and urgent. The first step should be to gather and merge the records kept by the Greek ministries (e.g. EKKE environmental group, 2001) into one complete list (Papaioannou, 2000). The gathering of all the data required for this research drew primarily on the contents of the sites of Greek NGOs that share a common characteristic: through their program of actions they actively try to shape, with the help of the state, the political decisions of our country. Therefore, the NGOs in our sample are key players that do not stop at words or theoretical approaches but act as leaders in their specific sector. Using three selection criteria, the study focused on non-profit civil society organizations that: 1) are institutionally independent from both the State and private enterprise, 2) promote innovative forms of collective action, and 3) use ICTs and the Internet (either through their own websites or e-mail). Traditional organizations such as neighborhood clubs, churches, school cooperatives, political parties and elderly people’s day centers were excluded from our research. The complete list of NGOs we evaluated is available in Appendix A.
CRITERIA SELECTION
As far as the evaluation is concerned, based on the work of Ward and Norris we settled on 6 groups of criteria: a) Descriptive information, b) Structure, c) Content, d) Navigation, e) Morphology, f) Participation. The first group includes the site’s descriptive information: who is the webmaster or/and the administrator, who updates the content, how users communicate their issues, what the site’s
purpose is and whether it is well served, etc. The second group of criteria targets the site’s structure, the effectiveness of the site’s functions and the way the information is presented. The third group is about the usefulness, adequacy, credibility and validity of the content. The fourth group focuses on the functions and the services that define the way users navigate through the site. The fifth group examines the technical data and prescriptions. Finally, the last group of criteria focuses on the website’s reputation, credibility, trust and value, as well as on the participation of NGO members in the site’s content. The techniques we employed for measuring reputation are backlinks (Yan and Zhu, 2008) provided by Google and traffic rankings (Malacinski et al., 2001) provided by Alexa. We also measured members’ participation in blogs, forums and other open discussion services provided by the NGO website. To synchronize data gathering activities for the different methodologies, all activities were scheduled to take place within the same period (01-25 January 2009). Care was taken that data gathering did not take place over a period where heightened interest in the Web site could have influenced the frequency of user visits or satisfaction with the information on the site. The complete list of criteria can be found in Appendix B.
RESULTS
In order to give a clearer view of the web presence of NGOs, we decided to present partial summaries of our results for each group of criteria. Of course, a quantification of the results must be performed. Such quantifications are common in web site evaluation (Zaphiris and Kurniawan, 2001) and help in the presentation and analysis of results. Therefore, we decided to give 1 point for each criterion that is satisfied and 0 for criteria that are not satisfied. In the case of complete absence of information for a criterion, we decided to penalise the NGO with a negative mark (-1). For the criteria that are already quantified (e.g. number of broken links, number of script errors), several thresholds are used to define the positive or negative marks. Consequently, we sum up the score in each sub-group of criteria for every NGO and map scores onto a 5-level scale: high is for NGOs that satisfy more than 80% of the criteria of the sub-group, good is for those NGOs that satisfy 60-80% of the criteria, average for 40-60%, low for 20-40% and very low when less than 20% of the criteria are matched. Finally, we present the percentage of NGOs in each level for the 6 different criteria sub-groups in the charts of Figure 1. The quantification performed is simple and is mainly done for presentation reasons. In order to allow a different quantification of the results in the future, we make the detailed evaluation scores available in Appendix C.
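As a rough illustration of this scoring scheme, the sketch below (in Java, with invented criterion data) marks a single NGO on one hypothetical sub-group of criteria and maps the result onto the five-level scale. The thresholds follow the percentages given above; how the negative marks enter the percentage is an assumption of the sketch rather than a detail specified in the paper.

import java.util.Arrays;
import java.util.List;

public class NgoScoringSketch {

    enum Level { HIGH, GOOD, AVERAGE, LOW, VERY_LOW }

    // Per-criterion marks as described in the text: +1 for a satisfied criterion,
    // 0 for an unsatisfied one, and -1 when the relevant information is completely
    // absent from the NGO's website.
    static int mark(Boolean satisfied) {
        if (satisfied == null) return -1;   // information absent
        return satisfied ? 1 : 0;
    }

    // Map a sub-group score onto the five-level scale: >80% high, 60-80% good,
    // 40-60% average, 20-40% low, otherwise very low.
    static Level level(int score, int criteriaInGroup) {
        double ratio = (double) score / criteriaInGroup;
        if (ratio > 0.8) return Level.HIGH;
        if (ratio > 0.6) return Level.GOOD;
        if (ratio > 0.4) return Level.AVERAGE;
        if (ratio > 0.2) return Level.LOW;
        return Level.VERY_LOW;
    }

    public static void main(String[] args) {
        // Invented marks for one NGO on a hypothetical "Participation" sub-group:
        // true = satisfied, false = not satisfied, null = information absent.
        List<Boolean> participation = Arrays.asList(true, true, false, null, true, false);

        int score = participation.stream().mapToInt(NgoScoringSketch::mark).sum();
        System.out.println("Sub-group score: " + score + "/" + participation.size()
                + " -> " + level(score, participation.size()));
    }
}

With these invented marks the NGO scores 2 out of 6, i.e. roughly 33% of the sub-group, and would therefore be reported as ‘low’.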
[Figure 1 consists of six charts, one per criteria sub-group: Descriptive Information, Structure, Content, Navigation, Morphology - Accessibility and Publicity - Participation.]
Figure 1. Summary of the NGOs’ performance on the different sets of criteria
From the results presented in Figure 1, it is clear that many NGOs’ websites are well structured and provide sufficient descriptive information. Regarding the appropriateness of content and the publicity of the site, only about half of the NGOs score above average (47% and 53% respectively). The results are more disappointing for navigation and accessibility, which are neglected (scores are low or very low) by roughly three out of four NGOs on average. In a second step, we attempted to capture the response of Greek NGOs to the Web 2.0 trend by measuring the community services they offer to their members. Criteria 49-58 check whether the website offers a particular Web 2.0 service or not. The results concerning the penetration of Web 2.0 services into NGO websites, shown in Table 1, are very disappointing. Forums, RSS feeds and e-voting are the most popular services among NGOs, but even these are available on only around one in ten of the sites. Given that the NGOs in the sample are strongly connected with public awareness and with people’s participation in decision making and in acting in common, the results are all the more disappointing.
Web 2.0 service        NGOs offering the service
Forum                  10%
Blog                    3%
Podcasting              0%
Wiki                    0%
RSS Feed                7%
e-Magazine              7%
Citizens’ Panel         0%
e-Voting               12%
e-Poll                  3%
e-Petition              0%
Table 1. Percentage of NGOs offering a particular Web 2.0 service
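The percentages in Table 1, as well as the breakdown in the next paragraph of how many sites offer none, one, or more than one service, can be derived mechanically from the yes/no answers to criteria 49-58. The following Python sketch shows one way to do this; the variable and key names are hypothetical and chosen only for illustration.

# Illustrative sketch: tallies of the kind behind Table 1, assuming
# `web2_answers` maps each NGO name to a dict of booleans for criteria 49-58.
WEB2_SERVICES = ["forum", "blog", "podcasting", "wiki", "rss_feed",
                 "e_magazine", "citizens_panel", "e_voting", "e_poll", "e_petition"]

def web2_summary(web2_answers):
    n = len(web2_answers)
    # Share of NGOs offering each individual service (the Table 1 column)
    penetration = {
        service: sum(bool(ans.get(service)) for ans in web2_answers.values()) / n
        for service in WEB2_SERVICES
    }
    # Distribution of sites by number of services offered (none / one / more than one)
    counts = [sum(bool(ans.get(s)) for s in WEB2_SERVICES) for ans in web2_answers.values()]
    distribution = {
        "none": sum(c == 0 for c in counts) / n,
        "one": sum(c == 1 for c in counts) / n,
        "more than one": sum(c > 1 for c in counts) / n,
    }
    return penetration, distribution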
In total, 74% of the sites completely ignore Web 2.0 services, 18% offer only one of the above services and 8% offer more than one. These poor results are not only a Greek phenomenon: the low usage of Web 2.0 characterizes the developing countries more generally. Political parties and political organisations of every kind throughout the world suffer from the same illness: solitary objectives and actions. Nevertheless, every rule has its exceptions. Nicolas Sarkozy’s political office in Second Life, Angela Merkel’s and David Cameron’s weblogs, and Obama’s recent electoral victory, which drew heavily on virtual social networks, are notable examples. In Greece, volunteer-based virtual communities have shown their will and strength on only a few occasions, most notably after the shooting of a teenager by a policeman and during the Peloponnese fires two years ago. Compared to the sites of NGOs that act internationally, such as Greenpeace or Doctors Without Borders, our sample (and, by extension, political organizations as a whole) falls short as far as the usage of Web 2.0 is concerned. Of course, this is to be expected, given the enormous differences between them in target group, financing methods, and so on. Things are very different when we examine corporations or even small companies, because they tend to use most of the Web 2.0 tools more effectively. This, however, lies outside the scope of our research, since the necessary voluntary character is absent: the content of these blogs, forums, etc. is updated by employees. As already mentioned, the aim of this research was not only to deliver a thorough analysis of the use of Web 2.0 technologies by NGOs in Greece, but also to specify a set of criteria that can be used in future work and applied not only to Greek transformational NGOs but also to other types of organizations with an active virtual presence. In other words, this is only a starting point; further research is needed to fully develop our understanding of this area.
CONCLUSIONS
This research attempted to assess NGOs’ websites in terms of the usability of their design and content and the practicability of their services. The use of Web 2.0 social tools in the service of NGO members has also been studied. The research focused on a subset of Greek NGOs that capitalize on members’ participation and contribution and on the immediate dissemination of news and events, since our intuition was that these organizations would have invested more in the Web as a communication and collaboration medium. We employed an extensive set of criteria covering a wide range of usability and sociability aspects and carefully assessed our sample. The analysis involved both descriptive and inferential features and revealed that the websites are used mainly as static promotional tools rather than as means of promoting members’ participation. Although the results are not very encouraging as far as the virtualization of NGOs through the use of new media is concerned, we believe that there is still room for Greek NGOs to expand and adapt to the new technologies. Our next plans include extending this research to more NGOs with different features and orientations. It would also be very interesting to re-evaluate our sample after a period of time and compare the results.
REFERENCES
Christou, C. (2007), The role of NGOs in today's governance complex: the contribution of technological innovation (in Greek: «∆ιευρεύνηση του ρόλου των ΜΚΟ στο σύγχρονο πλέγµα διακυβέρνησης: η συµβολή των τεχνολογικών νεωτερισµών»), Diploma Thesis in Computers and Electronics Engineering, National Technical University of Athens, retrieved January 15, 2009 from http://artemis.cslab.ntua.gr:80/Dienst/UI/1.0/Display/artemis.ntua.ece/DT2007-0093
Clark, G. (1998), Non-Governmental Organizations (NGOs) and Politics in the Developing World, Political Studies 46:36-52.
Conway, M., Dorner, D. (2004), An evaluation of New Zealand political party Websites, Information Research, Vol. 9 No. 4.
Cousins, W. (1991), Non-Governmental Initiatives, in ADB, The Urban Poor and Basic Infrastructure Services in Asia and the Pacific, Asian Development Bank, Manila.
EKKE environmental group (1997), Research on Ecological-Environmental NGOs (in Greek: Οµάδα περιβάλλοντος του ΕΚΚΕ, «Έρευνα για τις Οικολογικές-Περιβαλλοντικές MKO»), retrieved January 15, 2009 from http://www.ekke.gr/estia/gr_pages/mko_po/static.htm
Finquelievich, S. (2004), Community Networks Go Virtual: Tracing the Evolution of ICT in Buenos Aires and Montevideo, Instituto de Investigaciones Gino Germani, Faculty of Social Studies, University of Buenos Aires, Argentina.
Giannis, N. (2004), Participatory democracy and civil society: the European path (in Greek: «Συµµετοχική δηµοκρατία ή κοινωνία πολιτών: η ευρωπαϊκή πορεία», Φιλελεύθερη Εµφαση, Ιανουάριος-Φεβρουάριος-Μάρτιος 2004, τεύχος 18), retrieved January 15, 2009 from http://www.nicosyannis.gr/index.php?option=com_content&task=view&id=16&Itemid=7
Gibson, R., Ward, S. (2000), A proposed methodology for studying the function and effectiveness of party and candidate Websites, Social Science Computer Review, 18(3).
Gibson, R., Ward, S. (2002), Virtual campaigning: Australian parties and the impact of the internet, Australian Journal of Political Science, 37(1).
Gibson, R., Margolis, M., Resnick, D., Ward, S. (2003), Election campaigning on the WWW in the USA and UK: a comparative analysis, Party Politics, 9(1).
Grigoriou, G. (2007), Interconnecting NGOs – common platforms for consultation and action (in Greek: «∆ικτύωση ΜΚΟ - κοινές πλατφόρµες διαβούλευσης και δράσης»), 1st Panhellenic Consultation on a Modern Institutional Framework for the voluntary sector and NGOs, retrieved January 15, 2009 from http://sybedrio2007.bee.gr/data/speeches/Grhgoriou.doc
Hauss, C. (2003), Civil Society, in Beyond Intractability, Eds. Guy Burgess and Heidi Burgess, Conflict Research Consortium, University of Colorado, Boulder, retrieved January 15, 2009 from http://www.beyondintractability.org/essay/civil_society/
Korsten, H., Bothma, T.J.D. (2005), Evaluating South African government Web sites: methods, findings and recommendations (Part 2), South African Journal of Information Management, vol. 7, no. 3.
Loiacono, E.T., McCoy, S. (2006), Website accessibility: a cross-sector comparison, Universal Access in the Information Society Journal, Vol 4(4): 393-399.
Malacinski, A., Dominick, S., Hartrick, T. (2001), Measuring Web Traffic, IBM developerWorks, retrieved January 15, 2009 from http://www-106.ibm.com/developerworks/web/library/wa-mwt1/
Norris, P. (2001), Digital divide: civic engagement, information poverty, and the internet worldwide, Cambridge University Press, Cambridge.
O’Neill, M. (1990), The Third America: the Emergence of the Non-Profit Sector in the United States, San Francisco: Jossey-Bass.
Papaioannou, K. (2000), Greek NGOs, a landscape in the mist (in Greek: Ελληνικές Μη Κυβερνητικές Οργανώσεις: τοπίο στην οµίχλη, ∆ικαιωµατικά, τεύχος 5), retrieved January 15, 2009 from http://www.greekhelsinki.gr/dikaiomatika/05/stiles/epwnymws/01.htm
Simmons, P. J. (1998), Learning to live with NGOs, Foreign Policy 112, retrieved January 15, 2009 from http://www.globalpolicy.org/ngos/intro/general/1998/simmons.htm
Stieglitz, S., Schneider, A.-M., Lattemann, C. (2008), The Impact of Social Software on Customer Decision Making Processes, in Proceedings of the 8th Annual Conference of the Academy of E-Business (IAEB), San Francisco, USA.
Stromquist, N. (1998), NGOs in a new paradigm of civil society, Current Issues in Comparative Education, Vol. 1.
Stromquist, N. (2008), Revisiting Transformational NGOs in the Context of Contemporary Society, Current Issues in Comparative Education, Vol. 10.
Yan, E., Zhu, Q. (2008), Hyperlink analysis for government websites of Chinese provincial capitals, Scientometrics 76:315-326.
Yoo, S., Jin, J. (2004), Evaluation of the home page of the top 100 university websites, in Proceedings of the Academy of Information and Management Sciences, Volume 8, Number 2.
Zaphiris, P., Kurniawan, S.H. (2001), Usability and Accessibility Comparison of Governmental, Organizational, Educational and Commercial Aging/Health-Related Web Sites, WebNet Journal: Internet Technologies, Applications, Issues, 3(3), 45-52, AACE Press, Norfolk, USA.
APPENDIX A - NGO LIST
1. ΑΓΟΡΑΙ∆ΕΩΝ: www.agoraideon.eu
2. Αναπτυξιακή Συνεργασία και Αλληλεγγύη: www.dcsngo.org.gr
3. Αντιπυρηνικό Παρατηρητήριο Μεσογείου: http://www.manw.org
4. Ατλαντική-Ευρωπαϊκή Ένωση: http://gaaec.org
5. Γέφυρες Επικοινωνίας: http://www.noimatiki-ge.gr
6. Γραφείο της Κοινωνίας των Πολιτών: http://www.civilsociety.gr
7. ∆ιεθνές Αναπτυξιακό Κέντρο: http://www.dak.gr
8. ∆ιεθνής Οργάνωση Βιοπολιτικής: http://www.biopolitics.gr
9. ∆ιεθνής ∆ιαφάνεια Ελλάς: http://www.transparency.gr
10. ∆ιεπιστηµονικό Ινστιτούτο Περιβαλλοντικών Ερευνών (∆ΙΠΕ): http://www.dipe.gr/indexgr.htm
11. Εθνικό Κέντρο Περιβάλλοντος και Αειφόρου Ανάπτυξης: http://www.ekpaa.gr
12. Εθνικό Συµβούλιο Νεολαίας: http://www.esyn.gr
13. Ελληνική Επιτροπή ΜΚΟ για την Ανάπτυξη: http://www.dev-ngos.gr
14. Ελληνική Επιτροπή ∆ιεθνούς ∆ηµοκρατικής Αλληλεγγύης: http://www.eedda.gr
15. Ελληνική Εταιρεία για την Προστασία του Περιβάλλοντος και της Πολιτιστικής Κληρονοµιάς: http://www.ellinikietairia.gr
16. Ελληνική Εταιρία ∆ιοικήσεως Επιχειρήσεων: www.eede.gr
17. Ελληνικό Ινστιτούτο Ανάπτυξης και Συνεργασίας: http://www.entre.gr
18. Ελληνικό Ινστιτούτο Επικοινωνίας: http://www.hli.gr
19. Ελληνικό Κέντρο Ευρωπαϊκών Μελετών και Ερευνών: http://www.ekeme.gr
20. Ελληνικό Κέντρο Προώθησης του Εθελοντισµού: http://www.anthropos.gr
21. Ελληνικός Κόµβος Επαγρύπνησης για ένα Ασφαλέστερο ∆ιαδίκτυο: http://www.saferinternet.gr
22. Ελληνικό Παρατηρητήριο των Συµφωνιών του Ελσίνκι (ΕΠΣΕ): http://cm.greekhelsinki.gr
23. Ελληνικό Περιφερειακό Αναπτυξιακό Κέντρο: http://www.hrdc.org.gr
24. Ένωση Πολιτών για την Παρέµβαση: http://www.paremvassi.gr
25. Ένωση SD-MED: http://www.sd-med.org
26. Ευρωπαϊκή Έκφραση: http://www.ekfrasi.gr
27. Ευρωπαϊκή Κινητικότητα: http://www.geniusmobility.eu
28. Ευρωπαϊκή Προοπτική: www.europers.gr
29. Ευρωπαϊκό Κέντρο ∆ηµοσίου ∆ικαίου: www.eplc.gr
30. Ευρωπαϊκό Πολιτιστικό και Ερευνητικό Κέντρο Αθηνών: http://www.epeka.gr
31. Ευρωπαϊκό Forum Εθελοντικών Οργανώσεων: http://www.olimazi.eu
32. Έργο Πολιτών: http://www.ergopoliton.gr
33. Ινστιτούτο Μεταναστευτικής Πολιτικής (ΙΜΕΠΟ): http://www.imepo.gr
34. Κέντρο ∆ιάδοσης Νέων Τεχνολογιών Magnum Opus: http://www.magnumopus.gr
35. Κέντρο Έρευνας και ∆ράσης για την Ειρήνη (ΚΕ∆Ε): http://www.kede.org
36. Κέντρο Ερευνών Ρίζες: http://www.roots-research-center.gr
37. Κίνηση Πολιτών: http://www.kinisipoliton.gr
38. Οµοσπονδία Εθελοντικών Μη Κυβερνητικών Οργανώσεων Ελλάδος: http://www.ngofederation.gr
39. Περιφερειακό Κέντρο Πληροφόρησης του ΟΗΕ (UNRIC): http://www.unric.org
40. Πρότυπη Πόλη: http://www.protypipoli.gr
41. Σύλλογος Νέων για τα Ηνωµένα Εθνη – Ελλάδα: http://www.ysun-greece.org
42. Access2democracy: http://www.access2democracy.org
43. ANCE: http://www.ance-hellas.org
44. Ecocity: http://www.ecocity.gr
45. Ecumenica: http://www.ecumenica.net
46. ECOWEEK: http://www.ecoweek.gr
47. EURONET: http://www.euronet.org.gr
48. Euroscience: http://www.euroscience.gr
49. Fair Trade Hellas: http://www.fairtrade.gr
50. Gov2u: http://www.gov2u.org
51. Inten-Synergy: http://www.intensynergy.org
APPENDIX B – CRITERIA
AXIS 1: DESCRIPTIVE INFORMATION
1. Is it clear who maintains the site?
2. Is it clear who the site administrator is?
3. Is it clear who updates the content?
4. Is there any report about when the site was created?
5. Is there any report about the last update?
6. Is it clear who the site sponsor is?
AXIS 2: STRUCTURE
7. Can the users express their opinion and leave a comment?
8. Is the information on the home page well organized?
9. Is the site’s information well organized?
10. Are the site’s functions well organized?
11. Is the site’s representation well organized?
AXIS 3: CONTENT
12. Is there a logo on the home page?
13. Can the user read the articles of association?
14. Can the user read the organization chart of the NGO?
15. Can the user read the biographies of the members?
16. Is the program of actions posted?
17. Is there any description of these actions?
18. Is there any description of how these actions will be achieved?
19. Are the sources of information noted?
20. Are there any written speeches or interviews?
21. Are there any audio speeches or interviews?
22. Are there any video speeches or interviews?
23. Are there any links to similar Greek sites?
24. Are there any links to similar non-Greek sites?
25. Are there any links to non-similar Greek sites?
26. Are there any links to non-similar non-Greek sites?
27. Is there a wallboard?
28. Are there any pictures?
29. Are there any graphic files?
30. Are there any video files?
31. Is 3D animation used?
32. Is information labelling used?
33. Can you subscribe to a newsletter?
34. Are there any commercials?
AXIS 4: NAVIGATION
35. Are there any menus?
36. Do the links open in the same window?
37. Do the links open in a new window?
38. Is there a sitemap?
39. Is the menus’ function effective?
40. Is there an internal search engine?
41. Is the internal search engine’s function effective?
42. Is there an external search engine?
43. Is the external search engine’s function effective?
44. Can you go back to the homepage at any time?
45. Are blind and deaf users supported?
46. Is obsolete technology supported?
47. Can the user change the site’s language?
48. How many languages are available?
49. Is there a forum?
50. Is there a blog?
51. Is there any podcast broadcasting?
52. Is there a wiki?
53. Is there an RSS feed?
54. Is there an e-magazine?
55. Is there a citizens’ panel?
56. Is there an e-voting facility?
57. Is there an e-poll?
58. Is there an e-petition?
AXIS 5: MORPHOLOGY - ACCESSIBILITY
59. Can the user communicate with the NGO in conventional ways (tel., fax, mail)?
60. Can the user communicate with the NGO in non-conventional ways (chat)?
61. Is the transfer speed from one page to another satisfactory?
62. Is the loading time of the multimedia files satisfactory?
63. Is the sitemap’s function effective?
64. How many broken and dead links are there?
65. Can the user change the size of the fonts?
66. Does the user have to scroll vertically or horizontally?
67. Is there any interactivity?
68. Is a printer-friendly view available?
69. Are there any picture thumbnails?
70. Can the user download files in .pdf or .doc format?
71. Does the site function as a portal?
72. How many script errors are there?
AXIS 6: PUBLICITY - PARTICIPATION
73. How many posts are there?
74. How many messages are there?
75. How many users visit the site on a daily basis?
76. How many Google backlinks are there?
77. Is there a reference in the Greek version of Wikipedia?
78. How many members does the NGO have?
79. Is there a profile on Facebook?
80. How many visitors are there?
81. What is the site’s traffic rank on Alexa?
APPENDIX C – DETAILED RESULTS
CULTURAL AND SOCIAL INHIBITORS TO THE ADOPTION OF VIRTUALITY IN THE WORKPLACE
Elizabeth Duncan
Remote Work Management, Edinburgh
www.remote-work.com
Abstract
Understandings of virtuality differ. Virtual organizations have operated for many years. The difference in this century is that technology has created environments for distributed, supported work teams, regardless of distance. The benefits to individuals and to organizations are substantial, yet the world of work has not changed dramatically. Embedded traditional office culture has inhibited the speed of adoption of virtual practice. Technology alone will not bring about change. Drawing on practical experience of operating a virtual company for over fifteen years, the author proposes several reasons for the slow adoption of virtuality by management in general. The inhibitors to change are seen to be lack of trust, the assessment of work, and social organization. The author suggests ways in which the current economic and environmental climates might speed the adoption of virtual practice in the workplace.
1 BACKGROUND
Virtual environments hold both fascination and threat for present-day working life. Virtual working practice, and the cultural shift which many of the developments have assumed in current management understanding, are still nebulous concepts, with strengths and drawbacks. Our understandings of virtuality differ so greatly that it is worth asking what we mean in the context of organizations and information systems. A virtual organization was very concisely defined by Lee Pang (2001) as: “a flexible network of independent entities linked by information technology to share skills, knowledge and access to others' expertise in non-traditional ways: a form of cooperation involving companies, institutions and/or individuals delivering a product or service on the basis of a common business understanding. The units participate in the collaboration and present themselves as a unified organization.” Pang makes the point that virtual organizations have always existed: traveling sales staff, outsourced staff and staff working at home. However, what is new is that technology has made it much easier to support distributed work teams. Barriers of distance and time have been overcome by technology. Larsen and McInerney (2002) predicted this in their statement that “Forming virtual organizations (VOs) is a new workplace strategy that is also needed to prepare information, technology, and knowledge workers for functioning well in inter-organizational teams.” What has also been understood and predicted by early writers such as Charles Handy (1995) is that virtual working depends to a great extent on trust, and that organizations must therefore take this into account in the changing culture which virtuality will promote. Handy states that “Trust is the heart of
the matter. That seems obvious and trite, yet most of our organizations tend to be arranged on the assumption that people cannot be trusted or relied on, even in tiny matters”. This theme has been elaborated on since by Jarvenpaa and others (Jarvenpaa et al, 1998; Jarvenpaa & Leidner, 1999), picked up again more recently by Yoo and Alavi (Yoo & Alavi, 2004) in relation to leadership of virtual teams, and by Julsrud on the development of trust relations in distributed work groups (Julsrud, 2008).
2 THE ‘NEW’ TECHNOLOGY
It was around the turn of the century, from 2000 onwards, that broadband became universally understood and talked about. The UK at that time had invested in ISDN, the technology of which is unimportant in this context, as the broadband message was sweeping in from the continent and has now virtually overtaken it. The difference which both of these technologies made to the information world, however, was in the fast transfer of huge quantities of data. It is difficult to imagine now what the world was like before large data transfer was possible. All of the developments in large-scale file transfer, graphics exchange and online video which we now take for granted were only possible with broadband technology. It is no exaggeration to claim that the advent of broadband was potentially as great a revolution in the world of work as was the industrial revolution. The effect of universal access to broadband was to revolutionize how we viewed technology and data transfer, and it was in fact the birth of the truly virtual organization. Pang (2001) had already seen that organizations would no longer be constrained by traditional barriers of place and time. They would become more flexible – the adjectives he used in fact were ‘dynamic’ and ‘restless’. Virtual organizations would support dynamic changes to work environments and processing structures. Others working in the information area at the time saw that the new technologies could be applied to revitalizing communities and restoring family living and work-life balance by enabling the focus of work to travel out of the traditional workplace. Organizations would become flexible in communication patterns; dynamic in their attitude to willingly change products and services; independent of geographic dispersion of individuals. The three factors identified by Pang (2001) would lead toward higher levels of innovation and creativity.
2.1 Development of technologies
The benefits of working in this new, flexible, virtual environment were seen to be enormous, both for employers and the employed. The standard picture of an organization based on these technologies would be of a core facility, with an intelligent network of location-independent workers, strongly linked to the core through email, online file transfer and video links. Office space, car parking space and commuter travel would all be reduced dramatically, so solving problems of inner city congestion, weather hazards, long-distance travel and road expansion programmes overnight. Travel would be used for social and leisure purposes. Inner city travel would be made easy for ‘essential’ workers. Everyone would benefit. Companies would see huge leaps in productivity through the saving of commuter travel time. Sickness absence, which currently apparently contributes several million lost days to UK industry per year, would diminish, for instance through the non-spreading of infection or absence due to care of the elderly. International cooperation would no longer be dependent on air miles, so saving the planet by helping to reduce carbon emissions. So why are we not all doing it? An early assumption in the development of these technologies was that a virtual organization would inevitably require supervision, which would need to be built in to the distributed systems. The early BT experiments in the Highlands of Scotland were based not on trust, but on direct electronic supervision, leading to slow adoption of what in fact became very cumbersome and expensive desk-top solutions.
Handy’s approach (Handy, 1995) was to point out that if one could not trust employees in the workplace, whether one could see them or not, there was something basically flawed in the organization. Virtual working simply highlighted the need for a concept which should already be part of any efficient or well-run organization. So while the technology developments enabled work to be carried out from many dispersed locations, the quality of the work and concepts of loyalty and trust which applied in the workplace would equally apply. Concentration on the technology, however, has tended to diminish the emphasis on trust which, as we will develop later, is not necessarily always fully understood.
2.2 Pace of change
Many of us who have worked in the virtual world since these early ‘heady’ days ask ourselves – since there are so many benefits for individuals and for organizations, why hasn’t the world of work changed? Or why is it changing so slowly? Where is this new world?
The office culture is so embedded in many organizations that change has been much slower than many had predicted in the early days of the technology revolution. A high-level IT Conference in Seattle in 1990 – CHI’90 (Carrasco & Whiteside, 1990) – demonstrated virtuality in all its most exciting aspects – headsets and electronic gloves which allowed one to move around and manipulate virtual environments, playing virtual instruments, remote collaboration between computer screens. Yet visit most large organizations today and be greeted by the same structure of reception areas, individual desks and managers’ offices which, apart from a proliferation of computer terminals, would have looked no different from the same organization twenty years ago. One of the reasons, I contend, is that operating in a virtual environment requires a much greater emphasis on collective interaction, cooperation and trust. Body language is difficult to simulate electronically – ‘smileys’ are a very poor substitute for a casual meaningful gesture. Without understanding or addressing the need for increased and continuous interaction electronically, we lose basic communication. Individuals working in this virtual environment can become isolated or overlooked for professional advancement, because managers still work on the principle of needing to see people at work. Organizations will only become truly virtual when collaboration and trust are an integral part of their collective purpose, and when these are adequately addressed in virtual environments. Going back to CHI’90 – this conference was a jumping-off point for a whole new understanding of the world and how we live in it. Things which could be seen and touched could now be seen but not touched; things which could appear to be touched could react as though they had been; and a world of virtuality was opened up. We stood in space and rang bells, played music, without ‘touching’ an instrument; we reached out to grasp an object displayed in front of our eyes which ‘wasn’t there’. We played building games on computer screens with other unseen users, and for the first time realized that the ‘real’ world was not totally in our control any more. To a modern-day audience this is totally understandable and accepted practice, but to an audience used to manipulating objects by hand, this was quite alien and exciting. The advances challenged much of our conventional thinking. An organization accustomed to managing people in ordered office environments is equally challenged. Not only the environment has to change, but the management culture and assessment of what constitutes work. Many organizations have embraced the new technology, but have not changed management style or culture to accommodate it.
3 INHIBITORS
3.1 Trust
It has been shown (Panteli & Duncan, 2004), through analysis of email trails from a large virtual work project involving several teams internationally over a period of time, that trust is a large element in virtual working. At one level, however, it is astonishing how much misuse of computer space and time is possible in a supposedly supervised office environment, and how little difference there is between the misuse of computer time in an office or elsewhere. In other words, in computer terms, trust is now as essential in any environment, since supervision of computer working is not just a matter of looking across the office to see if someone is at their desk. Physical supervision, however, seems still to be very much engrained as part of managerial control. We still hear, when introducing the idea of virtual working to companies, comments like ‘in a virtual environment, how will I know what employees are doing – whether they are off playing golf or looking after the children when they are supposed to be working?’. Assessment of work by appearance is unfortunately still part of management culture. The International Journal of Networking and Virtual Organizations (IJNVO) produced a Special Issue in 2008 on ‘Trust and Virtual Teams’. Two papers in particular in this issue (Julsrud, 2008; Mumbi & McGill, 2008) explore ways in which trust can be developed and maintained, and recommend strategies for economic and project management success as a result.
3.2 Output
A second major inhibitor to the adoption of virtuality is the practice of assessing work done by the number of hours spent, or number of hours in the office. The valuation of work as product rather than process must change when managing in a virtual environment. The adage of ‘work is what we do, not where we go’ is hard to change in management culture. I compare the concept of ‘going to work’ with ‘going to the gym’ – it’s not the going there that counts! Yet management still believe that in order to implement homeworking, for instance, which is an important form of virtual working, they must first implement some system of registering the number of hours worked. Management culture is so attuned to assessing numbers of hours that this change may require large organizational rethinking. It permeates through finance, contract negotiation, payment and company image. On the other hand, as Handy suggests (1995), “few are going to be eager advocates of virtuality when it really means that work is what you do, not where you go”. It is perhaps not only managers, but employees who have become used to using ‘work time’ for personal activities. This doesn’t need to change, as serendipity and social interaction are often productive elements in work, but it does need to be understood.
3.3 Socialization
One other major inhibitor which we encounter repeatedly is the belief in the ‘buzz’ of the office as an important factor in encouraging creativity. Serendipity – the overhearing of a chance remark applied in a different context – is claimed as a vital part of office culture. In practice, the technology revolution has resulted, ironically, in many offices becoming simply collections of computer islands, each person as lost in their own computer space as if they were working independently. The so-called ‘buzz’ frequently occurs in social areas, which could just as easily be online conversations with colleagues or social conversations over lunch. People working remotely create just as many social interactions with those around them, or with colleagues internationally, as they would meeting the same colleagues in a regular office environment. In practice, it is found that working remotely, an individual will increase socializing effort, just as they value and practise the increased emphasis on communication.
4 VIRTUAL WORKING IN PRACTICE
The current economic and environmental climates have demonstrated that there has never been a clearer or more pressing need for virtual organizations to succeed. Online banking, online education and online shopping are all realities. Even today, I heard it postulated that ‘real’ money will disappear entirely, being replaced by virtual transactions through mobile phone or electronic card. The implications for that kind of development are enormous, and one can already see trends in that direction, in parking meters, rail and public transport ‘Oyster’ cards, automatic supermarket telling machines and so on, so that actual cash could become irrelevant. The implications for the workplace are that individuals and their home environments will become much more attuned to electronic living, and by implication, electronic working, and that therefore the workplace will also change. People who experience virtuality in their everyday living will have the expectation of it also in the workplace. The potential values will be in reducing office space, reducing car journeys and cutting commuting travel times. Our company has operated since 1990 in a wholly virtual environment – that is, there is no Head Office, there are no ranks of desks or office managers, and there is no concept of ‘travelling to work’. We have operated large-scale virtual projects involving multi-national client and worker combinations, using non-traditional means of ‘employment’ from remote interviewing to organized teamworking, quality control and retraining. Through examining audit trails of email correspondence over a period of intense activity, academic colleagues have been able to determine the management techniques involved and the strengths and drawbacks of working in a virtual environment (Duncan & Panteli, 2001). A deep understanding of virtual working in different environments has developed from the varied work undertaken over a period of fifteen years, and various observations have been made which we hope will influence future organizational developments. Key among these is undoubtedly the fact that unless virtuality is addressed by all levels of an organization, it is unlikely to succeed as a business
process. There is still an expectation in many minds that large offices, peopled with individuals visibly working at desks, create an impression of activity and importance. One huge benefit which the current economic downturn may bring about is a change in this attitude, and a subsequent reassessment of what a business is actually about, rather than what it looks like. As the economic situation is also global, we can expect the cultural change to be international.
5 THE FUTURE
Drawing from real virtual projects it has been possible to project what future technological advances are likely to imply for management. Undoubtedly in ten years’ time this prediction will look very different, but from where we stand now various trends are visible. Sadly, much of the potential of a truly virtual physical environment is still impractical. It has been amply demonstrated that moving around in virtual space, tripping over objects by watching a screen which only you can see, and preventing others interfering with the space between your eyes and the ‘screen’, were all a bit too reminiscent of the ‘deaf telephone’ dilemma demonstrated by TV comedians of the eighties – technology out of phase with reality. Sensors which can detect when we enter our offices and cue software that alerts colleagues that we're available to talk – and when we wish not to be visible – may be available in some environments, but are a direct example of the cultural change which we need to make before accepting that others can invade our private space. Even more impractical is the science fiction image of ‘being’ in one place and ‘existing’ in another – a suggestion which I am convinced is already possible in a practical sense, but which has so many human and personal security issues associated with it that it will be a very long time before it is implemented. The advances which will be made, however, will mean that offices as they appear at present will become redundant. Core company spaces will exist for practical purposes, but even traditional office functions such as photocopying will disappear, being replaced entirely by electronic copying – many office functions are concerned with running the office rather than with carrying out the core business activity. ‘My desk’ will disappear as a concept, being replaced by ‘My Computer Space’, and whether that space is in a collective office space, or a home, or an airport, or in a local community centre will be unimportant. The social aspect of the work environment will be positively encouraged, and will be enhanced by virtual meetings through life-size shared computer screens and spaces into which individuals can be invited – or not. Managers will communicate with employees through managed email and video links. Employees will interact directly with central file systems from wherever they happen to be working. Management meetings will be electronic, through video link – physical meetings will be social occasions. Trust is the most interesting underlying element in many of the new developments, and has been shown to be a combination of respect engendered by quality of work with a strong physical feeling of belonging. Emphasis will be placed on frequent social gatherings, but the assessment of work will become not how many hours are spent, but what is produced (Panteli & Duncan, 2004). Endorsement of these predictions is to be found in the work of JB Associates (JBA), who use the term ‘smartworking’ to describe how organizations will change. JBA is a professional services firm, collaborating with leading universities such as Carnegie Mellon, Cornell, Durham, NYU, Surrey, and Henley Management College to produce unique workplace financial models, by considering the interdependence of people, space, and technology – a discipline referred to as “workplace effectiveness”. Their latest paper (Blackwell, 2008 – in press) describes some of the required cultural and trust transitions which will bring it about.
6 CONCLUSION
Advances in technology have made it possible to work seamlessly from diverse locations, creating virtual work spaces in place of traditional office centres. Management of the virtual workplace will not be based on the command-and-control management styles of the past. To succeed as a leader in the new environment, a manager will require new skills, and will be dependent on the development of swift trust between team members. Technology is developing more rapidly than the culture to deal with it. This is a problem for management, and an explanation of the apparently slow adoption of virtual practices in the workplace. The practical drivers which will speed the adoption of working in a virtual environment will be the global economic and environmental conditions which we are experiencing in this generation. Companies will be driven by:
• Increased cost of inner city office and car parking provision
• Contribution of commuter travel and inner city congestion to global warming
• Employee expectations of flexible work patterns to accommodate working in a changing environment
The cultural changes required will be:
• Increased openness and trust in collaborative working
• Enlightened attitudes to work and the value of work
It is questionable whether this generation of managers will achieve the shift necessary to bring this about, but it is becoming obvious, even in traditional practical occupations, that many advances in virtuality require not just more universally available and sophisticated technology, but a change in cultural attitudes to work and the workplace, before the technology can be fully utilized.
References:
Blackwell, J. (2008 – in press). Trust-based workplaces. In: Smartworking: a definitive report on today’s smarter ways of working. (www.jbassociates.uk.com)
Carrasco, J. & Whiteside, J. (Editors) (1990). Proceedings of the ACM CHI‘90 Human Factors in Computing Systems Conference, Seattle, Washington, USA. ACM Press.
Duncan, E. & Panteli, N. (2001). Virtual team working: a design perspective. People in Control: an International Conference on Human Interfaces in Control Rooms, Cockpits and Command Centres. UMIST, Manchester, UK, 18–21 June. pp 115–119.
Handy, C. (1995). Trust and the Virtual Organization. Harvard Business Review, May-June, 40–50.
Jarvenpaa, S. L., Knoll, K. & Leidner, D.E. (1998). Is Anybody Out There? Antecedents of Trust in Global Virtual Teams. Journal of Management Information Systems, 14, 4, 29–64.
Jarvenpaa, S. L. & Leidner, D. E. (1999). Communication and trust in global virtual teams. Organization Science, 10(6), 791–815.
Julsrud, T. E. (2008). Flows, bridges and brokers: exploring the development of trust relations in a distributed work group. International Journal of Networking and Virtual Organisations (IJNVO), 5 (1), 83–102.
Larsen, K.T.R. & McInerney, C.R. (2002). Preparing to work in the virtual organization. Information and Management, 39, 445–456.
Mumbi, C. & McGill, T. (2008). An investigation of the role of trust in virtual project management success. International Journal of Networking and Virtual Organisations (IJNVO), 5 (1), 64–82.
Pang, L. (2001). Understanding virtual organizations. Information Systems Control Journal, 6 (1).
Panteli, N. & Duncan, E. (2004). Trust and temporary virtual teams: alternative explanations and dramaturgical relationships. Information Technology and People, 17 (4), 423–441.
Yoo, Y. & Alavi, M. (2004). Emergent leadership in virtual teams: what do emergent leaders do? Information and Organization, 14, 27–58.
“CLUSTERS OF COMPETENCE” IN THE DISTRIBUTED DEVELOPMENT OF GRIDS: A PARTICLE PHYSICS COMMUNITY’S RESPONSE TO THE DIFFICULTIES OF GLOBAL SYSTEMS DEVELOPMENT
Avgousta Kyriakidou and Will Venters
London School of Economics and Political Science, Information Systems and Innovation Group, Department of Management
Abstract
This paper examines the distributed development of Grid infrastructure in the particle physics setting. Specifically, the focus of concern is the collaborative practices employed by particle physicists in their attempt to develop a usable Grid, with the aim of offering lessons to those involved in globally distributed systems development.
INTRODUCTION
Grid computing promises to distribute and share computing resources “on tap” and provide transparent communication and collaboration between virtual groups (Foster and Kesselman 2003). Yet developing and implementing such complex information infrastructures requires collaboration among a range of dispersed groups, and flexibility and adaptability to volatile requirements (Berman, Geoffrey et al. 2003). Here, we examine a case study of Grid development within particle physics, the LCG (Large Hadron Collider (LHC) Computing Grid), in an attempt to explore how such a large-scale distributed system is developed collaboratively in a global way in readiness for data from experiments at the recently launched LHC at CERN, Geneva. This year the LHC particle accelerator will begin to collide protons in a search for the “Higgs boson” particle (Doyle 2005) and so produce 12-15 Petabytes of data annually. The storage and analysis of these data require a Grid of 10 Petabytes of storage and 100,000 CPUs. Traditionally such resources would be centralized at one location; however, in the case of the LCG, various external pressures (such as funding) mean a novel globally distributed Grid is required. Grids are a new form of large-scale system that is highly distributed, both in conception/construction and in operation (Foster and Kesselman 2003). Developing a Grid is argued to be a significant systems development challenge because it must be done as a distributed collaborative effort (Berman, Geoffrey et al. 2003). The systems development setting studied here is the particle physics community, which is well known for the development of other cutting-edge distributed systems to support its work (most notably the web) and is itself highly distributed, so presenting a context where distinctive collaborative practices emerge. Exploring this case we argue that the development of Grids poses new and underexplored opportunities for understanding collaborative global systems development. We particularly examine the collaborative practices of the particle physics community as it develops its Grid, with the aim of offering insights for the wider context of distributed systems development. In studying these, we consider practices as an emergent property linked to improvisation, bricolage and dynamic competences which unfolds as large-scale projects evolve. The work begins from the assumption that systems development beyond the smallest scale is inherently a collaborative activity (Whitehead 2007). Yet, with the globalization of the development process (Sengupta, Chandra et al. 2006) and new forms of open technologies such as Grids, the limitations of traditional systems development practices (and even contemporary Agile practices) have become obvious (Parnas 2006; Fitzgerald 2000). Global systems development (GSD) therefore demands new, different development practices, since the nature of the problem and the environment are different (Herbsleb and Moitra 2001). Russo and Stolterman (2000) and Fitzgerald (2000) argue that new systems development practices, or guidelines for successful systems development, should be drawn from “best practice” development situations that prevail today. They further argue that researchers should focus on examining systems development practice in real-world situations, and in real-world development projects. This paper rejects the idea of “best practice”, instead exploring a case which is
interesting and unusual in the way its members develop systems, and which therefore provides a juxtaposition to more orthodox accounts, with lessons emerging from reflection on both. Our theoretical framework is drawn from activity theory, and frames the LCG project as a complex activity system influenced by the context, the community’s rules, norms, culture, history, past experiences, shared visions and collaborative practices (Nardi 1996). We understand the Grid’s development as a series of contradictions between the elements of this activity system, which are in a continuous process of being resolved in order for the activity system to achieve stability and balance. Contradictions are considered to be the major source of dynamism and development in activity theory (Bertelsen 2003), since, based on the emerging problems and conflicts, people have to re-consider their position and collectively re-construct their shared understanding, knowledge and practices. In addition to empirical evidence from fieldwork with LCG members and at CERN, the research draws upon literature from fields such as global systems development, open-source development, and global outsourcing to provide concrete practical recommendations for those considering the collaborative development of large-scale systems, and to provide suggestions for how such collaboration might be aided. The following section reviews these literatures. Sections 3 and 4 present the theoretical framework and methodology. The case study then follows, after which the analysis is presented. Finally, tentative conclusions for the information systems development and Grid communities are provided.
LITERATURE REVIEW
With the current trend of globalization and the problems of turbulent business environments (Herbsleb, Paulish et al. 2005), the IT industry has turned toward globally distributed software development in search of the silver bullet of high-quality software delivered cheaply and quickly (Agerfalk and Fitzgerald 2006). Ongoing innovations in information and communication technologies (ICTs) have made it possible to cooperate in a distributed fashion. From originally quite small co-located projects, companies, enabled by technological advances, now embark on major complex software development projects running in geographically distributed environments (Oshri, Kotlarsky et al. 2008), and GSD is therefore “becoming a norm in the software industry” (Damian and Moitra 2006). GSD is claimed to be one of the megatrends shaping the industry today (Simons 2006). However, it presents a special challenge, not because it introduces new ways for software to fail but because it drastically complicates communication, coordination and control and changes the nature of the development environment (ibid). The changing nature of the environment and the “faster metabolism” of business today require organizations to act more effectively in shorter time-frames and to develop software at internet speed (Ramesh, Cao et al. 2006). Furthermore, the access to a larger pool of expertise, often in low-cost geographical locations (Kotlarsky and Van Fenema 2008), compels companies to form virtual alliances with other organizations in order to survive. The distribution of companies has become a necessity not only because of shifting labour markets, but also in pursuit of talented people regardless of location (Ye 2005). Particular attention is hence being given to the opportunities and difficulties associated with sharing knowledge and transferring “best practices” within and across organizations (Orlikowski 2002). Knowledge-based collaboration within systems development has therefore become important (ibid). Similarly, the global outsourcing literature suggests that distributed collaborative practices in systems development are important and timely (Lacity and Willcocks 2001; Yalaho 2006). Over the years the practices involved in GSD have undergone refinement, and new, more effective practices that focus on the collaborative factor have emerged, such as agile practices/agility (Nerur, Mahapatra et al. 2005). Unlike traditional development practices, agile approaches deal with unpredictability by relying on people and their creativity rather than on processes (Cockburn and Highsmith 2001) and are adaptable to project-specific circumstances, such as the experience of the team, customers’ demands etc. (Bajec, Krisper et al. 2004). They are characterized by short iterative cycles of development, collaborative decision making, extensive communication, rapid feedback, parallel development and release orientation, features which show their ability to respond to change and create innovation (Highsmith 2003). GSD is considered to be the new paradigm in developing large-scale systems (Damian and Moitra 2006). However, there are still challenges involved in managing the development, such as communication issues and technical issues, that need to be addressed (Herbsleb and Mockus 2003).
Globalization increases the complexity and uncertainty of the collaborative development effort, which can in turn negatively influence project outcomes (Lee, Delone et al. 2006). While the literature calls for practices and processes that are flexible and adaptable to the increasingly volatile requirements of the business environment (Highsmith and Cockburn 2001), Lee, Delone et al. (2006) argue that successful GSD requires not only flexibility but also rigor in order to cope with the complex challenges and requirements of global projects. There is an ongoing debate which rejects the idea of agile practices as a silver bullet for the challenges of GSD (Parnas 2006). Although agile practices can work perfectly well in small, self-organized, co-located teams (Boehm and Turner 2004), there is an urgent need for scaling agility to incorporate distributed and collaborative systems development situations (Zheng, Venters et al. 2007). GSD is argued to be “a discipline that has grown considerably richer through practice, influencing research and established practices themselves” (Damian and Moitra 2006). However, the practices and methods employed are far from mature and fully understood (Herbsleb and Moitra 2001). With GSD the limitations of traditional systems development practices become even more obvious (Hanseth and Monteiro 1998). Furthermore, there is a fundamental shift from the development of traditional IS to the development of global information infrastructures, such as the Grid (EuropeanCommission 2006). Grid infrastructures should be seen and treated as large-scale and open, as they demand collaborative development in a global/distributed environment (ibid), an environment characterized by high uncertainty and complexity and a continuous stream of improvisation, bricolage, drifting, mutual negotiation, regularity, progress and cycles of interactions (Nandhakumar and Avison 1999). Development often requires ad-hoc problem solving skills and creativity, skills which cannot easily be preplanned (Ciborra 2002). GSD for large-scale systems demands new, different systems development practices, as the nature of the problem is now different. Long-cherished computer science principles and early systems development approaches are therefore being re-examined in the light of the new requirements.
THEORETICAL FRAMEWORK
In contrast to the deterministic views inherent in much of the literature on Grids, we employ activity theory (AT) as an approach to help us look at how technology is collaboratively constructed to fulfil the objectives of a global community (Nardi 1996). AT has inspired a number of theoretical reflections on what information systems development (ISD) is about (Kuutti 1991; Bertelsen 2000). It is argued that AT can provide a theoretically founded but detailed and practicable procedure for studying ISD as a real-life collaborative work activity in context (Korpela, Mursu et al. 2002), rather than as an individualistic process of interaction devoid of such social context. AT provides a well developed framework for analyzing the complex dynamics of collaborative settings which typically involve interacting human and technical elements (Crawford and Hasan 2006). The concepts of collectiveness and of different actors sharing the same goals and constructing the same meanings are at the core of this theory (Leontiev 1978), and are vital to our analysis of an “exceptional” community. AT’s focus on accumulating factors that affect the subjective interpretations, the purpose, and the sense making of individual and group actions and operations also provides a useful paradigm for the ways in which human experience, needs, collaborative practice and creativity shape the development and effectiveness of emerging technologies (Crawford and Hasan 2006), and therefore makes this theory suitable for this study. Vygotsky (1978) originally introduced the idea that human beings’ interactions with their environment are not direct but rather are mediated through the use of tools. Inspired by this concept, Engestrom (1987) extended Vygotsky’s original framework to incorporate Leontiev’s social and cultural aspects of human activity, which reflect the collaborative nature of human activity (Figure 1).
Figure 1. The fundamental unit of analysis in AT is the entire human activity in the collective context (Guy 2003)
The activity itself is the context and is undertaken by human agents (subject) who are motivated toward the solution to a problem (object) and mediated by tools (artifacts, ISD methodologies, practices, cultural means etc.) in collaboration with others (community). An activity is therefore social within a community and influenced by the community. The structure of the activity is constrained by cultural factors including conventions and norms (rules) and social strata (division of labor) within the context (Mwanza 2001). Activity systems are driven by communal motives that are difficult to articulate for individual participants (ibid). They are in constant movement and internally contradictory. “Their systemic contradictions, manifested in disturbances and mundane innovations, offer possibilities for expansive developmental transformations” (Engestrom 2000). Such transformations proceed through stepwise cycles of expansive learning which begin with actions of questioning the existing standard practice, then proceed to actions of analyzing its contradictions and modelling a vision for its zone of proximal development, and then to actions of examining and implementing the new model in practice (ibid). AT, with the concept of contradictions, provides a conceptualization of collaborative breakdowns and tensions, which are viewed as highly important in understanding collaborative systems development (De Souza and Redmiles 2003). Contradictions reveal themselves as breakdowns, disturbances, problems, tensions or misfits between elements of an activity or between activities (De Souza and Redmiles 2003) and are considered to be the major source of dynamism, development and learning in AT (Bertelsen 2003).
RESEARCH METHODOLOGY
RESEARCH CONTEXTS
Grid technology is claimed to be a fundamental step towards the realization of a common service-oriented infrastructure for on-demand, distributed, collaborative computing, based on open standards and open software (Foster, Kesselman et al. 2003). It aims to provide a transparent, seamless and dynamic delivery of computing and data resources when needed, similar to the electricity power grid (Chetty and Buyya 2002; Smarr 2004), and in this way to enable the sharing of computer processing power, storage space and information on a global scale (Berman, Geoffrey et al. 2003). Carr (2005) brashly suggests that the shift to Grid computing forms of technology will “overturn strategic and operating assumptions, alter industrial economics, upset markets and pose daunting challenges for every user and vendor”. Similarly, Berman et al. (2003) suggest that Grid infrastructure “will provide the electronic foundation for a global society in business, government, research, science and entertainment”. While these are obviously extremely bold predictions, and the term Grid remains ill-defined, Grids remain an important step towards global IT infrastructures. A Grid is simply a large number of distributed processors and other computing devices linked through networks and presented to the user as a single computer, without the need to address individual resources directly (unlike the web, in which URLs address particular web-serving machines). Among the many international Grid projects worldwide, particle physics stands out, because of their exceptional distributed collaboration (Chompalov, Genuth et al. 2002), their significant contribution to the Grid’s development and the fit of their style of analysis to the Grid’s capabilities. Being the first scientific community to be involved in the
development of such a large-scale infrastructure, their contribution can be influential in informing the way other communities develop, adopt and conceptualize large-scale systems in general and the Grid in particular.

RESEARCH DESIGN
The uniqueness of the LCG collaboration prevents comparative studies, but it provides a revelatory case of distributed systems development practice. An interpretative case study is thus used to gain an in-depth understanding of the dynamic, complex, loosely-coupled systems development activity of the particle physicists and the collaborative construction of their shared practices. Research evidence was collected through over 70 semi-structured interviews with key members of the LCG, as well as observation of major meetings/workshops and three week-long trips to CERN. LCG's documentation was also reviewed. Interviews were audio-recorded, transcribed and coded with Atlas.ti, though this was used flexibly, as a device to aid interpretation rather than for detailed coding.
CASE STUDY: THE LCG PROJECT
On September 10th 2008 the Large Hadron Collider (LHC) at CERN in Geneva began particle acceleration (though it quickly stopped due to magnet failures). When fully operational, this accelerator will produce six hundred million interactions per second and, after considerable filtering, 15 million Gigabytes of data annually, requiring an unprecedented amount of processing power (100,000 CPUs) and storage space to allow the thousands of physicists globally to analyze them (Lloyd 2006). The proposed solution to this and other computation-intensive and data-centric problems is the Grid (Colling 2002; Lloyd 2006). The LCG project's mission is to build and maintain this Grid infrastructure for the entire high energy physics community that will use the LHC (ibid). Building the LCG is a highly distributed, complex and poorly defined systems development task. Cutting-edge technology and tools are used, new standards reflecting security issues etc. are being negotiated, and middleware (which is like the operating system of the Grid), together with other supporting software, is being developed collaboratively by physicists around the world. Particle physicists have a long tradition of such large-scale global collaborations, and working on a distributed basis is just a part of their everyday routine (Knorr-Cetina 1999). Indeed, building this large-scale, inherently distributed Grid demands global development. Firstly, funding needs to be obtained from different sources, and secondly, an enormous amount of manpower is needed for the different Grid elements to be developed. These dictate that Grid elements be globally distributed rather than collocated at CERN. The systems development activity of the LCG is organized into a number of projects, some of which extend beyond the physics community. The fact that funding is so difficult to obtain, and that it is politics rather than technology which may inhibit the success of such Grid initiatives (Kyriakidou and Venters 2007), means that people beyond physics need to be involved in order to ensure transferability and usability in other disciplines. Other operational Grid organizations providing resources for the LCG are EGEE (Enabling Grids for e-Science), which is jointly producing middleware with the LCG, OSG (Open Science Grid), GridPP (the UK's contribution to the LCG), etc. Participants in this large-scale collaboration are by and large particle physicists; however, a number of computer scientists, software engineers and people from other advanced sciences are also active members in the development, deployment and user support. Particle physicists' collaborative work practices are not typical (Knorr-Cetina 1999; Shrum, Genuth et al. 2007) and have been described by Chompalov, Genuth et al. (2002) as "exceptional". The LCG's constitution reflects these work practices and is thus based on a collaboration where decisions are made on a democratic and consensual basis with minimal levels of internal authority (Traweek 1988; Shrum, Genuth et al. 2007). Particle physicists are highly interdependent upon each other's work for the generation of scientific results, which creates a trustful environment in which management is minimised. The management structure of the LCG is flat and is best described as a network rather than a hierarchy. The systems development activities undertaken by the LCG project vary.
They include the development of middleware components, installation and maintenance of Grid hardware, development of physics applications for job submission to run on top of the Grid middleware, testing and certification of applications, ensuring patches have been installed, and user support. Despite the recruitment of some computer scientists to the project, common practice (emerging from particle physics) is that software code is written on an ad hoc basis to make things work, rather than following
any standard systems development practices. Furthermore, because the Grid is a new and emerging technology, the LCG cannot take a plan-based approach to development, which is why the developers are pragmatic, aiming to improve by trial and error. The Grid is already partially in use, and thus some physicists are already users who write software to undertake their analysis. Particle physicists are powerful users, since they have computing expertise, and therefore tensions exist between them and the specialist developers. Most meetings for coordinating and dealing with issues around the development are conducted virtually. Video-conferences between the groups of people responsible for each development task, and email exchanges, are two standard ways of working. Knowledge is located and socialized through these shared resources as well as through key individuals, who are considered experts and carry such knowledge and expertise across the collaboration by attending different meetings and by constantly changing job posts.
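To make the "single virtual computer" idea concrete for readers outside the Grid community, the following toy sketch (entirely ours and hypothetical; it is not the LCG or EGEE middleware, and all names are invented) illustrates the basic contract: analysis jobs are submitted through one interface and the "middleware" decides which distributed resource runs them, so the user never addresses an individual machine.

```python
import random
from typing import Callable, Dict, List

class ToyGrid:
    """A deliberately naive stand-in for Grid middleware: many distributed
    resources presented to the user as one computer."""

    def __init__(self, sites: List[str]):
        # each participating site contributes CPUs to a single shared pool
        self.pool: List[str] = [f"{site}:cpu{i}" for site in sites for i in range(2)]

    def submit(self, job: Callable[[], str]) -> Dict[str, str]:
        """The user submits work; the 'middleware' picks a resource for it."""
        cpu = random.choice(self.pool)
        return {"ran_on": cpu, "result": job()}

# usage: an ad hoc analysis 'job', in the spirit of the pragmatic scripts
# physicists write to get their analysis done
grid = ToyGrid(sites=["CERN", "RAL", "FNAL"])
print(grid.submit(lambda: "histogram of candidate events"))
```

Real Grid middleware adds scheduling policies, data placement, security and accounting on top of this bare abstraction; the sketch only shows the shape of the interface discussed in the case study.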
ANALYSIS
The scope of this paper is not to undertake a full activity theory analysis and thereby provide in-depth interpretations of the activity's structure (activity-actions-operations). Rather, we aim to create a rich understanding of the LCG development activity and how it is influenced by the distributed context and the community's rules, norms and collaborative practices. We do this by constructing the developers' activity system and so identifying some of the frictions and tensions through the AT concept of contradictions.

LCG project's activity systems
Identifying an activity system (according to Engestrom's model) requires the identification of the following components: activity of interest, object, subjects, tools, rules, division of labour and community. Findings so far indicate that the LCG project involves two sub-activity systems, which we term the developers' activity system and the users' activity system. The focus of this paper is on the developers' activity system.
Developers' activity system:
- Activity of interest: Distributed collaborative systems development
- Object: Development of the Grid infrastructure for supporting the LHC
- Desired outcome: The LCG enabling analysis such that the LHC can achieve breakthroughs in physics
- Subject: Particle physicists, computer scientists/software engineers
- Tools: Systems development practices and methodologies, programming languages
- Community: Collaboration, high degree of competence, shared goals, trust, pragmatism, etc.
- Rules: Collaborative way of working, deliver on time and within budget, etc.
- Division of labour: Fuzzy limits; everyone does what they do best; people can have more than one job; limited lines of authority; there is no leader or manager, there are spokespersons

Users' activity system:
- Activity of interest: Using the Grid for physics analysis
- Object: Run analysis software on LHC data using the LCG Grid
- Desired outcome: Successful analysis of their data, doing physics
- Subject: Mainly particle physicists
- Tools: Grid infrastructure
- Community: Collaboration, high degree of competence, shared goals, trust, pragmatism, etc.
- Rules: Collaborative way of working, finish their PhDs or post-doctoral research
- Division of labour: Fuzzy limits; everyone does what they do best; people can have more than one job; limited lines of authority; there is no leader or manager, there are spokespersons

Table 1. The developers' and users' activity systems of the LCG project
Particle physicists are waiting for the LHC to begin to operate fully (desired outcome), whereupon it will produce large volumes of data for analysis. The objective behind the LCG's systems development activity was recognized when particle physicists understood that in order for the LHC to be successful they needed a Grid to undertake analysis (object). This realization made particle physicists form globally distributed virtual alliances with a number of actors such as funding bodies, universities and the
industry, since the development of such a large-scale global infrastructure needed to be undertaken collaboratively, as a community effort. As one interviewee stated: "What physicists want to do cannot be done by a small group, it needs a large collaboration". The people involved in the development of this global infrastructure come from different universities and institutes around the globe (including CERN) and include particle physicists with strong technical computing skills as well as traditional computer scientists/software engineers (subjects). Their activity is driven by the imperative to analyze data from the LHC. As Grid technology is new and different, they argue that they cannot take a plan-based approach to systems development. Their aim is to learn and move forward by trial and error. Furthermore, the complexity, pressures and scale of the project mean that no one can have a full and clear overview of the system; therefore requirements are difficult to pre-specify in detail, the architectures are developed based on assumptions, and even the one centrally designed piece of technology, the EGEE middleware, is modularized and released gradually. The response of the LCG to this is to react pragmatically and creatively, drawing on the down-to-earth and creative approaches (tools) embedded in the particle physics tradition and history. Particle physicists have themselves recognized their computing practices as amethodical, highly pragmatic and improvisational. The lack of formal processes in systems development is openly acknowledged, with most believing their existing work practices are effective given their primary purpose of building a working system in an extremely limited time-scale: "[physicists] are more pragmatic in computing"; "When particle physicists do things, they do them just to resolve a problem for now". Physicists follow a bottom-up and reactive approach to development. However, such a distributed development environment requires flexibility and adaptability to changing requirements and external pressures, and for this reason they make use of other, more flexible practices. Developers have short-term goals and therefore have short cycles of iteration with continuous releases. Releases are usually tested and certified before they move into the pre-production Grid, where further robustness tests and feedback are gained. Developers also use prototypes for feedback, to improve functionalities and to gather requirements. Experience plays a crucial role, as most of the developers are physicists who themselves know what needs to be done. Traditional systems development methodologies see little use. As an interviewee claimed: "I would say that we use methodologies but just up to the limit that it is appropriate for what we are trying to do…You will have regression testing and all this sort of stuff in software development and there is a whole life cycle of requirements and specification gathering…but also things change, [so] how can you possibly do a formal software engineering approach to something if we are going to change it? So software engineering is used up to a point but certainly not completely religiously". A distinct feature of identifying and exploiting technical solutions in the project is the reliance on natural selection. Within particle physics the use of competing technological solutions is a traditional way of working – often as people simply try to solve their problems without consulting others (Pickering 1995).
Physicists " are powerful users, they do what they want. It’s in their culture, it’s their mentality. If they need something, they will just going to get it or they will do it by themselves, although it might already exists”. Once these hacked solutions exist natural selection selects those to be carried forward, e.g. because of technical failures, lack of funding etc., rather than politics or social power defines the one to be followed: “The cream comes to the top. Things that work win out and that’s how we worked it”. Particle physicists see themselves (and are seen by many) as the “elite among the sciences and as the elite among the physics” (Traweek 1988). Traweek (1988) described them as “promethean heroes of the search of the truth” and outlines the inherently collaborative nature of their community, with Knorr-Cetina (1999) similarly describing them as communitarian (community). Collaborative working has traces back in their history, their culture and the nature of their experiments seen as collaborations: “Particle physicists have collaborated anyway...we run big collaborations across every nation on earth and they always work”. Their experiments are collaborations requiring collaboration between large numbers of people, usually globally distributed and dissembled when the experiment finishes. For particle physicists, the development of the Grid is like all other previous collaborations they have run, hence like all other experiment collaborations. LCG, to a certain extent, is also set up in the model of experiment: “The original proposal was set up deliberately to make it look like an experiment. This whole idea of collaboration board for instance, comes out of the idea of what an experiment does”. People take part in this global distributed collaboration and openly share
their knowledge because they like feeling that they have contributed to the collective cause. They are driven by shared goals and "sacred causes". As one interviewee put it: "We are trying to get to the data that comes out of the LHC and there are times which we know that we will compromise our own parochial little gains to reach that higher goal. This high level common goal makes it actually easy for us to do this thing [to collaborate, to work together]". There is a great belief in the ability of particle physicists to overcome technical obstacles in order to get their experiments to work. People almost always state with certainty that the LCG will work because they are extremely clever and will make it work. As one interviewee argued: "Within high energy physics, I think it's true to say that all people you deal with are highly intelligent and you don't get people working at this level who are not above average intelligence, given the general population". A more significant source of confidence (one might argue arrogance) resides in the belief in individual skills, competence and creativity, and in the context of collaboration. This high degree of competence, the shared goals and internal motivation, as well as the collaborative working, create a trusting environment which drives the community. Trust is considered a central element of the collaboration binding them together and, as argued by the physicists, it is crucial in making their collaboration successful as it removes a lot of arguments and discussions in decision-making. Porra (1999) argues that the culture of collaboration or individualism within a colony is passed on from generation to generation as customs, norms, values, stories and behaviour - its rules of conduct. For particle physicists the collaborative way of working (rules) is inherent within individual physicists, and this is, one might say, the only "real rule" that guides them. Being so distributed, it is crucial to build a strong sense of community and construct an identity for those involved in the Grid's development in order for the project to function collectively. Going to the pub or to lunch together when co-located, for example, is one important aspect of this. There are no rules such as one might find in many industries. Formal use of Gantt charts and fixed schedules does not accord with particle physicists' work. Although Gantt charts have to be produced when applying for funding, they serve only as a minimal organizing structure for the project. This, however, does not mean that they do not believe in planning. To cope with difficulties in this virtual environment, the project has to be flexible and quick to adapt to changes. Even though there is no clear, fixed, detailed plan, there is the plan to carry the project forward by improvising highly pragmatic and practical solutions. This way of working has caused tensions with the computer scientists working in the project, who aspire to a more plan-based, methodical approach to development: "[Software engineers] want to design things, they want the project to be very well defined, but (…) by definition physicists normally don't know what they want. There's a slight difference in attitude". Another important characteristic of this community is the freedom they have in their work.
However, these seemingly spontaneous practices at the individual level are balanced and directed towards the shared object by a level of reflexivity, maintained by continuous and extensive communication between the project's members. Communication and socialization are seen as crucial for the project to be successful and can take various forms, such as meetings of different boards, committees and working groups, virtual meetings, mailing lists, informal face-to-face meetings in the corridor, semi-structured face-to-face meetings, wikis and blogs. Physicists value face-to-face meetings, which is why they try to meet at every opportunity: "Being so dispersed as a development group, we try to have frequent face to face meetings (…) Having the possibility to have technical discussions all together is also good. But having the possibility to have unstructured free time where people can talk to each other is also very, very important. Helps in building the group". Such communication is also required for skills development, since physicists argue that they mostly acquire their skills through word of mouth. Another important form of communication, which helps standardize working practices, is the rotation of expertise. There is no clear division of labour within the collaboration; individuals shift between jobs and have more than one job at the same time. People volunteer to do things and shift between jobs not because they are forced to, but because they want to (though political forces can obviously play a part). Therefore, there is flexibility in roles, since people tend to do what they feel they are best at: "We don't regard our roles as having fixed boundaries…We tend to do things which we are better at regardless of whose role it might actually be." Furthermore, they would argue there is no strict hierarchy within the collaboration; what they have, as the interviewees put it, is
a collaboration with a spokesperson and volunteers rather than a company with a managing director and board. Decision-making is based on discussions where everyone can share their views, leading towards common consensus, with fights and conflict very rare. Decision-making does not appear to stem from social arbitrariness or political power. This is not to say that politics does not exist, but that it is dispersed and sidelined, and the influence of powerful actors is dissipated. There is no leader directing what people are doing; rather, people have the freedom to improvise, to use different techniques, and space for creativity and innovation. As one interviewee who worked closely with Tim Berners-Lee put it: "Why was the web invented here? Because Tim had the freedom from this hierarchy to spend a bit of time investigating something which was of interest to him and nobody else here [thought] – oh it's a waste of time, never mind. He was working on remote procedure calls. And out of it popped the Web…One guy, sitting in his office, who had a dream."
STRATEGIES FOR SUCCESSFUL DISTRIBUTED DEVELOPMENT
Particle physicists acknowledge that the distributed development of the LCG has been a challenging learning journey. Drawing on our Activity Theory framework, we observe a number of inner contradictions emerging and being faced throughout the development process. These include i) tensions between particle physicists and computer scientists within the developers group, ii) tensions between developers and users, iii) contradictions between traditional technical models of development and what was actually going on, and iv) problems with the extreme size of the experiments and the requirements of data analysis compared to previous experiments' infrastructures of communication and data storage. Their means of resolving these contradictions provide evidence of their "expansive learning" (in activity theory terms). Such learning drives the progress of the development; it has given rise not only to technological innovations (such as the Grid middleware) and development tools to support distributed collaboration, but also to new work practices which have enabled developers to better meet the demands of such large-scale distributed development. It is in these practices that we find lessons for those engaged in the distribution of systems development. For this reason we now consider these emergent practices in detail, focusing on the problems faced and how these were resolved.

CLUSTERS OF COMPETENCE. Particle physicists are highly influenced by their traditions and past experiences, and their practices are rooted in their history and culture. One of the lessons they learnt, which led to changes in the way they work, was the realization that fully distributed development is difficult because of problems of integrating work effectively: "It is very difficult to have different teams that are distributed working on the same component because the elements they develop might not work together in the end", "In the past we had joint development of a group of people who were distributed, but that was not working. It was a painful process". The disadvantages of fully distributed development were found to outweigh the benefits. For example, management of the project and coordination of work was difficult as the dependencies between the different technical components were too many, which also made integration difficult and messy. Full distribution of the work, originally believed to be the right way to approach the situation, presented a contradiction with what was actually going on in practice. Their answer to this contradiction was the idea of (in their own words) "Clusters of Competence", which enabled them to structure the development into different competence clusters: "We are trying to go to a situation where one component is being developed by the same group of people who are all in one place", "We have come up with the idea of strong collaboration between the developers in order to manage the dependencies. Now all the dependencies are such, that there are no conflicts". They have created different globally distributed patches of expertise, where experts are co-located, which are then all aligned into a network that facilitates and coordinates the work and the collaboration. Furthermore, this network facilitates communication and sharing of knowledge and expertise among the different clusters: "The idea here is to have strong collaboration within the developers group as well as collaboration with other countries in order to share knowledge and expertise".
Although the clusters of competence were found to provide discipline in messy situations, for the development of some components of the Grid this was not possible, and hence there are still people working in a truly distributed way. While virtual communication is important for knowledge sharing and standardization, particle physicists still encourage temporary co-location of developers and therefore establish frequent face-to-face checkpoints, since such co-location can resolve future problems, improve awareness, forge understanding and enable strategic thinking: "Developers from
other countries come here and work for 3 months and this is much more efficient. It's much easier to control the activity rather than when people are away. There is still freedom and a level of autonomy but for large features we discuss and decide how to proceed. So there is freedom in a controlled way."

BALANCING EXPERIMENTATION WITH DISCIPLINE. All particle physicists must write computer software in order to undertake their physics analysis, since packaged applications for this task do not exist. They also have a tradition of large collaborations for developing detectors, accelerators and experiments; however, they do not have formal training in software engineering or traditional systems development: "As a physicist you do not get much experience in writing software which stays up and is reliable. We work through trial and error and through this you do not get the experience in writing code for stable services". They are pragmatic, quick-and-dirty programmers who like working solutions. They believe in producing things that work quickly, and that is why they do not usually prefer fancy, well-thought-out and well-developed solutions which take time to develop. Indeed, one of the interviewees jokingly said: "We are intelligent people; we don't make bugs so there is no need for methodologies". For them, developing software is an experimental activity involving trial and error, in a way similar to the way physics itself is undertaken. However, when asked about this unstructured "experimental" (and risky) way of working, they all agreed that in such distributed development projects one must combine this agility/flexibility with a degree of structure/discipline: "One thing the project learned is that you need management and clear short-term priorities, or else you drift". Because this project consists of both particle physicists and computer scientists, some discipline in development activities is seen as needed in order to balance the developers' individual goals with the shared objective. As one interviewee stated: "Computer scientists think about technology per se and don't care about the physics underlying this technology...Computer scientists are not motivated and driven by the same goals as physicists". On the other hand, it is also crucial to maintain this flexible and agile character in the way they work in order to quickly adapt and respond to environmental changes: "We need to formalize the process a bit, but it should allow modifications in a fast pace. Quick feedback from users as well as quick releases for testing is required".

A SENSE OF BELONGING. Interestingly, when asked what other communities could learn from the distributed development of their Grid, interviewees suggested that what makes their project progress is a combination of factors. As they have argued: "We've many times seen the development of systems by isolated groups involving formal procedures. But these didn't have good results". Therefore something more than co-location and formal procedures is needed in order for such kinds of virtual projects to be successful: "Social is the key really. It makes such a huge difference when people work together for the right reason. True quality comes from within". Indeed, creating a strong sense of community with shared goals is crucial for their collaboration: "So it is not so much a software development, the story we have to tell, it is building this community around the grid computing".
"Collaboration and building community is really important for distributed development. We work a lot using mailing lists; you can see the different attitude people have before and after they meet in person in those mailing lists". Shared goals provide motivation and an identity is constructed for those involved in the development of the Grid: "If there is a problem that comes up then people will stay up at night, or weekends, and figure out how to get it to work, and it may be a horrible hack, but it will work. There are people out there with a lot of motivation and a lot of knowledge and a lot of skill who will come up with the solutions". This sense of community is highly related to the frequency of face-to-face interactions, the extensive communication flows, timely feedback, keeping all people involved and creating a trustworthy environment, but it also depends on equally important markers of shared identity, such as logoed pens, posters and t-shirts worn at conferences, and an intense focus on disseminating the project's successes. The feeling of belonging to a group also balances competitive relationships. As they argue: "Proper management of competition leads to successful outcomes. Without competition brilliant ideas are killed. That is why we work through competing solutions and the best one wins out". In conclusion, particle physicists appear both highly unusual and somewhat traditional in the way they work. Freedom, trust, consensus, charismatic leadership, shared goals and internal motivation are all distinct characteristics of the community and are seen to be its major driving forces. However, this is
not to say that politics does not exist or that competition is minimized. Rather, healthy competition exists, which helps build competence in the community and blend expertise.
CONCLUSIONS
This paper examined the distributed development of Grid infrastructure in a particle physics setting. Specifically, the focus of concern was to explore the collaborative practices employed by particle physicists in their attempt to develop a usable Grid for their research, and that of other communities, with the aim of offering answers that may be translated into the wider context of global virtual development. A number of strategies for distributed development were presented, based on lessons particle physicists learnt throughout the Grid's development. Particle physicists' collaboration has been described as exceptional. Their collaborative way of working is rooted in the community's history and culture, which has strongly influenced the practices and strategies employed in the Grid's development. Yet their means of coping appear both unusual and somewhat orthodox. In summary, the strategies identified were: 1) Structure the development effort in clusters of competence; 2) Encourage temporary co-location of developers; 3) Combine flexibility/agility with structure/discipline; 4) Create a sense of belonging and thereby construct an identity for those involved in the development; 5) Facilitate human communication both through virtual means and face-to-face (at least every couple of months); 6) Create a trustworthy environment; 7) Have clear shared goals and rationale. These lessons resonate with many of the trends in management theory around effective distributed working, yet the fact that they emerge from a unique community founded not in existing bureaucratic commercial organisations but in communitarian science practice provides important evidence for this ongoing debate. Further, the study demonstrates the value of employing Activity Theory in researching global systems development practices to explore their inherent contradictions. Clearly, the unique and obscure nature of the community under study is also a limitation of the study. The practical recommendations provided here should not be seen or treated as prescriptive canonical rules; rather, we hope they will allow others involved in such distributed virtual development projects to reflect on their own practice and context in light of them.
REFERENCES
Agerfalk, J. P. and B. Fitzgerald (2006). "Flexible and distributed software processes: Old petunias in new bowls?" Communications of the ACM 49(10).
Bajec, M., M. Krisper, et al. (2004). The scenario for constructing flexible, people-focused systems development methodologies. ECIS conference.
Berman, F., F. Geoffrey, et al. (2003). The Grid: past, present, future. In: Grid Computing - Making the Global Infrastructure a Reality. F. Berman, F. Geoffrey and T. Hey, John Wiley & Sons.
Bertelsen, O. W. (2000). "Design artifacts: Towards a design-oriented epistemology." Scandinavian Journal of Information Systems 12: 15-28.
Bertelsen, O. W. (2003). Contradictions as a tool in IT-design: Some notes. 8th European Conference on Computer Supported Cooperative Work (ECSCW), Helsinki, Finland.
Carr, N. (2005). "The End of Corporate Computing." MIT Sloan Management Review 46(3): 67-73.
Chetty, M. and R. Buyya (2002). "Weaving computational grids: how analogous are they with electrical grids?" Computing in Science & Engineering 4(4): 61-71.
Chompalov, I., J. Genuth, et al. (2002). "The organization of scientific collaborations." Research Policy 31: 749-767.
Ciborra, C. (2002). The Labyrinths of Information: Challenging the Wisdom of Systems. Oxford, UK, Oxford University Press.
Cockburn, A. and J. Highsmith (2001). "Agile software development: The people factor." IEEE Computer.
Colling, D. J. (2002). GridPP - Developing an Operational Grid. UK e-Science All Hands Conference, Sheffield.
Crawford, K. and E. Hasan (2006). "Demonstrations of the activity theory framework for research in information systems." Australian Journal of Information Systems 13(2): 49-68.
Damian, D. and D. Moitra (2006). "Global software development: How far have we come?" IEEE Software.
De Souza, B. C. and F. D. Redmiles (2003). Opportunities for extending activity theory for studying collaborative software development. 8th European Conference on Computer-Supported Cooperative Work, Helsinki, Finland.
Doyle, T. (2005). Meeting the particle physics computing challenge. Public Service Review: Department of Trade and Industry.
Engestrom, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki, Orienta-Konsultit.
European Commission (2006). "Future for European Grids: Grids and service-oriented knowledge utilities." ftp://ftp.cordis.europa.eu/pub/ist/docs/grids/ngg3_eg_final.pdf Retrieved 22/02/2007.
Fitzgerald, B. (2000). "Systems Development Methodologies: The Problem of Tenses." Information Technology & People 13(3): 13-22.
Foster, I. and C. Kesselman (2003). The Grid in a Nutshell. In: Grid Resource Management - State of the Art and Future Trends. J. Nabrzyski, J. Schopf and J. Weglarz, Kluwer Academic Publishers.
Foster, I., C. Kesselman, et al. (2003). The Anatomy of the Grid. In: Grid Computing - Making the Global Infrastructure a Reality. F. Berman, F. Geoffrey and T. Hey, John Wiley & Sons.
Guy, S. E. (2003). Patterns as artifacts in user-developer collaborative design. 8th European Conference on Computer-Supported Cooperative Work, Helsinki, Finland.
Hanseth, O. and E. Monteiro (1998). "Understanding Information Infrastructure." http://heim.ifi.uio.no/~oleha/Publications/bok.html Retrieved 15th of June, 2006.
Herbsleb, D. J. and A. Mockus (2003). "An empirical study of speed and communication in globally distributed software development." IEEE Transactions on Software Engineering 29(6): 481-494.
Herbsleb, D. J. and D. Moitra (2001). "Global software development." IEEE Software.
Herbsleb, D. J., J. D. Paulish, et al. (2005). Global software development at Siemens: experience from nine projects. International Conference on Software Engineering (ICSE'05), St. Louis, Missouri, USA, ACM.
Highsmith, J. (2003). Agile Project Management: Principles and Tools. Cutter Consortium, Arlington, MA.
Highsmith, J. and A. Cockburn (2001). "Agile software development: The business of innovation." IEEE Computer Society.
Knorr-Cetina, K. (1999). Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA, Harvard University Press.
Korpela, M., A. Mursu, et al. (2002). "Information systems development as an activity." Computer Supported Cooperative Work 11: 111-128.
Kuutti, K. (1991). Activity theory and its applications to information systems research and development. In: Information Systems Research: Contemporary Approaches & Emergent Traditions. H. E. Nissen, H. K. Klein and R. Hirschheim, Amsterdam: North-Holland: 529-550.
Lacity, M. C. and L. P. Willcocks (2001). Global Information Technology Outsourcing: In Search of Business Advantage. John Wiley & Sons Ltd.
Lee, G., W. Delone, et al. (2006). "Ambidextrous coping strategies in globally distributed software development projects." Communications of the ACM 49(10).
Leontiev, A. N. (1978). Activity, Consciousness and Personality. Prentice Hall, Englewood Cliffs, NJ.
Lloyd, S. (2006). From Web to the Grid. PPARC and "The House" magazine.
Mwanza, D. (2001). Where theory meets practice: A case for an activity theory based methodology to guide computer system design. Proceedings of INTERACT 2001: 8th IFIP TC 13 Conference on Human-Computer Interaction, Tokyo, Japan.
Nandhakumar, J. and D. Avison (1999). "The fiction of methodological development: a field study of information systems development." Information Technology & People 12(2): 176-191.
Nardi, B. (1996). Context and Consciousness: Activity Theory and Human-Computer Interaction. Cambridge, MA, MIT Press.
Nerur, S., R. Mahapatra, et al. (2005). "Challenges of Migrating to Agile Methodologies." Communications of the ACM 48(5).
Orlikowski, W. J. (2002). "Knowing in practice: Enacting collective capability in distributed organizing." Organization Science 13(3): 249-273.
Parnas, D. (2006). "Agile methods and global software development (GSD): The wrong solution to an old but real problem." Communications of the ACM 49(10).
Pickering, A. (1995). The Mangle of Practice. Chicago, Chicago University Press.
Russo, N. and E. Stolterman (2000). "Exploring the assumptions underlying information systems methodologies: Their impact on past, present and future ISM research." Information Technology and People 13(4): 313-327.
Shrum, W., J. Genuth, et al. (2007). Structures of Scientific Collaboration. Cambridge, MA, MIT Press.
Smarr, L. (2004). Grids in Context. In: The Grid 2: Blueprint for a New Computing Infrastructure. I. Foster and C. Kesselman, Elsevier Inc.
Traweek, S. (1988). Beamtimes and Lifetimes: The World of High Energy Physics. Cambridge, MA, Harvard University Press.
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA, Harvard University Press.
Yalaho, A. (2006). A conceptual model of ICT-supported unified process of international outsourcing of software production. 10th IEEE International Enterprise Distributed Object Computing Conference Workshops (EDOCW'06), IEEE.
Ye, Y. (2005). Dimensions and forms of knowledge collaboration in software development. Proceedings of the 12th Asia-Pacific Software Engineering Conference (APSEC'05).
Zheng, Y., W. Venters, et al. (2007). "Distributed Development and Scaled Agility: Improvising a Grid for Particle Physics." Under review for International Journal.
ELECTRONIC DEMOCRACY SPACES1

1 Diploma Thesis Project, N.T.U.A., School of Architecture, 2004. Project link: http://www.dixtio.net

Ioannis Orfanos
PhD Candidate, School of Architecture, NTUA / MArch: Design Research Laboratory, AA / MPhil: Architecture - Space Planning, NTUA / Architect NTUA

Dimitris Papadopoulos
PhD Candidate, School of Architecture, NTUA / MPhil: Architecture - Space Planning, NTUA / MSc: Adaptive Architecture and Computation, Bartlett, UCL / Architect NTUA
Abstract
The spaces of electronic democracy are organized and function as two parallel networks that coexist in constant interaction: an urban network that refers to the city and a digital network that refers to cyberspace. Groups of people come together and subsequently create multiple combinations of collectivities and common activities. Copyright © 2004 NTUA
1. SHORT DESCRIPTION OF THE PROJECT
Spaces of electronic democracy function as generators for political actions. They are located next to spaces of public flows and become means of physical and digital convergence. The virtual space, as it is expressed and represented on the Internet, augments their tactile, physical entity, while the physical space supports the digital interaction by ensuring interpersonal contact, a necessary factor for meaningful democratic dialogue. Those two spaces, digital and physical, must coexist to make the electronic democracy spaces accessible to everyone. Of course, spaces like squares and markets, where public dialogue takes place, already exist in city life. There are also virtual equivalents, like chat groups on the Internet. In the first case, information is just a momentary expression of personal speculation; in the second, information is usually lost in the chaos of networked places. Rarely is there a political reaction to the subjects discussed. In the digital and physical space of the project, the information of the dialogues is recorded, published and provided to the public in order to be manipulated and redefined. The "vima" is transformed into a hyper-space, a spatial "hyper-text", accessible by anybody. The word "vima" comes from the ancient Greek word for the tribune in the ancient market from which anyone could express their political ideas. Digital and physical space are inseparably connected. Spaces of electronic democracy are organized and function as two parallel networks: an urban network that refers to the city and a digital network that refers to cyberspace. The digital network interconnects the localities of e-democracy spaces ("vima") and offers multiple combinations of cooperation and coexistence. The Internet augments the remote interconnection, importing the whole project at an undefined scale into cyberspace. The central website is the connection medium of all the digital nodes. The structure of each site refers to the scale of the collective interconnection but also to the small scale of each node independently. Each physical area has its digital "vima" simulation as a node of the network. The website hosts a virtual space that represents the totality of connections. It indicates the "representation of the structure of information" (Engeli, 2001). This space helps users to navigate by using a more direct and understandable interface through interaction. Each e-democracy space is extended to a parallel virtual space that hosts some of its basic characteristics, such as tele-communication, publication of subjects, activities, database and administration information.
2. CORE ELEMENTS OF THE PROJECT

2.1 ELECTRONIC DEMOCRACY
Networked, on-line communities seem to increase continuously and the demand for political participation becomes more and more urgent. Virtual communities, social forums, initiatives for social
action and alternative communication media arise. Electronic democracy should allow the active participation of citizens in common issues. It is a direct democracy, experienced as a democracy of dialogue. "As politics moves into the space of mass media, the right to direct public media access, the right to broadcast, is becoming increasingly important" (Sikiaridi & Vogelaar, 2003). "People will meet in cyberspace, as in the past in a Greek city at the agora, in order to decide about common matters. The possibility of an electronic vote by the whole body of citizens on the issues presently decided by the representative bodies is obvious. The only complications are: the procedure for the preparation of bills in these circumstances, and the question of bills introduction to vote" (Kaczmarczyk, 2003). Electronic democracy can be expressed in many forms. First of all, there is confusion between the meaning of electronic democracy and that of electronic governing: the latter refers to the interaction between citizens and government, the former to the interaction of people among themselves. "If the procedure is solely based on the pressing of a button then we don't have democracy but a deceleration of volunteers." (Stagliano, 1996) Direct interaction then loses its importance and is transformed into a dangerous multiplier of inertia. The participation of people in electronic voting refers only to the answering of questions imposed and decided by third persons. The aim is the transition from the present condition to hyperdemocracy, and not to cyberdemocracy, which has been advertised as an ideal situation. The danger of imposing predefined directions through the oversimplification of political processes into electronic voting is evident, and would result in a structure less democratic than today's governing system. The misuse of the meaning of electronic democracy by institutions and political organizations misleads the public and leads to erroneous conclusions about the possibilities of information technology. On the contrary, participation in common problems, in decision-making and in collective activities, through the interconnection of everyone with everyone, reflects the fundamental notions of hyperdemocracy.

2.2 COLLECTIVE INTELLIGENCE
In order to move towards advanced and mature forms of e-democracy, the development of collective intelligence and the creation of a process system are required. The main principles of collective intelligence according to Pierre Levy are:
- recreation of the social bond through knowledge exchange
- acknowledgement, acceptance and award of particularities
- more direct and participatory democracy
- invention of new forms of open collaboration for the solution of problems
- management of the software and cultural infrastructure of collective intelligence

2.3 GENERAL SCHEME
The project is structured around the following scheme: collective intelligence > intermediate rules-adaptive system > network structure > hybrid public spaces > urban network - digital network. Intermediate rules-adaptive system: "the function of an object as a mediator of collective intelligence presupposes always a bond, a game rule, a convention [...]. The object itself is not enough to cause the emergence of collective intelligence." (Levy, 1998) Proposed network structure: The network is a structure that embodies two dimensions: the local and the translocal. These two should be in equilibrium in order for the network to function properly.
Without nodes there is no network; there are only lines that transmit information. If there is no connection between the nodes, they do not constitute a network but just dispersed points in an indefinable environment. It becomes obvious that the connections between the objects tend to be more important than the objects themselves. The network reinforces and cultivates collective intelligence and its nodes are public spaces that absorb social dynamics. Hybrid public spaces: "Public media event spaces and public "hybrid" (media and urban) interfaces are proposed as an infrastructure for urban/regional planning, for developing communal visions of our worlds. These communication spaces for urban issues could develop into very important forums for the mediatised, regionalized and globalized politics of the future" (Sikiaridi & Vogelaar, 2003). Such a space is accessible to everyone and can be altered and modified but not appropriated. It is the result of numerous
heterogeneous actions and activities, not the outcome of a specific morphological program, as public forces over time shape and alter the initial proposal. Urban network - digital network: In the nodes, the digital and the urban networks come together and coexist in constant interaction. On-line communities by default limit the physical coexistence of their users. In contrast, we propose a multiple space where collective intelligence evolves in the inverse direction: digital connection supports and triggers physical contact. Spaces of electronic democracy are organized and function as two parallel networks: the urban, which refers to the city, and the digital, which refers to cyberspace.
3. STRUCTURE OF THE PROJECT

3.1 THE DIGITAL NETWORK
A digital network interconnects the localities of e-democracy spaces ("vima") and offers multiple combinations of cooperation and coexistence. The Internet augments the remote interconnection and imports the whole project at an undefined scale into cyberspace. The connection of these physical nodes is supported by a central website that can be modified according to the needs of the users. The site constitutes the minimum interface for the publication of various projects. The relation of each "vima" with the network has a transformable character. The virtual extension of the physical space depends on the users who manage it at any given time. Each node has its own digital and physical memory, which is not connected to only one particular social group. On the contrary, the human factor is the most important parameter for the standing of each "vima" on the Internet. In addition, the isotropic behavior of the system is obtained through its scale-free character.
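The scale-free character claimed here can be illustrated with a standard toy growth model (our illustration, not part of the project's software): new "vima" nodes attach preferentially to already well-connected nodes, which yields the heavy-tailed connectivity typical of scale-free networks.

```python
import random
from collections import Counter

def grow_vima_network(n_nodes: int, seed: int = 0) -> Counter:
    """Toy preferential-attachment growth: each new 'vima' links to one
    existing node, chosen with probability proportional to its degree."""
    random.seed(seed)
    edges = [(0, 1)]                 # start from two connected nodes
    degree = Counter({0: 1, 1: 1})
    for new in range(2, n_nodes):
        # listing every edge endpoint makes well-connected nodes more likely targets
        targets = [node for (a, b) in edges for node in (a, b)]
        target = random.choice(targets)
        edges.append((new, target))
        degree[new] += 1
        degree[target] += 1
    return degree

degrees = grow_vima_network(200)
print("most connected vima nodes:", degrees.most_common(3))
```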
3d visualization of digital network
"Online meeting places can simultaneously strengthen others, and even create new ones. And they are clearly creating a condition under which individuals position themselves less as members of discrete, well-bounded civic formations and more as intersection points of multiple, spatially diffuse, categorical communities." (Mitchell, 1999) "Images on a computer screen, as opposed to video or even "live" TV, are not mere "replays", like photography, but "remakes" of reality. In fact they are the ideal laboratory for ideas and design because they are like imagination itself, free and fluid, but, like the real world, objective, published, shareable." (Derrick de Kerckhove, 2001) "Interaction, between the users and the pieces of information they handle, may become information itself. It is the automatic translation of these interactions that is taken over in the fuzzy interfaces.
Accordingly, it is not only the interface graphic design that has to be conceived, but also the design of the structure of information itself." (Levy, 2004)

3.1.1 WEB EXTENSION OF E-DEMOCRACY SPACES
The central website is the connection medium of all digital nodes. The structure of each site refers to the scale of the collective interconnection but also to the small scale of each node independently. Each physical area has its virtual "vima" as a node of the network. The main features provided by the digital extension of a "vima" are:
- "Actions": asynchronous procedures
- "Real-time": synchronous procedures
- "Archive": the database of the "vima"
- "Administration": ways of managing a "vima"

3.1.2 VIRTUAL SPACE
The website hosts a virtual space that represents the totality of connections. Each e-democracy space is extended to a parallel virtual space that hosts some of its basic characteristics, such as tele-communication, publication of subjects, activities, etc. The first level of navigation in the virtual space concerns a form that simulates the digital network. It indicates the condition of the digital network over time, a relation that is translated into changes in the aggregate form of the network. In the future, the first level of navigation could be enhanced as the e-democracy spaces grow in number and complexity. In that case, the virtual space of the first navigation level could perform a topological mapping of "hot" areas of the included spaces, displaying areas of activity and interest.
information variation in virtual space of e-democracy spaces
The second level of navigation concerns the interior of the network's structure of information. Nodes and interconnections are presented so that users can navigate, explore and choose their areas of interest. "The interface, as an interaction tool between user and information, must act as a filter whose adjustable mesh eliminates the noise generated by information overload. The interface only retains the signs interpretable by its user. The use of meta-languages allows for information structuration independently of the languages involved. Accordingly, the design of interfaces allowing for information manipulation through meta-languages essentially consists in interaction design and in information design." (Levy, 2004) "Form and content of virtual space are generated from the history of different users". (Engeli, 2001)
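Before turning to the urban network, the digital side described in 3.1.1 and 3.1.2 can be summarised in a small illustrative record. The sketch below is a hypothetical reading of the feature list (field and node names are our own assumptions, not part of the project); it merely shows how each node's web extension - Actions, Real-time, Archive, Administration - and its links to other nodes through the central site might be represented.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VimaNode:
    """Illustrative record of one 'vima' and its web extension (hypothetical)."""
    name: str
    actions: List[str] = field(default_factory=list)        # asynchronous procedures
    real_time: List[str] = field(default_factory=list)      # synchronous procedures
    archive: Dict[str, str] = field(default_factory=dict)   # the node's database
    administration: List[str] = field(default_factory=list) # ways of managing the node
    links: List[str] = field(default_factory=list)          # other nodes reachable via the central site

# The central website acting as a registry of interconnected nodes (toy data)
central_site: Dict[str, VimaNode] = {
    "vima-01": VimaNode(
        name="vima-01",
        actions=["publish subject", "comment", "collect signatures"],
        real_time=["video meeting", "shared whiteboard"],
        archive={"2004-03-12": "minutes of neighbourhood assembly"},
        administration=["rotate moderators", "open membership"],
        links=["vima-02", "vima-03"],
    )
}
print(central_site["vima-01"].actions)
```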
3.2 THE URBAN NETWORK
As physical interaction is indispensable, e-democracy spaces also develop as a physical network (a building infrastructure) in the city. The proposed urban network consists of stable nodes and has a defined scale. These nodes are environments that merge the digital with the physical. "Vima" is the building unit and the physical expression of electronic democracy spaces. It is the means of merging physical and digital social activity. The structures are placed in free locations that are in direct or indirect relation to specific public spaces of the city. Locality plays an important role, as it is connected with the main topic of the whole gesture, that is, the problems that have to be solved. This "occasional" network can withstand total assaults (people defend their own tangible, physical space much more readily). The aggregations of "vima" are extensions of public collective life, while they absorb the dynamics of the neighborhood. The power of this network is augmented by the co-existence of the digital network. Groups of people come together and subsequently create multiple combinations of collectivities and common activities. These spaces are located in proximate areas in order to communicate more easily and directly. The urban network can respond to a potential need for physical interaction and the activation of digitally interconnected groups. "Connectivity works best in face-to-face interactions, simply because heads and bodies are still the best available information-processing devices." (Derrick de Kerckhove, 2001)
“Next to their "virtual communities" on the Internet, people might choose to experience community in their local neighborhoods. This use of public space in the surroundings of the home will not be indispensable; it will be a choice. People will voluntarily situate activities in their local surroundings if the neighborhood is attractive. And the neighborhood is more attractive if it is lively. Therefore activity will attract activity.” (Sikiaridi & Vogelaar, 2003)
urban network in city
"Real space will change in character, its very specific qualities as an environment for direct physical encounter and experience, as a generator of (intuitive) trust needed for social cohesion, becoming more pronounced." (Sikiaridi & Vogelaar, 2003)
4. "VIMA"
The specific morphological solution is only an indicative shell for the functional program and the design rationale. It aims, with the simplest possible solution, to act as a generator of the collective memory and intelligence of each local community, in order to transform them into actions. That is why the spaces of electronic democracy are situated close to public spaces. The outcome is not predictable, as the spaces cannot by themselves generate political actions but only host and encourage those already existing or those that now lie dormant awaiting activation.
"vima": e-democracy space unit
4.1 FIRST LEVEL - PUBLIC SPACE AS INTERFACE
The first level of the "vima" is open to public flow. It is a space accessible to everyone, which can be adapted but not privatized. The floor, transformable and adaptive to different scenarios, gives users the opportunity to interact with the physical space and to adjust it according to their needs and desires. The transformable-adaptive floor is modified according to a matrix, which allows the development of several scenarios (seating, workspace, etc.), as sketched below. As a contemporary version of the ancient Greek agora, it offers an infrastructure for developing dialogue. Therefore, the dimension of time is involved in the design procedure of the "vima".
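As an illustration of the scenario matrix mentioned above, the short sketch below (our own, with hypothetical grid dimensions and tile heights; the project text does not specify them) shows how presets such as "seating" or "workspace" could be encoded as height matrices that the central terminal sends to the floor's actuation platform described in 4.3.

```python
from typing import List

Matrix = List[List[int]]  # tile heights in centimetres for a grid of floor tiles

# Hypothetical 4x4 presets: 0 = flush floor, positive values = raised tiles
SCENARIOS = {
    "flat": [[0] * 4 for _ in range(4)],
    "seating": [          # raised rows forming stepped seating
        [0, 0, 0, 0],
        [20, 20, 20, 20],
        [40, 40, 40, 40],
        [60, 60, 60, 60],
    ],
    "workspace": [        # isolated raised tiles acting as desks
        [0, 75, 0, 75],
        [0, 0, 0, 0],
        [0, 75, 0, 75],
        [0, 0, 0, 0],
    ],
}

def apply_scenario(name: str) -> Matrix:
    """Return the target height matrix the central terminal would send to
    the electro-mechanical platform (here we only print it)."""
    target = SCENARIOS[name]
    for row in target:
        print(" ".join(f"{h:3d}" for h in row))
    return target

apply_scenario("seating")
```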
4.2 SECOND LEVEL - MATURE ACTION
The second level indicates the maturing process of the activities at the first level of the "vima". Here the provided infrastructure supports the conduct of specific activities, which concern the production of "messages" and social actions. The technical means are provided for the elaboration and subsequent publication of the discussed subjects. Alternative groups can manage the "vima" and support their initiatives.

4.3 TECHNOLOGICAL INFRASTRUCTURE
The physical spaces are composed of a combination of conventional structure and contemporary digital components. The structure is made of iron beams and metal grids. On the ground level, the transformable/adaptive floor is controlled by an electro-mechanical platform composed of worm-screw mechanisms. Hydraulic pistons were rejected as they would require a pressure tank that would increase the necessary area along with the energy consumption. The floor provides alternative scenarios for hosting different public actions. Its transformation is controlled from a central terminal on the ground floor, where a graphical interface simplifies the process. This space is equipped with flexible terminal computers that come down from the ceiling, which forms the floor of the upper level. On the upper, more enclosed level, each of the four walls is designed to allow more mature and specific actions.
Computers along with printers and plotters are used to publish the ideas that emerge from the combinatorial activities of the users. The design of the walls creates desks, libraries and closets in order to provide a more stable and friendly environment for more focused and long-term dialogue. Nested in the width of one wall there are also water facilities: a small kitchen and a WC.
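The scenario control of the adaptive floor described above can be suggested with a minimal, purely illustrative sketch. The grid size, heights and scenario names below are invented assumptions, not the actual control software of “vima”:

```python
import numpy as np

# Illustrative scenario controller for the adaptive floor (assumed logic only:
# actuator grid size, heights and scenario names are invented for this sketch).
GRID = (6, 6)                      # grid of worm-screw actuators under the floor

def scenario_heights(name):
    """Return a target height matrix (in metres) for a named floor scenario."""
    heights = np.zeros(GRID)
    if name == "seating":
        heights[2:4, :] = 0.45     # raise two rows to bench height
    elif name == "workspace":
        heights[::2, ::2] = 0.75   # raise a grid of desk-height platforms
    # "flat" or unknown names leave the floor level
    return heights

def drive_actuators(heights, step=0.05):
    """Yield successive actuator states moving gradually toward the target."""
    current = np.zeros(GRID)
    while not np.allclose(current, heights):
        current = current + np.clip(heights - current, -step, step)
        yield current

target = scenario_heights("seating")
for state in drive_actuators(target):
    pass                            # here each state would be sent to the platform
print("final heights (m):\n", state)
```

The point of the sketch is only the separation of concerns implied in the text: a graphical interface selects a scenario, the scenario is resolved into a target matrix, and the electro-mechanical platform moves toward it gradually.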
5. INTERMEDIATE SPACES 5.1 INTERMEDIATE SPACES & RESEARCH EVOLUTION The project e-Democracy Spaces functions as a mechanism that could trigger common activities and therefore generate the qualitative presuppositions of the multitude, which “designates an active social object, which acts on the basis of what the singularities share in common. The multitude is an internally different, multiple social object whose constitution and action is based not on identity or unity but on what it has in common” (Hardt & Negri, 2006). Nevertheless, the project cannot represent the fulfillment of electronic democracy per se. On the other hand, we think that e-democracy spaces are dynamic catalysts. “The production of the common is neither directed by some central point of command and intelligence nor is the result of a spontaneous harmony among individuals, but rather it emerges in the space between, in the social space of communication. The multitude is created in collaborative social interactions” (Hardt & Negri, 2006). Contact and interaction of individuals provide the potential for the creation of a network of intermediate spaces. The initial digital network is affected by a parasite network, which could develop in the intermediate spaces of the interconnected e-democracy spaces. This type of evolution refers to the true implementation of electronic democracy. Further evolution: from the hybrid network of e-democracy spaces to a network of intermediate hybrid spaces. “Intermediate space: this extraordinary space derived from the concept appears as a “gift” or “supplement”: a space where anything might happen; a place of experimentation; a place located on the margins.” (Tschumi, 1999) Today, through our research, we are trying to conceptually shape that intermediate space. The next step is to approach the transition from e-democracy spaces to intermediate spaces. The latter are less defined, more flexible, more spontaneous, more merged between the physical and the digital, and finally more unpredictable. Our research is evolving towards the understanding and conceptual definition of intermediate spaces, on the one hand as a means of thought and on the other as a virtual and constantly developing project in itself. Our activity in the academic environment deals with the issues of networks, information technology and social implementations, collective intelligence through interfaces, interaction design and adaptive environments. Software and models along with theoretical papers still address the key elements of electronic democracy spaces.
REFERENCES
Engeli, M. (2001). Bits and Spaces: Architecture and Computing for Physical, Digital, Hybrid Realms. 33 Projects by Architecture & CAAD, ETHZ. Basel-Boston-Berlin: Birkhäuser.
Hardt, M. & Negri, A. (2006). Multitude. London: Penguin Books.
Kaczmarczyk, A. (2003, October). Perspectives of Cyber-democracy. Retrieved from http://www.imm.org.pl/pdf.php?id=64
de Kerckhove, D. (2001). The Architecture of Intelligence. Basel-Boston-Berlin: Birkhäuser.
Levy, P. (1998). Becoming Virtual: Reality in the Digital Age. Perseus Books.
Levy, P. (2004, June). Open Networks of Collective Intelligence. Retrieved from http://ru3.org/ru3/project/concept/interface.htm
Mitchell, W. (1999). E-topia. MIT Press.
Stagliano, R. (1996, May). Qu’est-ce qu’une démocratie électronique? Le Monde Diplomatique. Retrieved from http://www.monde-diplomatique.fr/1996/05/STAGLIANO/2775
Sikiaridi, E. & Vogelaar, F. (2003, December). The use of space in the information/communication age - processing the unplannable. Infodrome 2000. Retrieved from http://www.infordrome.nl
Tschumi, B. (1999). Tschumi: Le Fresnoy. Monacelli Press.
INTRAVEIN - PARAMETRIC URBANISM
Pavlos Xanthopoulos, Architect NTUA, AA DRL MArch, 7 Voreou St., 16671, Athens, Greece, tel. +30 210 9236541, email: pavlos.xanthopoulos@gmail.com
Yiannis Orfanos, Architect NTUA, AA DRL MArch, NTUA MPhil, 28 Kleomenous St., 10676, Athens, Greece, tel. +30 210 7250530, email: orfanos80@gmail.com
Brian Dale, AA DRL MArch, 66B Princess May Road, London, N16 8DG, United Kingdom, tel. +44(0)7900440550, email: briancurtisdale@gmail.com
Gerard Thomas F. Joson, Architect, AA DRL MArch, 21 Mt. Fairweather St., Filinvest 1, Quezon City, Philippines 1100, tel. +63 918 9115924, email: gjo507@me.com
Abstract: The paper presents a form of networked urbanism distributed in East London, consisting of a system of infrastructural and leisure clusters of cellular units, combining as a connective tissue of bridges and islands, adapting and negotiating as both a physical and an informational network. Embedded with self-learning behavioural and responsive systems, it allows for an intelligent choreography of soft programmatic spaces to create new leisure experiences, negotiating the changing effects of time, weather, programmatic, and crowd-dynamical inputs, extending parametric processes to drive urban performance. Copyright © 2007 AADRL
1. INTRODUCTION Intravein is a proposal of a parametric urbanism for the region of Stratford, East London, UK, which explores the integration of adaptive spaces within a networked urban system while taking into account the dynamics of cultural, social, and economic flows. Carried out under the agenda of Parametric Urbanism at the Architectural Association’s Design Research Lab, the research seeks to explore new forms of urbanism through the criteria of parametric design using Stratford and the surrounding 2012 Olympic development area as a case study. The research begins with exploring parametric design techniques in experimentation with the embodied flow of information within the city, leading to a proposal of an urban system consisting of infrastructural and leisure cellular units which combine into a network of bridges and urban islands, adapting and negotiating both physical and informational networks. These cells are embedded with self-learning behavioural and responsive systems, allowing for an intelligent choreography of soft programmatic space to create new leisure experiences. By negotiating the changing effects of time, weather, programmatic, and crowd-dynamical inputs, the parametric processes are extended to drive the urban performance.
Aerial view of the proposed Stratford Bridge.
2. DESCRIPTION OF THESIS PROJECT KNFRK propose a cellular form of networked urbanism, one where the large scale is made up of a series of differentiated elements distributed through the fabric of the city, where the urban parameters that feed the design are constantly indexed and used to drive the performance of the system. Through a neurological connection of these discrete elements the network gains the potential to constantly adapt to as well as itself adjust the dynamic conditions of the city. This network consists of a system of infrastructural and leisure clusters of cellular units that combine as a connective tissue of bridges and islands negotiating the old and new Stratfords of east London as a physical and informational network. These clusters are embedded with self-learning behavioral and response systems, allowing for an intelligent choreography of soft programmatic spaces to create new leisure experiences that negotiate the index of changing effects of time, weather, programmatic, and crowd dynamical inputs.
3. NETWORKED BEHAVIORS 3.1 INTELLIGENT SPACES The system develops an intelligent infrastructure as a connective tissue to link the two poles of Stratford. Given the extensive existing and proposed commercial development in the site, a series of cells with leisure uses is proposed along a bridge and within islands in Stratford City. These cellular units are injected with certain behaviors, allowing one cell to adapt to different uses depending on the information indexed by the system, as when a cinema reorganizes into a series of karaoke rooms. The project is scenario-based: it cannot perform in only one way but rather must negotiate and reorganize according to the incoming information, taking on multi-state behavior. The bridge at its most basic functions as a piece of urban infrastructure connecting two sides, but this typology is expanded to offer differing pathways, opportunities, and situations to the users as they inhabit the ever-adjusting bridgescape. This logic is then extended into the surrounding neighborhood fabric to develop a physically discontinuous but informationally linked archipelago of behavior spaces.
Thus, we have to be looking for a mechanism that reflects the criteria of the system. A mechanism of this kind can be considered a control mechanism of the entire procedure, one that should be built around a managerial intelligence concerned with the management of spaces during and after the design of parametric spaces. Is it possible to generate spaces based on algorithmic design and the logic of DNA, so that the autonomy of forms and spaces would not depend on a predefined or even an autonomous kinetic cell performance system? On the other hand, what is the potential of a relationship (exclusive or not) with a system that implies the active participation of users who interact with and drive the system? This managerial intelligence could be found within society itself instead of being left to the discretion of managers-creators. In this hypothesis, a collective intelligence could reciprocate with the genetic material of architectural objects - abstract machines. “Once knowledge becomes the prime mover, an unknown social landscape unfolds before our eyes in which the rules of social interaction and the identities of the players are redefined” (Levy, 1997). In this social landscape humans participate in a dynamic, intelligence-based relationship with their environment (either artificial or natural). In a way we assume that the management of machinic architectural objects could be realized by collectives. Of course this scheme cannot guarantee a harmonic evolutionary model based on total control and guidance of a system. Rather, the development of individual responsibility to navigate and communicate inside our broader social eco-system is necessary, a “fuzzy aggregate, a synthesis of disparate elements, [which] is defined only by a degree of consistency that makes it possible to distinguish the disparate elements constituting the aggregate” (Deleuze and Guattari, 1987).
3.2 SYSTEM NETWORKING The proposal is structured around a system of networked elements with different scales and behaviours that are seeded and then grow within Stratford City as our case study. These elements concern activities of urban leisure such as entertainment, commerce, community activities and public infrastructure. The main urban intervention is translated through input parameters into spatial output: an evolution from the initial spatial unit of the single cell to the urban scale of Stratford (cell > cluster > island > urban network), while allowing for human-to-space interaction.
3.3 NETWORKED URBANISM Based on a fractal logic of evolution, the system consists of cells as spatial units, of clusters as aggregations of cells, of islands as syntheses of clusters, and finally of an urban archipelago of networked islands. All these components are interconnected through specific scalar, formal and informational relationships, but also through a shared negotiation of adaptivity and interaction, offering in this way a rhizomatic evolutionary perspective. Unpredictability becomes a basic factor of a networked urbanism that parametrically enhances the urban experience and creates an ecosystem where cells, clusters, and islands interact with their environment and users without a linear dialogue.
4. INFORMATIONAL EXPERIMENTS 4.1 CROWD DYNAMICS Crowd dynamics are used as a simulation technique in order to explore organizational behaviors that evolve in time. The generation of organizational models is based on various parameters (number of agents, cohesion, separation, alignment of direction and speed, type and number of targets) that take specific values according to time (day/week/season). Time and space criteria are merged, as time-based parameters feed the behavior of agents through attraction points. The attributes of these targets are differentiated following various desire properties, which are then translated into local spatial conditions and crowd desires. These studies suggest that the optimum organization of space varies in time, and that parametric tools can simulate and reorganize qualitative-quantitative spatial and temporal characteristics. A minimal agent-based sketch of this kind of simulation is given below.
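The following is a purely illustrative sketch of such an agent-based crowd simulation, not the project's actual parametric tool: agents steer by cohesion, separation, alignment and a time-dependent attraction point, and all weights and schedules below are invented placeholders.

```python
import numpy as np

# Illustrative agent-based crowd sketch (assumed parameters, not project data).
N, STEPS, DT = 50, 200, 0.1
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))          # agent positions
vel = rng.uniform(-1, 1, (N, 2))           # agent velocities

def target(t):
    # Hypothetical schedule: the attraction point shifts with time of day.
    return np.array([20.0, 20.0]) if t % 100 < 50 else np.array([80.0, 80.0])

for t in range(STEPS):
    centre = pos.mean(axis=0)
    cohesion = (centre - pos) * 0.01                             # drift toward the group
    diff = pos[:, None, :] - pos[None, :, :]                     # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + 1e-6
    separation = (diff / dist[..., None] ** 2).sum(axis=1) * 0.5 # push away from close neighbours
    alignment = (vel.mean(axis=0) - vel) * 0.05                  # match the average heading
    attraction = (target(t) - pos) * 0.02                        # pull toward the current target
    vel += (cohesion + separation + alignment + attraction) * DT
    vel = np.clip(vel, -3, 3)                                    # cap speed
    pos += vel * DT

print("mean distance to final target:",
      np.linalg.norm(pos - target(STEPS - 1), axis=1).mean())
```

Changing the target schedule or the weights per day, week or season is what turns such a simulation into a time-based organizational study of the kind described above.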
4.2 HUMAN INTERFACE Interfaces play an important role within the scheme. It is a man-machine interaction whose centre is activated inside the system, a mediator-linkage. Through my choices inside an informational structure, made using an interface, I participate in a collectivity that altogether informs a system with data. This data affects not only the performance of a space, but primarily the algorithm-code that controls the space. So, being aware of my input, I contribute to an intelligent collectivity that addresses the system as a collective input. Intelligent collectives, “similar to vernacular archetypes and prototypes that have been developed and adapted for different sites, environments and individual requirements” (Frazer, 1995), can interact in the complex context of contemporary urban life. In this way a decentralization of the system follows, implying a loss of control by the author. In reality, it “removes the architect from the design process: giving the environment the ability to design itself and to have autogenetic existence... is about a new kind of architecture without architects (and even without surrogate architects)” (Negroponte, 1975). The triptych intelligent collectivity-reorganized space-interface can be reduced to the primal triptych human-machine-interface. The idea of an interface as part of an urban network becomes important, constantly feeding the parametric behavior of the system with data. The digital extension is an integral part of urban activity and merges intensively with spatial urban experience. The code of the system becomes open, culling feedback from people's desires. This could be translated into becoming conscious of taking responsibility for our social relations in parallel to a deep embodiment of technology. A parametric space managed by collective intelligence implies a complex system and an abstract communication relationship between user and interface. “As clusters of tools, procedures, and metaphors, technologies configure a platform for discourse and ideology. Such a technical-discursive ensemble is modifiable through politics, yet it has political orientations built into its system” (Crandall, 2005). The potential of all these references is related to a collective organizational logic that accelerates social connections. Offering open procedures to users is not a risky decision; on the contrary, it is necessary in order to deal with specific existing conditions and to create new ones as well. “The architecture of the exodus will give rise to a nomadic cosmos that travels the universe of expanding signs; it will bring about endless metamorphoses of bodies; far from engendering a theater of representation, the architecture of the future will assemble rafts of icons to help us cross the seas of chaos.” (Levy, 1997)
Human interface: information distribution through the network of clusters to islands and finally to the urban archipelago as a whole.
4.3 USER INPUT INDEXING “Consumer becomes producer in an automated system.” (McLuhan) The connection between Max/MSP and Flash through a user interface allows users to adjust, ubiquitously, global parameters concerning physical space, and at the same time enhances urban experience as an information-visualization medium.
Personal involvement inside a networked urban field functions similarly to Internet online communities that share desires, but in this case they share desires about their space, their activities and therefore their actual urban experience and interaction. This attachment emerges between user and system, which exchange information by sharing the strata of space and time. The User Interface offers navigation through information that deals with space, activities and time. This structure allows human interaction to have an impact on physical experience, as user interface input is connected with the adjustment of physical space. Access to information is categorized in three levels of navigation and interaction. Firstly, the user receives information through location-based navigation. Secondly, the user participates in the system by inputting his or her desires. Thirdly, the user initiates the response of system behavior by triggering new activities.
4.4 INFORMATION DISTRIBUTION Inside a complex system networking different locations, scales and times, the organization of information is the first crucial subject. In order to connect information with space, the basic categories become the foundation of the system. It is not about reducing complexity, but about simplifying the understanding of complexity by setting fundamental driving forces. The selected information categories mix space parameters such as crowd behavior, activity intensity, or programmatic uses (entertainment, commerce, community) with relevant information parameters. The urban, island and cluster choreography of information distribution is based on the above information categories and is applied as a group of parameters of specific locators and at the same time as a group of parameters of the relationships between the locators themselves.
4.5 PROGRAMMATIC PARAMETERS The parametrization of program is realized through the codification of spatial programmatic uses with selected main parameters. The area of a space, the motion of users inside that space, the sound in that space, the use of that space, the time factors of that space and the potential of interacting with an interface become the main parameters. The next step is the analysis into sub-parameters in order to arrive at measurable spatial qualities. Area concerns the area size, its shape and proportions, and the open, semi-open or closed state of the space. Motion refers to the speed of movement through space and the directionality of the crowd. Sound concerns frequency, loudness, duration and rhythm of the environment. Use is categorized into entertainment, commerce, community-sport and infrastructure. Time is analyzed as the duration and time of day of the activity in the space and the time of responsiveness that is needed. Interface involvement is about the adjustability of that space and the potential of offering information to users through an interface.
4.6 FLUID MOVEMENTS AND PULSES Our visual indexing records past events in memory, but the current view is set to the present moment. By implementing a motion tracking system we are able to translate present movement into a visualization with an increased lifespan. As the camera records the motion within the frame, zones of change birth sets of particles in the system, which are encoded with behaviors of growth, atrophy, and a finite lifespan. The reactive system allows traces of recent events to be left behind, extending the moment of the past present. A minimal sketch of such a motion-trace system is given below.
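Purely as an illustration (the project used its own camera and visualization toolchain, which is not reproduced here), the sketch below shows the assumed basic logic: frame differencing detects zones of change, each change spawns particles, and particles grow, atrophy and expire, leaving temporary traces of recent movement.

```python
import numpy as np

# Illustrative motion-trace sketch (assumed logic, not the project's pipeline).
# Zones of change between frames spawn particles with growth, atrophy and a lifespan.
H, W = 120, 160
rng = np.random.default_rng(1)
particles = []                      # each particle: [y, x, size, age]
MAX_AGE, THRESHOLD = 30, 40

def step(prev_frame, frame):
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.where(diff > THRESHOLD)            # zones of change
    for y, x in zip(ys[::50], xs[::50]):           # subsample to limit particle count
        particles.append([y, x, 1.0, 0])           # birth of a trace particle
    for p in particles:
        p[2] += 0.5 if p[3] < MAX_AGE // 2 else -0.5   # grow, then atrophy
        p[3] += 1
    particles[:] = [p for p in particles if p[3] < MAX_AGE and p[2] > 0]

prev = rng.integers(0, 255, (H, W), dtype=np.uint8)
for _ in range(10):
    cur = rng.integers(0, 255, (H, W), dtype=np.uint8)   # stand-in for a camera frame
    step(prev, cur)
    prev = cur
print("active trace particles:", len(particles))
```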
The complex flows of bodies through a city carry embedded information within their patterns of movement. Fluid algorithms and particle physics are used here to simulate the dynamic conditions of the crowd. Seeing crowds as fluids allows us to conduct digital research into the interactions between certain user types, focusing on the laminar and turbulent flows generated by the system and on the crowd as a whole, adding a layer of understanding to the neighborhood conditions of the agent-based crowd dynamic simulations. Parametric adjustment of the conditions of the simulation allows time-variant conditions to be taken into account and the results to be studied relative to individual output. Here trajectories and points of attraction set up the simulation, where fluid bodies interact according to the implanted values.
4.7 DENSITY DRIVEN Fluid algorithms provide a system in which local change is reflected through the global whole. Each voxel [volume pixel] that makes up the digital fluid reads the parameters of its neighbors and adjusts its condition accordingly, generating gradient flows across the fluid container. These fluid motions and their adjustment to local conditions provide a way to translate characteristics of crowds into a system that allows the simulation to be adapted to drive other parameters of the design. Here the density values of the voxels are connected to corresponding cells with a discrete range of possible motion. Higher density values correspond to higher crowd densities and cause the cells to open in response, revealing the activation, speed, and trajectories of the users. A minimal sketch of this density-to-aperture mapping is given below.
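A minimal, purely illustrative sketch of the mapping just described, under simple assumptions: a 2-D density grid stands in for the voxel fluid, a diffusion step models the neighbour exchange, and a small discrete set of apertures stands in for the cells' range of possible motion. None of the numbers come from the project.

```python
import numpy as np

# Illustrative density-driven sketch (assumed logic, not the project's solver).
GRID = (8, 8)
APERTURES = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # discrete range of possible motion

def diffuse(density, rate=0.2):
    # Each voxel reads its four neighbours and adjusts toward their mean.
    padded = np.pad(density, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return density + rate * (neighbours - density)

def apertures_from(density):
    # Map normalized density onto the nearest discrete aperture state.
    norm = density / (density.max() + 1e-9)
    idx = np.clip(np.round(norm * (len(APERTURES) - 1)).astype(int),
                  0, len(APERTURES) - 1)
    return APERTURES[idx]

rng = np.random.default_rng(2)
density = rng.random(GRID)            # stand-in for indexed crowd densities
for _ in range(5):
    density = diffuse(density)
print(apertures_from(density))        # how far each corresponding cell would open
```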
5. SPACE (IN) FORMATION
5.1 NEGOTIATING BEHAVIOURS Multi-state behaviors emerge from a system in constant negotiation, adjusting between present and future conditions while progressing away from previous states. This dynamic equilibrium allows low-level changes in the system to have global effects, maintaining a partial memory of past events and allowing a conversation to occur between the states.
5.2 SPLINAL LOGIC The splinal logic of constructing and merging splines of beams together was developed by creating specific properties that a cellular formation should follow. Bundling is the integral method of creating the infrastructure of the beam, where depth is achieved through accumulation. When a beam needs to split into different directions, the diverting property of the splines allows the bundles of the beam to follow a specific trajectory. Because of the bundling ability, even if a beam splits into two parts, structural integrity can still be retained due to the redundant density of splines that the beam started with. Twisting in the system was used to create specific directional flexibility in the bundles. With the twisting property, a vertical bundle can transition to a horizontal bundle, which allows the beam to flex and move in a different way. Depending on structural requirements, a spline can blend into other splines when less support is needed in a certain area; thus, material optimization is achieved.
5.3 CELLULAR SYSTEM A more coherent cellular element was derived based on the splinal logic studies. Following a straight trajectory of lines, the 1CELL unit is formed, its horizontal elements merging with diagonal splines enclosing a space. With the directionality of the cellular design, a simple rule of addition was followed to produce the next three sets of cells. Starting with a single cell, a second one is added at a 45-degree angle on any of its four edges so as to merge the two by sharing similar splines. With the addition of a third and fourth cell, the 3CELL and 4CELL can be derived respectively. To create transitional spaces from enclosed to open, both the upper and lower splines of the enclosed cell are merged with the lower splines of an open cellular space. With the use of singular units, deformation can occur at a more local scale and ripple through neighboring elements. The flexibility of the system is seen in the ability of the cells to nest within larger cells, providing a system of adjustability and easy deployability. Given a specific large cell to fill, the four types of cells can easily be paired up to give the best possible formation for the given space.
5.4 PARAMETRIC RELATIONSHIPS Catia software was used to catalog the static and kinetic programmatic cells, which are then applied with localized programmatic criteria to form clusters nested into infrastructural cells. The semi-enclosed interior of the resulting space is defined by the placement of floor and canopy cells. Each cell type follows specific rules controlled by relationships that allow a range of adjustment to fit the cells to specific formations. The cells stay within a defined range of possible values of dimensional parameters, allowing them to share similar geometries but provide differentiated spatial outputs (a small sketch of such clamped parameter ranges is given below).
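The idea of dimensional parameters constrained to defined ranges can be suggested with a tiny sketch; the parameter names and ranges are invented assumptions, not values from the Catia model.

```python
from dataclasses import dataclass

# Illustrative parametric cell (hypothetical parameter names and ranges):
# each dimensional parameter is clamped to a defined range, so instances share a
# common geometric logic while producing differentiated spatial outputs.
RANGES = {"width": (2.0, 6.0), "height": (2.5, 4.0), "rotation": (0.0, 45.0)}

@dataclass
class ProgrammaticCell:
    width: float
    height: float
    rotation: float

    def __post_init__(self):
        for name, (lo, hi) in RANGES.items():
            value = getattr(self, name)
            setattr(self, name, min(max(value, lo), hi))   # clamp into the allowed range

# A requested value outside the range is pulled back to the nearest admissible one.
cell = ProgrammaticCell(width=8.0, height=3.0, rotation=10.0)
print(cell)   # width is clamped to 6.0
```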
5.5 CELLS There are four main types of programmatic cells:
_single programmatic cell: a simple structure that is able to define small semi-enclosed spaces within the larger array; it is embedded with kinetic behaviors that allow it to adjust the height and position of its structure to accommodate various configurations;
_double programmatic cell: defined as two subtypes, one semi-enclosed and able to adjust by vertical motion alone, while the second begins completely enclosed but is able to open to surrounding spaces with a combination of vertical movement and rotation;
_triple programmatic cell: also developed into two subtypes, the first semi-enclosed and able to adjust the upper portion of the cell, while the second is able to perform as a walkable ramped connection to an upper level or as a component in a varied landscape;
_quattro programmatic cell: entirely enclosed, these larger cells remain static in their structure to allow more spatially demanding uses to coexist in the clustering formations.
5.6 CANOPY CELLS Where the programmatic cells' states are determined by the cluster-to-cluster relationships within the network, the canopy cells respond to their immediate conditions, providing a less hierarchical system of response to changing conditions. By reading the states of their neighbors as well as the movements and requirements of the fluctuating crowds, single cells are able to adjust their position accordingly, constantly adjusting the conditions and experience of the space below. These variations transfer across the global level of the system by passing information at the local level, resulting in an ability to adjust ambient spatial conditions through the resulting patterns of porosity. The catalog of static and kinetic programmatic cells is applied according to localized criteria to form clusters that act as nodes within the system. The individual cells within these clusters respond to the positioning of their neighbors, allowing the cluster to take on varied formations. These clusters are then nested into the larger infrastructural cells, with the semi-enclosed interior of the resulting space defined by the placement of floor cells and the porosity of canopy cells.
5.7 ROBOTIC PROTOTYPES A series of manual and robotic prototypes were developed in order to explore the possible configurations of the kinetic cells in parallel with inverse kinetic digital animations. The digital and physical prototyping processes were explored such that the results were fed back into the corresponding development of the next prototype version. Initial prototypes attempted to replicate the initial behavior and range of movement of the animated typical programmatic double cell. In order to achieve this motion a simple linear pull mechanism was implemented to control the position of the structural arms of the cell, and with a short adjustment a far greater change was achieved. Adjusting the balance between stiffness and suppleness of the members allowed the prototypes to be fine-tuned to the desired behaviors. The series of prototypes was crucial for the development of the research, and in particular it is the means of moving from the state of an experimental conceptual proposal to the level of realization. The emergence of prototypes that reveal kinetic, interactive and structurally efficient qualities is a crucial step in exploring methods and techniques that, in the near future and with specialized studies, could lead to the technology required to realize such spaces.
The experimental prototypes lead to the conclusion that, from the architectural point of view, we are not far at all from being able to build such an intelligent infrastructure.
5.8 PROTOTYPE DOUBLE CELL V.1 - CELL V.2 This first manual prototype, 2CELLv.1, brought about some interesting developments in the design of the kinetic cells, as the performance of the model was now related to the properties of its material. Through this material computation a twisting motion was added to the repertoire of kneeling /
opening initially proposed through digital animation. The states of the cells are linked to the index of incoming information within the network. As the clusters communicate the relevant data of time, crowds, desires, program, and weather, they determine the current state of the individual cells.
Robotic prototype 2CELLv.2, actuated with an Arduino micro-controller via an interface developed in Max/MSP.
The supple robotic prototype 2CELLv.2 was developed as a way to link the digital indexing of data to the physical manifestation of the cell state. Attraction points within the network are adjusted to the varying activation scenarios, and the localized conditions are then fed to an Arduino micro-controller via an interface developed in Max/MSP. Density within the digital diagram is translated into position values for the servo motors that take over control of the linear pull, allowing precise control of, and a precise relationship between, the pair of double cells (a toy sketch of such a density-to-servo mapping is given at the end of this section). This state data is then fed back into the system through communication with the other clusters within the network, and at an interface level to distant users within the urban network.
5.9 PROTOTYPE CANOPY V.1 This prototype explored the possible configurations of a single canopy cell and its relation to the overall organization of the prototype array. Using similar materials to the previous cellular prototypes, the motions of the canopy cell are explored in a physical manner in order to bring out certain tendencies in the system and to inform further development of the cellular type. Each cell is able to act independently, but the position taken is relative to the neighboring cells, working with the same principles that were used to initially study the fluid movement of the canopy array. These variations transfer across the global level of the system by passing information at the local level.
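The density-to-servo mapping mentioned in 5.8 can be illustrated with a minimal sketch. The actual project drove the servos from Max/MSP through an Arduino, which is not reproduced here; the numeric mapping and smoothing below are assumptions for illustration only.

```python
# Illustrative mapping from indexed crowd density to servo positions (sketch only).
def density_to_servo_angle(density, min_angle=10, max_angle=170):
    """Map a normalized density value (0..1) to a servo angle in degrees."""
    density = max(0.0, min(1.0, density))
    return min_angle + density * (max_angle - min_angle)

def smooth(previous, target, factor=0.2):
    """Ease toward the target angle so the cell opens and kneels gradually."""
    return previous + factor * (target - previous)

angle = 10.0
for density in [0.1, 0.4, 0.8, 0.6]:        # sample readings from the digital diagram
    target = density_to_servo_angle(density)
    angle = smooth(angle, target)
    print(f"density {density:.1f} -> servo command {angle:.1f} deg")
```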
6. DISTRIBUTED RESPONSIVE LEISURE 6.1 STRATFORD East London is preparing for an explosion of development in coming years, the time frame of which has been accelerated by London’s successful bid to host the 2012 Olympic Games, whose site sits adjacent to the proposal. Stratford Town Center is a major transportation hub for east London,
currently providing connections between northeast England and London, with trains, buses, and the Underground converging at Stratford Station, and London City Airport a short DLR ride away. This is about to be expanded as the new International Train Station is constructed, extending Stratford's reach directly to mainland Europe.
6.2 URBAN NETWORK OF NEGOTIATION AND LEISURE The addition of an urban leisure park at the point of connection between the two Stratfords would augment the existing and proposed development, offering an infrastructure that adapts intelligently to constantly changing scenarios. This intelligent infrastructure is set up as a parametric urban filter that creates unpredictable connections and follows economic, social and cultural dynamics. A bridge is the structure that continuously responds to these changing trends. It adapts, creates space and even takes part in the negotiation between the two Stratfords. Leisure activities are proposed as an urban enhancement of the centralized commerce poles of old and new Stratford. The result of this parametric urbanism is a network that spreads through old and new Stratford. Instead of a centralized mega-intervention, a flexible urban network is articulated around the central station of Stratford, connecting the two local poles. Urban islands are proposed in strategically important locations in the area, following the logic of an urban networking system that is realized as both a circulatory and a digital network.
6.3 CIRCULATION NETWORK The proposed urban network of Stratford is first of all a circulation network. Its islands are located in proximity to each other in order to allow movement between them. The old area of Stratford (east), the main bridge area, the new Stratford City (west) and the new Olympic site are connected in a common urban environment. The logic of dispersed urban intervention enhances the in-between places as well, and circulation becomes vital, allowing users a degree of freedom in contrast to a centralized urban masterplan. In addition, the placement of the network's nodes takes into consideration the existing and future main circulation directions, creating a "bridge" between the existing experience of space and the proposed adaptive urbanism.
6.4 DIGITAL NETWORK On the other hand, the digital network that is developed concerns translocality more than locality: the way the physical world is connected to the system in terms of information. The urban islands are interconnected, and the flow of information between them is based on specific information categories derived from crowd behavior, entertainment, commerce and community activities, which translate the existing urban activities to the network's nodes. Apart from the informational characterization of the urban islands and the information distributed between them, the digital network also extends to ubiquitous human interfaces that provide users with access to, and interaction with, the system's networking. "Interaction, between the users and the pieces of information they handle, may become information itself. It is the automatic translation of these interactions that is taken over in the fuzzy interfaces. Accordingly, it is not only the interface graphic design that has to be conceived, but also the design of the structure of information itself".
6.5 LOCALIZED PROGRAMMATIC HIERARCHY The connection of the global information parameters with the various areas involves the specific local conditions that differentiate each zone within the network. The daily urban experience within the network is not a homogeneous one. Therefore the formation of the islands takes the local conditions into consideration and becomes the extension and augmentation of the existing urban environment. The islands of the proposal are placed in old Stratford (Great Eastern rd, Broadway triangle, High St., West Ham lane, Public Gardens, Vernon rd), in new Stratford (South Station square, Stratford City
park, Retail square, Central spine, Olympic village, West Leyton public space, towards and by the Olympic stadium) and on the bridge that connects the two Stratfords (Stratford bridge). An island is also placed on Maryland St. in the neighboring area of Leyton, suggesting the potential of a further expansion of the network into greater London. Each island feeds and interfaces differently with the digital networks, producing its own localized programmatic hierarchy of entertainment, commerce and community activities.
6.6 STRATFORD BRIDGE The Stratford bridge island is the interface of the social, economic and cultural forces of old and new Stratford. Additionally, as an architectural proposal it combines all the spatial qualities that can be found across the urban network. Apart from being a physical connection that facilitates the movement of vast numbers of people, it is a leisure park that hosts entertainment, soft commerce and community activities on a larger scale than the other islands of the proposed network. It also extends organically to the nearby bus and train stations. The programmatic cells are the spatial units that cluster together to host human-scale activities. The clusters are then attached to larger infrastructural bridge cells, the synthesis of which forms the urban island.
Elevational view of the bridge that connects new with old Stratford.
6.7 CLUSTER NETWORKING The information distribution is gradated across different spatial scales. Each island is itself another network of interconnected clusters of cells. The clusters are linked to each other according to basic information categories (intensity of crowd activity, crowd behavior, programmatic activities), based on interrelations that take local and spatial criteria into account. Proximity, location, programmatic tendencies, scale and proportions are properties that differentiate clusters from each other and build the networking that drives the performance of the total system further, down to the human-experience scale.
6.8 SCENARIOS The performance of the system as a consistent multi-scalar network is simulated for three different scenarios (carnival, weekend evening and morning rush), while networking triggers bottom-up and top-down reactions at the same time. Space behaves in a different way in response to each scenario. Information flow brings adaptation at the urban, island, cluster and cell scales. The mechanism involves a continuous interaction between networked spatial organizations: a network of cells generating clusters, a network of clusters generating islands, a network of islands generating the
urban proposal. Clusters have the ability to adjust to various scenarios, as they are formed by programmatic kinetic cells and responsive canopies. The proper response to each scenario is choreographed according to the input parameters of the clusters. Through their movements, cells allow the change in the spatial environment that facilitates the development of variable scenarios. Three scenarios are described below; a toy scenario table is sketched after the list.
__Weekend evening at the main clusters of Stratford bridge: the majority of the programmatic cells are activated, creating a landscape for urban leisure. The human interface input is high, as users have participated in the creation of their spaces. Crowd activity is intensive, and entertainment together with community activities prevails, while commerce activity is also present on a smaller scale. The self-learning of the system contributes, as weekend activities follow repeating patterns.
__Morning rush at the main clusters of Stratford bridge: most of the programmatic cells are in their rest state; few are activated, in order to provide a quick morning passage for the crowd. The human interface input is extremely low. Crowd movement is high. The need for commerce activity is present, while entertainment and community activities are absent. The self-learning of the system partly contributes, recognizing tendencies in the flux of crowds.
__Carnival festival at the main clusters of Stratford bridge: nearly every programmatic cell is activated, creating the infrastructure for a large-scale event. The human interface input is partly activated, as some users trigger discrete local events. Crowd activity and movement are high, and mostly entertainment with some community activities prevails, while commerce activity is absent. The self-learning of the system also contributes, as it responds to the need for large, temporary events.
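Purely for illustration, the three scenarios can be read as a small lookup table driving cell activation; the numeric values below are invented placeholders, not indexed data from the project.

```python
# Illustrative scenario table (invented values): each scenario maps indexed inputs
# to a target activation level for the programmatic cells of the bridge clusters.
SCENARIOS = {
    "weekend_evening": {"crowd_activity": 0.8, "interface_input": 0.9, "cells_active": 0.75},
    "morning_rush":    {"crowd_activity": 0.9, "interface_input": 0.1, "cells_active": 0.15},
    "carnival":        {"crowd_activity": 1.0, "interface_input": 0.4, "cells_active": 0.95},
}

def choreograph(scenario, n_cells=120):
    """Return how many cells a cluster would activate for a given scenario."""
    params = SCENARIOS[scenario]
    return round(params["cells_active"] * n_cells)

for name in SCENARIOS:
    print(name, "->", choreograph(name), "of 120 cells activated")
```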
Space (in)formation in different scenarios.
Each island within the urban network is linked to the others, allowing certain activities to be choreographed among specific groups of islands. Depending on location and input parameters, an island or series of islands can simultaneously provide the same desired activities for a given community.
6.9 STRATFORD INTERFACE The understanding of the urban proposal is communicated through the human interface, which gives users access to information about the current state of the network across different scales and, in parallel, lets them input their own information (desires to participate and to initiate). The networked system becomes open and flexible, accepting the users' active involvement as further parameters. The user has an overview of scales, spaces and time. The urban network behaves like a living organism that shares different information and experiences with users in time and space.
The synopsis of the proposal appears at the interface, as each urban island, cluster and programmatic cell provides the proper information. Navigation through different days of the week and times of the day reveals the flexibility of space and its potential to change. Additionally, information about other users' initiations and preferences could form collective activities and enhance the performance of a networked parametric urbanism.
7. CONCLUSION At the intersection of the design systems - spatial, informational, and kinetic - responsive spaces converge as a cultural intervention that extends parametric processes to drive the dynamic performance of the urban. This is orchestrated as a system that allows person-to-thing interaction and thing-to-thing communication while enhancing the experience of human-to-human interaction.
REFERENCES
Crandall, J. (2005). Operational Media. CTheory: Theory, Technology, Culture.
Deleuze, G. and Guattari, F. (1987). A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.
Frazer, J. (1995). An Evolutionary Architecture. London: AA Press.
Levy, P. (1997). Collective Intelligence: Man's Emerging World in Cyberspace. New York: Perseus Books.
Negroponte, N. (1975). Soft Architecture Machines. MIT Press.
VIRTUALITY IN IMAGES – »GOD IS IN THE DETAILS.« OR: THE FILING BOX ANSWERS.
1. ABSTRACT The title of this talk refers to two eminent scholars. The first quote stems from Aby Warburg [1],
who was born in 1866 and died in 1929, the son of the owner of a bank, and who sold his status and his rights as a firstborn to his brother, as Esau once did. The brother was called Max and not Jakob, and the price was not a plate of lentils but every book he, Aby, wanted. The deal became dearer than Max thought when he took over the Warburg Bank and promised to buy the desired literature. Aby Warburg became one of the first and one of the most famous Kulturwissenschaftler, scholars in the field of cultural studies, and he built the Kulturwissenschaftliche Bibliothek Warburg in Hamburg, whose stock emigrated in 1933 to the Warburg Institute in London. Warburg deserves to be called the inventor of iconology. Art history defines it as the »description and classification of image content aiming to understand the significance of this content.« [2] In a way this is the key problem of all scientific use of images. My other patron is Niklas Luhmann [3],
one of the greatest sons of the town of Lüneburg, born 1927, died 1998, sociologist and constructor of modern systems theory, who started his »filing box because of the simple consideration that« his »memory was bad«, already at the age of 25 [4]. He communicated with his filing box; it was an eminent
[1] commons.wikimedia.org/wiki/Image:Aby_Warburg.jpg
[2] Warburg, Aby: Nachwort des Herausgebers. In: Wuttke, Dieter (Hrsg.): Ausgewählte Schriften und Würdigungen. Baden-Baden, 1980, S. 601 ff. »Beschreibung und Klassifikation von Bildinhalten mit dem Ziel, die Bedeutung der Inhalte zu verstehen«. Transl. MW.
[3] theyazzzz.blogspot.com/2008/03/down-with-him.html, a site that contains a lot of nonsense about Luhmann (»Down with him!«), but also this picture in comparatively decent resolution.
[4] Ebd., S. 33.
source of his productivity, and he treated it so well that in the end it answered him, surprised him and gave back what he had never put into it. In other words: the concern is that of media of knowledge. On the shoulders of these two giants stands my humble contribution, the attempt to combine the sharp-eyed gaze at image details with a comfortable filing box by means of an implementation in software. I will tell you about the methodology and its outcome, about experiences and about further plans.
2. GOD IS IN THE DETAILS – A VIRTUAL SHARP-EYED GAZE Images have a poor scientific reputation. They count for little when exact conclusions have to be drawn. Since modern times, precise thinking has been done with text, because images are governed by the category of similarity, which has been, according to Foucault, since the beginning of the 17th century »not any more a sort of knowledge but rather the occasion of mistake and aberration, the danger one gets into when not thoroughly testing the badly illuminated place of confusion.« [5] In spite of this, great thinkers after the 17th century have also thought in images. My favourite example is the one by Charles Darwin [6],
who on December 7th, 1856 jotted into his notebook [7] »I think«, to express what came to his mind by means of an image, with a diagram:
It has taken until now to understand why thinking in images is not offensive but fruitful.
[5] Foucault, M. (1991). Die Ordnung der Dinge. Suhrkamp, Frankfurt am Main. S. 83.
[6] http://www.darwin.ie/wordpress/wp-content/uploads/2008/02/darwin.jpg
[7] http://darwin-online.org.uk/content/frameset?viewtype=side&itemID=CUL-DAR121.-&pageseq=38 from the »Complete Work of Charles Darwin Online«.
Gottfried Boehm wrote: »the notion ›image‹ concerns a different type of thinking, one that is capable of clarifying the long underestimated cognitive techniques that do not use verbal representations.« [8] »It is an ›iconic difference‹ with which significance can be expressed without reverting to linguistic models, e.g. syntax or rhetorical figures. Because the intelligence of images lies in their respective visual order.« I'd like to point to this extra-linguistic aspect of images, to the fact that images and their interrelations are not totally exhausted by speech and for that very reason could not be explained and described verbally without leaving a residuum. I will not stop at principles but will show you very concretely the digital media with which this ›iconic difference‹ is turned into media technology. In very much the same way as language uses words and notions to form reasonable propositions, thinking with and in images is done using image atoms, signifying entities. Contrary to language, it is far from clear what these are. [9] Language has brought about the dictionary. Image atoms have to be discovered, negotiated, described anew every time. Aby Warburg was deeply convinced that the cultural-historical significance of images lies exactly in these image atoms and their interrelations. On the 25th of November 1925 he found the following words for this: »Der liebe Gott steckt im Detail.« [10] God is in the details.
With the full seriousness of scientific endeavour he stated: »Wir suchen unsere Ignoranz auf und schlagen die, wo wir sie finden.« [11] We search out our ignorance and beat it where we find it.
I will recall one of these endeavours. Warburg's methodology of cultural-historical analysis of image motifs, his iconography and iconology, as we would call it nowadays, traced the path of the tradition of image contents from antiquity up to the present. A famous example of this technique of thorough tracing is written down in his paper about the month frescoes in the Palazzo Schifanoia in Ferrara. It is a veritable riddle, which he solved with the meticulous exactness of an investigator. The question is: who is that man?
[8] Boehm, Gottfried: Iconic Turn. Ein Brief. In: Belting, Hans (Hrsg.): Bilderfragen. München: Fink Verlag, 2007, S. 27-36, S. 27. Translation by the author.
[9] Siehe dazu: Warnke, Martin: Bilder und Worte. In: Ernst, Wolfgang; Heidenreich, Stefan; Holl, Ute (Hrsg.): Suchbilder. Berlin: Kulturverlag Kadmos, 2003, S. 57-60.
[10] Faksimiliert in Warburg, Aby: Nachwort des Herausgebers. In: Wuttke, Dieter (Hrsg.): Ausgewählte Schriften und Würdigungen. Baden-Baden, 1980, S. 619.
[11] Wie FN 12.
The answer is: it is a certain Perseus, who changed his appearance significantly, which, however, could not confuse Warburgian serendipity. Here comes the chain of evidence. The most important information comes from the context of the one sought: astrology. The figure is part of the month of March, so the image tradition of the zodiacal sign Aries helped a lot to trace down the personnel under suspicion: Perseus from the Greek sky of fixed stars,
who holds Gorgo's head in one hand and the harpe, the scimitar, in the other. He becomes the Egyptian first decan of Aries, the one who rules the first ten days (from deka, ten):
He carries an Egyptian double axe.
He mutates more and more into an Arabic decapitator and hangman, shown immediately after having done his duty. We recollect: he cut off Gorgo's head.
A Spanish lapidarium showed him that way; the double axe is still there, black skin is added:
Considering an Indian tradition, which reads: »The Indians say that in this decan a black man rises with red eyes, of big figure, strong courage and great attitude; he wears a big white costume he girdled with a rope […]« [12], we get the Ethiopian hangman, using his rope as a belt, showing this service weapon to everybody:
Since all the relevant literature was known to the principal of the Palazzo Schifanoia, we have a complete chain of evidence. The person is convicted of being Perseus. Aby Warburg also argued with images in his Mnemosyne Atlas, and this is indeed utterly necessary to be able to follow the chain of reasoning in his paper. He used arrangements of images, photographs pinned to black canvases, to relate images from distant times and places.
[12] Boll, Franz: Sphaera. Leipzig 1903. S. 497.
Wordless, image next to image, his iconology begins to blossom. Horst Bredekamp and Michael Diers stressed that the significance of images in a process of civilisation lies somewhere between magic and logos. [13] Michaud calls it »a mute language, freed from the constraints of discourse.« [14] Here we have the Warburg library, in the foreground a row of such plates of the Mnemosyne Atlas:
The frames served as means and media of reasoning and of presentation. They were relatively easy to carry about but set limits on the arbitrary recombination of their contents. Warburg wrote in his scientific diary [15]: »The re-grouping of the photo-plates is tedious«, »mass displacement within the photo plates«, »Pushing around of frames with Freund«, »Difficulty: the placement of Duccio«, »The arrangement of plates in the hall causes unforeseen inner difficulties«, »Begun to cut out all the gods«. It must have been extremely difficult to relate image details to one another, despite the fact that exactly this was of such eminent importance to Warburg. Peter van Huisstede reports chains of argumentation like filaments consisting of 15 or 20 images. Whether Warburg actually used a ball of woollen thread is unknown to me, but I am convinced he would have gone a way similar to ours. I will show you now.
3. HYPERIMAGE: WORKING CLOSE TO THE DIGITAL IMAGE Our software is a digital filing box for image details. References between these details can be coded without verbalisation. Its name is HyperImage; it is a collaboration between the Humboldt University in Berlin and the Leuphana University of Lüneburg, funded by the German ministry for research and education. We are in our third and last year of operation, four people work on it, and I am the head of the Lüneburg part. Images are uploaded from repositories to the editor, which is developed in Berlin. There these images are put into groups, metadata are added, and image details are linked to one another. With light tables, arrangements of the Warburg frame type can be made, albeit a bit more comfortably. The images can be referred to in multiple contexts and interrelationships at the same time, which Warburg definitely would have liked. And because everything is digital, image indexes and concordances are compiled automatically. All this results in Flash-based web pages, to be put into operation without further ado on any conventional server or even from a local drive.
[13] Ebd., S. 9.
[14] Michaud, P.-A. Trivium 1-2008 »Zwischenreich«. Abs. 30.
[15] Zitiert nach Huisstede (1955), S. 147 ff.
Pilot users test our application all the time for their own research purposes. A stable final version will be available at the end of the year. I now show some of the results. Example 1: Giotto's reception of antiquity. Very much in a Warburgian attitude, a research group around Peter Seiler at the department of art history at the Humboldt University investigates motif diffusion from antiquity to the Italian Renaissance. For example, in a fresco by Giotto from the Scrovegni chapel in Padua, typical gestures could be traced back to antique sarcophagi. So have a look at St. Joachim meeting the shepherds:
Selecting his figure with the mouse leads to the antique model:
Using the light table is very much as if Warburg were making one of his frames:
There are lots of pictorial references from the Renaissance to antiquity in the light table »fight motif« around the Massacre of the Innocents by Pisano, Giotto and Duccio:
This digital version of the Mnemosyne Atlas allows for a placement of content into multiple contexts, zooming and clipping. No pushing around of frames, no cutting out of gods any more.
The references are indexed; you can backtrack to the origins of a link. But the wisdom of language has also been exploited. A text index certainly is a must. For example, look at all the occurrences of »arm«:
Example 2: French Revolution graphics. A group of art historians around Hubertus Kohle at the Ludwig-Maximilians-Universität Munich investigates the diffusion and transformation of motifs in the caricatures and graphics produced daily during the time immediately after the French Revolution. Almost unrecognizably, a semantic shift occurs: the tree of liberty gradually transforms into a cut-off head on a lance, day by day:
… Starting at the end, the image index shows not progression but genesis:
But how do these results come into action? I will now show you how to use the editor and the reader software.
Let's begin with the editor. It is programmed in Java, as platform-independent open-source software, to be finished at the end of the year. After authentication, images are uploaded from the local drive or from any repository with the proper interface. The material is grouped, metadata are added, images are decomposed into image atoms, these are linked together, and the whole is exported as an XML file. This is how it looks to mark a virtual region as the area of observation and investigation and to link it to another region as the later link target:
The overall structure roughly looks like this:
External repositories have to have a WSDL interface to be connected. Such an interface could be implemented in one or two weeks' programming effort. The editor produces an XML file that is interpreted by a Flash-based reader. A Flash plugin for the browser suffices; the material can be delivered by any web server or locally from a disc (a simplified, invented sketch of such an export is given at the end of this section). The Warburg example from the Palazzo Schifanoia looks like this in the reader:
In a prepared light table an arrangement of images looks as follows:
This is a technical realisation of what Johann Gottfried von Herder put like this: »All notions hang on one another in the chain of truth; the tiniest may not only serve the biggest, but could itself become indispensable.« [16] Or: God is in the details.
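Before moving on to the filing box, the kind of XML export the editor produces and the reader interprets can be suggested with a small, purely hypothetical sketch. The element and attribute names below are invented for illustration and are not the actual HyperImage schema; the Python standard library is used only to build and print the structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified structure (invented element and attribute names):
# an image with a marked region that links to a region of another image,
# in the spirit of the editor's export and the reader's navigation.
project = ET.Element("project")

giotto = ET.SubElement(project, "image", id="img1",
                       title="Giotto, Joachim meeting the shepherds")
region = ET.SubElement(giotto, "region", id="img1-r1", shape="polygon",
                       coords="0.42,0.31 0.55,0.31 0.55,0.60 0.42,0.60")
ET.SubElement(region, "link", target="img2-r1",
              note="gesture traced to an antique sarcophagus")

sarcophagus = ET.SubElement(project, "image", id="img2",
                            title="Antique sarcophagus relief")
ET.SubElement(sarcophagus, "region", id="img2-r1", shape="rect",
              coords="0.10,0.20 0.30,0.55")

print(ET.tostring(project, encoding="unicode"))
```

The essential point is that the reference between two image details is stored as data, without verbalisation, so indexes and concordances can be compiled from it automatically.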
4. WHEN THE FILING BOX ANSWERS We arrive at the last section of the presentation, at the filing box that, if taken care of properly, answers its operator. Luhmann's biggest problem with the box obviously was to correctly re-place notes in their proper location. At least this is something computer technology has freed us from.
[16] Zitiert nach Warburg: Ausgewählte Schriften und Würdigungen, S. 604.
Storage and retrieval are the easiest duties for computers. But how does the filing box become productive intellectually? This stems from the same source as the difficulty of putting notes back: from complexity. The need for a filing box always evolves from the problem of complexity, of having much more than can be surveyed. Computers help to govern the masses, but to select the relevant, a human brain is necessary. What, then, is a good filing box? Does its quality come from the wisdom of the individual notes? Somebody trained in ontology might think so. Luhmann, as so often, finds a totally different approach: »Contrary to the structure of updatable options of references, the importance of the concretely noted is small. […] The communication with the filing box only becomes fruitful at a higher level of generalisation, at the level of communicatively relating the relations.« [17] To put it differently: by cross-referencing and the meshing that follows from it. The net is the filing box. By cross-referencing, the spider-like net system [18] of entries emerges. "Every note is an element that gains its qualities only by virtue of the net of reference and back reference in the system." [19] It is not just the chain, as Herder thought; in postmodern times it is the net of atoms of knowledge backing one another up mutually. I here and now risk the claim that a good filing box is a scale-free network in the Barabási sense: growing by preferential attachment. But does our filing box called HyperImage actually give answers to its operators? Some years of use will certainly be necessary before surprising results occur, as Luhmann states that his »filing box on occasions provides for combinatorial possibilities that never have been planned, thought of, prepared for in advance.« [20] I asked our pilot users, and the most exciting answer came from the biologists and their biodiversity project at the Museum für Naturkunde in Berlin. Prof. Hannelore Hoch and her group search for the inner workings of the evolution of species and can report that the use of images, in this case e.g. tomography data, maps and localisations, brought about insights that would not have been possible before and without this kind of media. CT data like these were impossible to publish in a paper magazine:
In addition, new insights concerning the evolutionary status of species could be gained by locating them on maps and backtracking the findings to their origins, that is, by using the
17 Luhmann, Niklas. "Kommunikation mit Zettelkästen – Ein Erfahrungsbericht." In Öffentliche Meinung und sozialer Wandel, edited by Horst Baier, Hans Mathias Kepplinger, and Kurt Reumann, 222–228. Opladen: Westdeutscher Verlag, 1981, p. 227.
18 Luhmann, N. (2000) Biographie, Attitüden, Zettelkasten. In Niklas Luhmann – Short Cuts. P. Gente, H. Paris, and M. Weinmann, eds. Zweitausendeins, Frankfurt/Main, p. 26.
19 Luhmann (1981), p. 225.
20 Op. cit., p. 226.
image index. In this way it became clear that one species always occurs »sympatrically«, that is, together with another one. This suggests evolutionary dependencies that were not known beforehand.
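To make the claim about scale-free growth concrete, here is a toy simulation of preferential attachment in the Barabási–Albert sense. It is an illustrative sketch only, not a description of how HyperImage actually stores its references: each new note links to an existing note with probability proportional to the number of links that note already has, so a few strongly connected "hubs" emerge.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Toy Barabasi-Albert growth: every new node attaches to one existing node,
    // chosen with probability proportional to that node's current degree.
    public class PreferentialAttachment {
        public static void main(String[] args) {
            int nodes = 1000;
            Random rnd = new Random(42);
            // Every edge endpoint is stored once, so a node's frequency in this
            // list equals its degree; sampling uniformly from it is preferential.
            List<Integer> endpoints = new ArrayList<>();
            endpoints.add(0);   // seed network: node 0 and node 1 linked
            endpoints.add(1);
            int[] degree = new int[nodes];
            degree[0] = 1;
            degree[1] = 1;
            for (int v = 2; v < nodes; v++) {
                int target = endpoints.get(rnd.nextInt(endpoints.size()));
                degree[v]++;
                degree[target]++;
                endpoints.add(v);
                endpoints.add(target);
            }
            int max = 0;
            for (int d : degree) max = Math.max(max, d);
            System.out.println("highest degree (the emerging 'hub'): " + max);
        }
    }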
5. LAST QUESTIONS
There are two of them: what comes after, and a crucial question, or, following Faust, the so-called Gretchenfrage. First, the after: the Deutsche Forschungsgemeinschaft will finance building HyperImage into prometheus, the distributed image database for research and academic education. This hopefully will bring the required masses of pictorial references to watch the net grow. Now the Gretchenfrage. Aby Warburg, finding God in the details, obviously stayed with the Almighty. But what about the second giant on whose shoulders my second foot stands? Asked by Alexander Kluge which character from Faust he would pick as the most interesting, Luhmann answered: »Probably Mephistopheles. My part is always with the devil. He discriminates the sharpest and sees the most.« And being asked about his major character attribute, and whether it may be curiosity, Luhmann, always ready for a surprise, answered: »Stubbornness«, in German: Bockigkeit.21 I don't know what your interpretation of this confusing statement is, but I thank you for your attention anyway.
21 Luhmann, N., and A. Kluge (2004) Vorsicht vor zu raschem Verstehen. In: Warum haben Sie keinen Fernseher, Herr Luhmann? – Letzte Gespräche mit Niklas Luhmann. W. Hagen, ed. Kadmos, Berlin, p. 77.
DIGITAL DEATH
Stacey Pitsillides1, Savvas Katsikides2, Martin Conreen1
1 Design Department, Goldsmiths, University of London
2 Department of Social & Political Sciences, University of Cyprus
Corresponding author: Stacey.Pitsillides@gmail.com
ABSTRACT In this paper we introduce the concept of Digital Death, a topic which we believe has not previously been discussed and researched in the literature, and argue that it is worthy of further research. We propose three different dimensions of digital death, analyse each one, and propose a number of representative applications which could be designed from a better understanding of digital death. We develop our theory and evaluate our ideas through a background literature survey, discussions and questionnaires.
1. INTRODUCTION
'Death', a five-letter word, is one of the most feared words in the English dictionary. What makes this word strike dread into people's hearts? How can this collection of letters be responsible for so much drama? Death, the permanent termination of a living being, is an emotional and potentially disturbing topic. People prefer to ignore this word, the mere mention of it transforming social situations into uncomfortable silences. Yet everybody dies. Perhaps this is what has prevented the formal study of 'digital death' in the past. Digital death appears to be a topic hardly discussed in the literature. A search in Google Scholar for the keyword 'Digital Death' (appearing together) has produced almost zero results: 'Results 1 - 10 of about 44 for "digital death". (0.04 seconds)', and only one article (Lusenet 2002) appears to refer to the term in a similar, but opposite, context (preservation of digital heritage)1. In an increasingly digital world, the on-line information by far exceeds the information in print2. The ever-growing presence of the Internet provides for a phenomenal exchange of information on a global scale. Furthermore, social networks and virtual worlds are becoming part of everyday life, especially for the young and for business. So, in this digital world, it appears that digital death should form a part of our study. In this paper we consider digital death and the ways in which it relates to death in the physical world, and vice versa. Digital death can be seen as either the death of a living being and the way it affects the digital world, or the death of a digital object and the way it affects a living being. Three dimensions regarding digital death were identified. The first dimension (D1) deals with the death of a living being. The death of a human raises the question: what happens to the mass of digital information left behind? Are there parts of the information space one would like to 'leave' to loved ones, for example photos or financial information? In addition, one must question whether there are any parts of the information space that one would want to 'die' with them. An equally important aspect of human death is the grieving process and whether the ritual of death is more important, or as
1 Lusenet, Yola de (2002), Preservation of digital heritage, Draft discussion paper prepared for UNESCO, European Commission on Preservation and Access, March 2002. (http://www.knaw.nl/ecpa/PUBL/unesco.html)
2 It has been estimated that the world produces between 1 and 2 exabytes of unique information per year (an exabyte is 10^18 bytes, or a billion gigabytes). Printed documents of all kinds comprise only .003% of the total (Lyman et al 2000).
important, as the dead body. If this is the case, then can virtualization of death rituals assist in the grieving process? One can perhaps get a feeling of this process by wandering into a graveyard in Second Life.3
The second dimension (D2) deals with the death of digital information. The death of information itself must also be considered, when your digital information dies before you: for example, the death of a person's personal computer or hard disk. How does this 'loss' of a personal computer or hard disk affect people? This directly relates to how much information was lost and to how important and/or personal the information was. Another form of 'information death' is when a system progresses or technology advances and your information is left in a format that cannot be read, for example the move from floppy disk to CD. This information is then lost or 'dead.' Note that the preservation of digital material is a current worldwide concern4, 5.
The third dimension (D3) deals with the immortality of digital information and the need to engineer its death. Information can be immortal, as anything you write in the virtual world remains. If it remains in circulation, your 'bits' will remain forever. However, this can also cause problems, as there are an increasing number of people placing information online4, 5 every day and this information remains forever, even after someone has died. If this trend continues we will soon be buried in graveyards of 'dead' personal information. And we are only at the very beginning of the digital era.
Given these observations, the objective of this paper is to start understanding digital death, to introduce ideas which deal with the different forms of digital death and its after-effects, including death rituals, and to assess their validity through a series of questionnaires, before offering some conclusions.
2. THEORETICAL BACKGROUND
This paper introduces and discusses a number of issues related to digital death. Furthermore, for a number of important issues it poses questions which are of interest in a study of digital death.
2.1 Atoms and bits
In order to understand digital death, we must first understand the digital world. The digital world is made up of bits. Our aim was to understand why we convert from atoms to bits and, in doing so, to show that there is large scope for considering death and rituals of death in the digital world. "A bit is a binary digit, the smallest increment of data on a computer. A bit can hold only one of two values: 0 or 1, corresponding to the electrical values of off or on, respectively."6 A 'bit' is a coding form, an abstract term we have created to aid our transformation of real-world data into computer 'form' for computer storage, processing, and transfer of information. The coded information is processed, packaged, and travels within a virtual environment. Within the paper, we use the term 'bit' to describe the transition of material products, atoms, into informational patterns, digital products, bits. Conversion to bits allows products and rituals to acquire the qualities of digital informational patterns (Lucky 1994): "bits weigh nothing, occupy no space, obey no physical law, can be created spontaneously from nothingness, and can be endlessly replicated." This means that the product (bit), through an internet connection, can be accessed from anywhere at any time. You need not travel to your product (bit); you need not carry your product (bit) around with you. Your product (bit) will not decay or waste away, but remain perfect without needing to be tended to. The product (bit) "can be either infinitely malleable or resolutely indestructible,"
3 Linden Lab, Second Life. http://secondlife.com/
4 King A., (2009), Average Web Page Size Triples Since 2003. Last modified: January 09, 2009. http://www.websiteoptimization.com/speed/tweak/average-web-page/
5 Lyman P., Varian H. R., Dunn J., Strygin A., Swearingen K., (2000), Project: How Much Information? 2000. http://www2.sims.berkeley.edu/research/projects/how-much-info/
6 University Information Technology Services. What are bits, bytes, and other units of measure for digital information? Last modified on July 07, 2008. http://kb.iu.edu/data/ackw.html
This is the product (bit)'s best and worst quality, because if you want immortality, then a bit is forever. However, when you want to get rid of your product (bit), or perhaps more poignantly your information (bit), the obvious means of human destruction do not work. You cannot burn, drown or blow up a bit. The only thing you can do is write over your bit with a lot of other bits, and even this is not foolproof7 (a company, Data Storage Media Destruction Services, calls this service: 'computer shredder'). Bits can always be recovered. As Lucky (1994) states, "When the world crumbles, the bits will still be there." However, the preservation of digital material in a form that could be recognisable by human beings is also a challenge, as evidenced by Lusenet (2002) in a UNESCO discussion paper on digital preservation. While researching information theory, we began to consider how this information works within a system, particularly how communication systems and virtual worlds have developed digitally. By researching how the increase in affordable communication systems has affected us socially, we have begun to consider how this affects, or creates a need for, studying digital death.
2.2 Virtuality and Spirituality in the Virtual World
As a virtual human (bit) you can be anywhere in the world in milliseconds. This means that you can have virtual colleagues and be part of a virtual team (Lipnack et al 1997). People at the top of their field, from all over the world, can now collaborate effortlessly (Panteli et al 2008). As a human (bit) you can attend events and parties virtually, as for example in Second Life. Taking this a step further, you can even be virtually present at a funeral which you would otherwise be unable to attend. Perhaps this is the essence of Baudrillard's (1988) statement: "we no longer partake of the drama of alienation but are in the ecstasy of communication." As we move into new forms of cheap digital communication, such as Skype™ or e-mail, and different forms of social networks are built, a rich virtual form of interaction has become possible. Essentially, this means we can communicate as often as we want with people who live on the other side of the world, and we can do this without ever having to leave our homes. "The entire communication system has passed from a complex syntactic structure of language to a binary system of question/answer signals" (Baudrillard 1993). To research virtuality we immersed ourselves in the Second Life community, creating a new identity, 'Luma Ashdene'. As an avatar we began to explore and identify how the virtual and material world differed, and to question whether our feelings are muted or extended in the virtual world. When we interact within a virtual space we do so without our senses. The loss of touch is perhaps the most poignant when interacting with people; these senses, however, are replaced by imagination and fantasy. "At the moment that touching loses its sensory, sensual value for us… it is possible that it might once more become the schema of a universe of communication" (Baudrillard 1993). We began to question how abstract concepts like spirituality translate from a physical to a virtual space. As an avatar in Second Life we visited churches, mosques and synagogues. We began to investigate spirituality and to question whether spirituality exists in the virtual world. An example of this experimentation is captured in the following statement by Robinson (2007) of the UU Church of Second Life.
She says on the subject: "there is a spirituality of good conversation and real connection with people, and that spirituality is not in the least dependent on whether the connection happens in person, by letter, or by playing with avatars in virtual reality"8. This shows that despite the complexity of the system we are acting within as human 'bits', or avatars, we still act human. In the virtual space, our "connection" is still person to person. The main way Second Life and other immersive environments or RPGs (role-playing games) differ from Skype, e-mail, or msn is that they add an environment to the conversation (see Figure 1). This allows you to explore, experience, and interact together rather than simply 'chat.'
7 Data Storage Media Destruction Services. Copyright © 1998-2007. http://www.salvagedata.com/data-destrution/
8 Robinson, C. (2007), Spirituality in Second Life, Wednesday, February 07, 2007. http://iminister.blogspot.com/2007/02/spirituality-in-second-life.html
Figure 1. Avatar in the praying position in a virtual graveyard in Second Life.
So one must ask: is death a part of virtuality? If death is integrated into virtual spaces, then does the virtual representation of death aid in supporting people and helping them deal with death in the real or virtual world, or does it leave people feeling a greater sense of isolation?
2.3 De-socialization of Death
This leads us to question: can digitalizing death aid in re-socializing death, or will it simply create further de-socialization? To begin to answer this question we began researching how the birth of clinical medicine and hospital-based treatment in the 1900s de-socialized death by removing it from the home and making it scientific rather than spiritual. Our study aims also to analyze whether technology, which in the past created de-socialization, can now play a role in creating a space for a more communal form of grieving. As stated by Nikias (2008), "Humankind's status as social animals can never change; no amount of technology, no amount of virtual reality, can change the fact that humans live in community, and we live for community"9. The hospitalization of death meant that death was taken out of the community and treated as something abnormal or taboo. Society began to treat death as a medical failure, your doctor or nurse replacing the priest or family member present during a person's last moments. Even the Hippocratic oath, on which we base our perception of medicine, determines that people are to be treated as sick 'bodies' who need to be cured rather than as real people with spiritual and social needs. This has made "(death…) a technical phenomenon. Dying is no longer seen as a spiritual transition but as a medical condition."10. Baudrillard (1993) states: "We have de-socialised death by overturning bio-anthropological laws, by according it to the immunity of science and by making it autonomous, as individual fatality… [However] death… is a social relation… its definition is social". Moving past the early 1900s, death became professionalized: "the roles of the group of woman who were recognised as 'qualified' to lay out a corpse and of the village joiner in making the coffin had been suspended by the funeral director" (Monroe et al 1980). In the past you were generally born and died in the same town or village. The ritual of death therefore was centred on the body and burial site, involving community grieving and support. However, as families spread out across the world, it becomes increasingly difficult for every family member to be there during the death, funeral, mourning period, and anniversary of death. This means that even if people are able to come for the funeral 'event', a large proportion of the grieving is done in isolation. "Grief is slow. It's really a dinosaur in modern life. You can get a meal in three minutes from a fast food place, in a minute you can get any information you want from the internet, but pregnancy and grief still take a long time." (Hlamish et al 2007)
9 Nikias, C. L. Max. (2008), University: What won't change. Posted January 30, 2008. http://www.huffingtonpost.com/c-lmax-nikias/universities-what-wont_b_83971.html
10 Anonymous. The Medicalization of Death. Created: 18th January 2002. http://www.bbc.co.uk/dna/h2g2/A682922
With families becoming more and more spread out across the globe, some services have been created to allow long-distance families the opportunity to grieve together virtually, even if they cannot attend the funeral in person. A funeral home in Northern Ireland (Pachal 2007) offers "online streaming of funerals… Passwords are required to access specific streams, and the files are streams not downloads so there's no archiving or saving"11.
Figure 2. Example of a virtual presence during a funeral.
However, our reflections do not answer whether death can or should be digitalized. Currently death is centred on the physical world; the decay of our body is a physical process. Its burning or burying is a material process. The ritual of death is linked to a place of significance, where the body/ashes remain. Death is a cultural, social, personal and spiritual experience. Therefore, when researching a project that deals with digital death, we must consider the fact that "because they have bodies, books and humans have something to lose if they are regarded solely as informational patterns" (Hayles 1999). This is why our study aims to go beyond seeing digital death as a mere collection of informational patterns. It seeks to use the systems of spirituality and social behaviour which exist within the digital world to create design concepts which deal with digital death as a "social relation" (Baudrillard 1993).
2.4 An Evaluation of Non-Site Specific Mourning
Social networks such as Facebook, MySpace, Tribe.net and many others provide a space in which an individual's home page can be transformed into a memorial site with collaborative grieving and sharing. Facebook is a good example of this as it "enables the sharing of various forms of media… an expansive plethora of homemade videos, photographs, and shared news articles that [could] serve to commemorate and preserve the deceased."12 MyDeathSpace is a community which is linked to MySpace but not affiliated with it. The website gives members of MySpace.com information about deaths within their community. It is an archive of dead people's MySpace accounts, a virtual graveyard. On the site it states that it "is an archival site, containing news articles, online obituaries, and other publicly available information"13. It appears that the ritual of death, and death itself, is beginning to penetrate even the virtual world and that online communities are finding ways to mourn their virtual acquaintances. In the online gaming community, players occasionally organize an "online funeral when a member has died in real life, for example in Battleground Europe/WWII online"14. The death of a fellow player highlights to users that online friends exist not only virtually, but also physically. When it is not
possible to attend a funeral service in real life, attending an online ceremony may be an appropriate way to grieve.
2.5 What Happens when Virtual Friends Die?
Say you meet someone online and start chatting, e-mailing or gaming together. You do this for, say, two years, then all communication stops. What do you assume? Do you assume that person has simply lost interest and found a new hobby, or do you assume that they are dead? There are various websites which offer you the opportunity "to search government death records very easily. They vary in what kind and extent of information you are able to extract from them. They are only as good as their database after all"15. However, in order to use these databases, one first has to consider the possibility that this person is dead, before actively 'searching' for them. Today we have virtual acquaintances, virtual colleagues and even virtual friends. If they die, how are we to be informed? Do we have a right to be informed? Can a virtual friend (bit) be as close as a friend (atom)? Are family members aware of all your virtual friends? Richard Dawkins's friendship with Douglas Adams consisted mainly of e-mailing each other. In the 'Salmon of Doubt' Dawkins wrote about his shock on being informed of Douglas Adams's death: "Log into e-mail as usual. The usual blue bold headings …mostly junk… The name Douglas catches my eye and I smile…Then I do the classic double-take, back up the screen. What did the heading actually say? Douglas Adams died of a heart attack…" (Adams 2002).
2.6 Related projects
There are various design projects which influenced our theories on 'digital death.' These projects discuss the topic of death within a virtual context: they discuss the death of a virtual character, the attachment of the site-specific grave to a non-site specific memorial, and the role digital data plays when we remember the deceased. Marc Owens – Virtual Death Row16 considers the departure, or closing of one's account in Second Life, to be similar to dying. Owens created an in-world company, SABRE & MACE, which offered virtual characters a chance to experience death as a way of detaching from their virtual persona. Elliott Malkin – Cemetery 2.017 is an electronic device which connects the burial site to the person's surviving internet presence, including online memorials and tributes. Michele Gauler – Digital Remains18 is concerned with the role data plays when we remember the deceased. Access keys allow us to remotely log on to the digital remains of a person. This is accomplished through data tags and metadata; search algorithms aid us in finding relevant information.
3. RESEARCH METHODOLOGY
Our working hypothesis concerns three different dimensions of digital death, which we aim to validate through qualitative research results. If further enhancement of the digital world is expected, then symbolic digital death and its structures will gain more social importance. In order to organize our research plan and give sufficient explanation for the phenomenon under examination, we began to collect key informants' knowledge and then test it empirically. Thus we began our investigation by having conversations about death, particularly digital death, with a range of people; our theory and knowledge of the subject were broadened by these encounters. These people were specifically selected for their expertise. By having conversations with a funeral director,
15 How Do I Find Dead People? How do I find out if someone is dead without causing distress or embarrassment. Copyright 2005-2008. http://www.howdo-i.com/backgroundcheck/howdoifinddeadpeople.php
16 http://www.vintfalken.com/marc-owens-put-yourself-on-virtual-death-row/
17 http://www.dziga.com/hyman-victor/
18 http://www.we-make-money-not-art.com/archives/2006/07/michele-gaulers.php
a palliative care nurse, two bereaved partners and an expert in setting up virtual teams, we were able to gain a good overview of this topic through a range of disciplines. Through this process we were able to ascertain that this was a topic none of these professionals had encountered before. Talking to bereaved partners also allowed us to see that the raw issues we are analysing have impacted both their lives and their bereavement process. Therefore, when we chose a range of candidates for interview, we were looking for people within various fields who would be impacted if social systems within the digital world were to be re-designed to include digital death. We were looking as much for their emotional reaction to the topic as for their answers to the questions. We therefore chose to begin by posing various simple personal questions about how they view themselves and their involvement in the digital world, for example "Is there any part of your digital self you would like to save for loved ones?", building up to more complex questions asking them to consider new systems of planning for digital death, such as "Would you consider making a virtual will for your digital self?" A group debate between ten candidates of various ages and disciplines was also initiated. The same questions were posed to this group; however, in this situation both reaction and debate were encouraged. This friction allowed more complex issues to be discussed and debated, such as whether it is ever acceptable for written data to be completely 'deleted' from the system and, if this data is to be archived, who owns or controls said archive. Through first-hand research over a small cross-section of around sixteen professionals in varying fields, we have gained a deeper understanding of the various professions which would be involved in creating a discipline dealing with digital death. We intend to increase both the size and breadth of this sample in future work.
3.1 Conversations
Funeral Director – We were given an overview of the job of a funeral director. It was related to us that people's first instinct is to get rid of the body. They then consider what to do with the person's clothes and possessions. The importance of a funeral director forming personal relations with their clients was highlighted: "if someone is going to bury your mother, you want to know that you can trust them." We discussed the 'fashion' surrounding death that allows products such as urns or coffins to have styles which go in and out of vogue. She recalled the story of a man who was dying and was organizing his own funeral. One day he told her how he had smashed his hard drive. He said it was important to destroy his computer before he died, as it had personal pictures and files on it that would hurt people if they were found.
Palliative care nurse – Cancer Patients and Friends – With a palliative care nurse we discussed home care. We talked about why it is beneficial to both patient and family if patients die at home. It is a movement which has been created to combat de-socialization, by allowing death to reclaim its rightful place at home and in the community. However, this cannot entirely remove the taboo society has placed on death. We were told that people imagine a 'good' death to be like falling asleep. Therefore palliative sedation has become an approved method of care at the end of life if the patient is suffering and symptoms cannot be controlled.
This is not always for health reasons, but can be because the patient and/or family are experiencing existential suffering while expecting a peaceful and controlled death.
Bereaved partner 1 – A woman who had recently lost her partner and was left with his personal computer. We discussed how she had to e-mail her partner's death certificate to every online company he was signed up to. She also had to mail the death certificate to every government office. She was appalled at the isolation she felt and at the fact that there were no services offered to help her deal with this process. She described how it felt to close down all these accounts, almost like losing another part of that person. She decided to keep up his gallery subscription for a year but does not know what she is going to do next year. She also keeps receiving new e-mails with the heading "hello 'John'" from various mailing lists and other groups, which she says is always a shock.
Bereaved partner 2 – This man had come to Cyprus to bring the ashes of his partner to her father, who could not attend the funeral. Her mother and father are divorced, therefore her ashes have been shared between them. He confirmed that had he known of a service which would allow her father to view the
funeral as a live 'webcast', they would have used it. He stated that her father is finding it very difficult to accept her death as he feels he has received no closure.
Senior Lecturer in Information Systems, specialist in virtual teams and organisations – She discussed how she sets up virtual teams. She said she was pleased that someone was looking into the area of digital death, but was unwilling to think about or discuss the death of someone in a virtual team she had set up.
3.2 Interviews
Silverax Greenwood is the creator of a 'pet cemetery' in Second Life. Through interviewing him we explored further the idea of spirituality in Second Life and questioned why there is a need for virtual graveyards. Virtual graves are easily accessible; with a virtual graveyard you do not need to travel anywhere to pay your respects, all you need to do is log in. It is also cheaper to buy virtual space. The virtual world is vast and therefore relatively inexpensive. The virtual grave also has the potential to be more beautiful or perhaps more significant to the person, as it can be designed and edited to suit your individual needs, being user generated. Moreover, it provides a space to house or bury loved ones who are not accepted in the real world, as reflected by the following conversation we had with a Japanese man in Second Life:
[4:35] Silverax Greenwood: I made this cemetery :)
[4:35] luma Ashdene: wow really. what gave u the idea of a pet cemetery?
[4:37] Silverax Greenwood: I had a cat. she was 21yo. almost my daughter and best friend. she died about 4 y ago
[4:37] luma Ashdene: awww yes i can see
[4:37] Silverax Greenwood: yes, I still have her bone. I couldn’t find any nice cemetery in real in Japan
[4:38] luma Ashdene: well this beautiful
[4:38] Silverax Greenwood: ty :)
[4:38] luma Ashdene: do you think this place is spiritual
[4:39] Silverax Greenwood: no, not spiritual
[4:39] luma Ashdene: what then?
[4:39] Silverax Greenwood: spiritual is only inside yourself
[4:39] luma Ashdene: so your feelings are spiritual but the place is not
[4:39] Silverax Greenwood: no, some ppl can't have grave of pets in rl, but they want to
[4:40] luma Ashdene: yes
[4:40] Silverax Greenwood: and you can see your pet everyday if you have a grave here
[4:41] luma Ashdene: so are there people who come regularly to see their pet?
[4:41] Silverax Greenwood: yes, not so often, but, they feel ease, they feel they can have grave
Professor of Social Science – Research interests include sociology of work, sociology of technology, organisational and sociological theory, and European Economic and Social Integration. A deep interest in the concept of digital death was expressed. He considers the virtual world to be a space where ritual is relevant and feels this could be implemented in the area of digital death. He recounted how he had met a Professor at a conference years ago, referenced him regularly and e-mailed him occasionally, and one day stumbled upon his website and saw that his students had placed his obituary online. He thought it was a student joke and e-mailed the relevant students, only to discover that the professor had actually died. Throughout the interview he also talked about the collection of old computers he has in his basement; these computers contain information that no system can 'read' anymore. He will occasionally revive one and explore his research past.
Professor of Computer Science 1 – Expertise includes knowledge engineering. She has carried out research in the areas of knowledge engineering, expert systems, deep knowledge models, diagnostic reasoning, temporal reasoning, artificial intelligence in medicine, and hybrid decision support systems. The Professor admitted that she had not previously thought about digital death. She said that the idea of digital death did not interest her, as she had a work and family computer which all members of the family can access.
However, she became more interested in the topic when the discussion turned to her daughter, whose internet presence, she said, is a lot more prevalent than hers.
Professor of Computer Science 2 – Expertise: Computer Graphics and Virtual Environments, including virtual and augmented reality and applications to cultural heritage. He claimed that the majority of his work now exists solely in the digital world. The backups for this work are also digital. They exist within the university server, which is password protected. The only time he uses paper now is for fast access and portability. He therefore proclaimed an interest in the idea of creating a living will for his digital self as, he said, he would like his students to feel that they could access his work if he were to die suddenly.
PhD candidate, Dept of Computer Science – Key research interests: Virtual Teams and Virtual HealthCare Collaborations. We discussed his experiences with virtual teams for the provision of home healthcare, and that in the main they were very positive. Regarding digital death he stated that he did not like the idea of a stranger looking through his computer. However, he did admit that his colleague 'Jim' had all his passwords and that, if he were to die, he would want 'Jim' to go through the records and give his family any relevant information.
Psycho-oncologist, Cancer Patients and Friends – The psycho-oncologist was interested in the idea of how you deliver 'bad news' digitally. However, he also felt that the digitalization of death rituals would probably have the consequence of removing guilt from people. He explained that if, for example, one could attend a funeral virtually, then fewer people would feel that they had to come and the physical attendance of funerals would decrease, resulting in further de-socialization.
Group debate – A range of candidates aged 18–50. This allowed people of different generations to voice different perceptions of this topic. The group debate highlighted that people of the younger generation, who have a larger internet presence, are more interested in the topic and in a service that would provide a chance to design a will for their digital self.
Statement from a Law Student (University of Arkansas) – January 7, 2009: When I first learned about this new idea of creating a living will for a testator's "digital self," I was very intrigued and could not help but think of the great potential for this kind of work, especially in today's tech-savvy generation. In this digital age, a decedent's personal computer likely contains very sensitive data, and even emotional, invaluable memories. With a societal trend of paperless transactions, many assets are stored in this new virtual world; thus, it is quickly becoming necessary to provide a means of effective disposal for these assets according to the desires and intent of the testator. It will be very interesting to see how this idea develops – whether implementing this concept in accordance with our current system of drafting formal wills, or perhaps creating an entirely new system of digital distribution – our system of wills and testaments is due for an 'update.'
Online bereavement consultant – Feb 19, 2009: Wow, Stacey! On my first reading of your message I thought it was a hoax email but on looking at it a couple more times I think you are serious. I had not really ever contemplated this idea for clients but I must admit, have thought about what would happen to all my own folders etc, if I was to come to an untimely demise.
I am not sure that 'Joe Public' is ready for such ideas just yet, but I can see that in the future this is an area where advice and services may well be needed and requested. I continue to wonder at the speed at which modern technology is changing our old manual/mechanical world, and appreciate that it is our young people who have the responsibility of seeing these radical new visions through to fruition.
4. EVALUATION AND OBSERVATIONS
Our research has led us to revisit a number of the questions we posed earlier: If the ritual of death is symbolic, then is it still valid if it is recreated in a virtual environment? Also, if the ritual of death is more important than the dead body, then can the virtualization of death rituals assist in the grieving process? Are virtual graveyards helpful in allowing for non-site specific grieving, as on-line memorials and tributes do? And can, or should, a living will for the digital self be developed? The problem with virtual news is that there is no hierarchy of information. There is no digital way of 'breaking bad news' because all e-mails look the same; one e-mail could hold an offer for hair care products and another could hold information about the death of a loved one. It is both clinical and shocking to see the words "…is dead" in black and white when scanning through your inbox. Another question is what happens to the adverts and links to your homepage or blog when you die? "What happens to the revenue you're generating from the grave"19? Without proper preparation, what happens to this revenue? In the future will people be "passing along their AdSense and Amazon affiliate accounts in their wills? Will ad networks allow for the inheritance of publisher accounts?". Throughout our research we have also observed people's lack of comfort when considering selling or giving away their personal computers. Can sensitive data ever be fully removed from a computer? Or do we all secretly store graveyards of computers in our basements? Could we assist a computer in committing suicide, or give it a rebirth, by extracting all information and returning it to a pristine form where it could have a new life? Looking at the idea of computers dying before their humans, anecdotal evidence has it that the Vista operating system by Microsoft has generated quite a large number of catastrophic failures/crashes (a Google search for 'vista crashes' on 11 January 2009 returned 2.69 million results). This suggests that as these systems get more and more complex, the possibility of catastrophic failures increases. In human terms this causes the loss of a person's information and identity, through the loss of work, projects, photos, contacts and other items. Another very simple but important observation is the fact that everybody dies. However, death does not exist in the digital world; therefore the amount of dead people's personal information online is going to keep rising exponentially. If there is no one there to 'clean up' the 'dead' information, then the amount of information on the web from and about dead people will very soon begin to exceed the amount of information from people who are alive. There may be a time when you log into your social network or flickr.com gallery and find yourself surrounded by a corpus of 'dead bits'. Perhaps a new standard may be necessary to deal with this issue soon, also taking into account that some of the data may have historical value and should thus not be simply deleted. It is these questions and observations that have led us to consider the need for research and examination of digital death.
4.1 Research and application ideas
A representative sample of some of these research and application ideas are discussed below, categorized under each dimension.
Dimension one (D1): death of a human
19 Kivell J., Who Profits From Your Content When You're Dead? Posted on April 1, 2008, 7:36 AM. http://www.technologyevangelist.com/2008/04/who_profits_from_you.html
A virtual will for the digital self. Perhaps this would be a requirement when joining a new social network. For example, on Facebook, when you sign the terms and agreements you could be transferred to a service which would contain your online presence and allow you to 'add Facebook.' This would create a legally binding will which would enable you to leave digital information, ownership and messages to virtual friends. 'd-mail', or death news mail, could be a service through which one's death is passed on to virtual friends. When there is no hierarchy of information and no intonation in the virtual world, how do you pass on this sensitive information, and to whom? Perhaps there could be a specific colour code? Or some well-known e-mail heading? However, how do you avoid 'junk' mail exploiting these headings?
Dimension two (D2): death of information
A service of visual communication to comfort the 'grieving' owner whose personal computer has died could be considered. This could include a burial or cremation of the 'dead' computer. This piece of communication could be done in a comical or childlike way, perhaps a series of computer narratives with serious undertones warning people of the dangers of not 'backing up.'
Dimension three (D3): immortality of information and the need to create death
A number of example research areas/services are listed here:
• Design an agreed protocol, or service, which will 'kill' all information that is no longer 'alive'. For example, a search engine filter that specifically tags this information as dead and does not present it in response to a search, unless asked for (a minimal sketch of such a filter is given at the end of this subsection). A supplementary service would be to archive or tag dead information in a specific manner, so as to allow its accessibility, if needed, also for historical value.
• Design a digital archaeologist (Kelsey, 2005). In the non-virtual world, rubbish and buried bodies are an archaeologist's bread and butter. So what are the implications for historical data harvesting in a digital world? Who will ensure that the digital information which is considered dead, and takes up space, is archived, and decide whether it can be considered as waste/rubbish or be declared of special historical significance?
• Design an online funeral service which identifies all digital possessions/presence of a deceased person and causes the deletion/modification/archiving of, or tribute to, this information, according to the instructions of the family or the will of the deceased person.
• Design a service which would 'suck out' all the information on a person's personal computer so that the computer can be 'reborn' and put to use again, for example donated to charity, without worrying about shadows left behind.
• Design a service which would aid a computer in 'committing suicide' after the death of its human master.
One example idea is The Fabrication of a Living Will that Deals with One's Digital Self. Most of our virtual identities do not expire immediately with our body. Information tends to follow a different set of laws to the physical world. It will generally remain intact until someone decides to close the accounts. Communities such as 'Friendster' write in their user-terms agreement that "the provider of the site's services [are prohibited] from removing your profile without your express consent."20 It also specifies that a relative must provide "written proof," which means that grieving relatives must scan and e-mail a death certificate to each of these communities if they want the person's account to be closed down.
In modern society we are conducting a greater proportion of our lives digitally. We bank digitally. Buy things digitally. Write letters digitally. Archive digitally. Therefore it seems appropriate that people should begin to consider what will happen to their digital lives, which now run parallel to their material lives, when their material body dies. It seems logical that, since we are aware that so many actions in our
20 http://www.friendster.com/
life are now digital, we should be thinking about what will happen to these actions, accounts, and personas once we are dead. A very simple financial example is as follows: 'John' "had a niche Internet business …when he died he left no provision for the business. [his family] couldn't access his accounts or pay suppliers… [or] shut the business down. Customers communicated through Outlook, but [they] couldn’t access that either, so [they] couldn’t reach [the] customers to inform them that he'd died"21. Perhaps people simply do not think about the division of their digital assets when thinking about their death.
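To make the first of the D3 ideas listed above a little more tangible, the following sketch shows one hypothetical shape such a 'dead information' filter could take. It is an assumption-laden illustration: the class and field names are invented for this paper's scenario and do not correspond to any existing search engine API or protocol.

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical search-result filter: items tagged as belonging to a deceased
    // owner are hidden by default, but can still be retrieved when explicitly
    // asked for, e.g. for archival or historical purposes.
    class SearchItem {
        final String url;
        final boolean ownerDeceased;   // tag set by the imagined 'digital death' protocol
        SearchItem(String url, boolean ownerDeceased) {
            this.url = url; this.ownerDeceased = ownerDeceased;
        }
    }

    class DeadInformationFilter {
        List<SearchItem> filter(List<SearchItem> results, boolean includeDead) {
            return results.stream()
                    .filter(item -> includeDead || !item.ownerDeceased)
                    .collect(Collectors.toList());
        }
    }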
5. CONCLUSIONS
Digital death is an area of research which has received no attention in the past. We believe that there is great potential for research in this area, as this world is increasingly becoming digital. In this paper we develop the notion of digital death. Digital death deals with the death of a living being and how that translates into a digital framework. It also involves the death of a digital object and how this 'death' affects a living being. In addition, it considers how death can be designed into a system of information immortality. Physical death has been a source of curiosity and debate since man appeared on earth. Digital death is a rich and largely untouched field in social informatics, and it therefore serves to revitalize this mystique. Researching this topic is a source of inspiration which can lead to the conception of various systems of logic and social behaviours. It can enhance the development of a social understanding of digital death and of death in the virtual space. This study barely touches the surface of what promises to be a very interesting and complex topic, with many strands of research and development. For example, one area in which we see great potential is information that has been pronounced 'digitally dead.' This information has the potential to provide a detailed account of our present society and culture. Therefore the deletion and archiving of such information would have a two-fold effect: it removes unwanted information from people's everyday life, and in addition, by placing this removed information within an archive, it automatically creates an easily accessible historic reference. Like death in the real world, 'digital death' promises to be ever-present in our increasingly digital world. Therefore we propose that its study is crucially important. Our future work will include a more detailed analysis and understanding of the issues in 'digital death' and the development of new research directions, products, and services.
6. REFERENCES
Adams D. (2002), The Salmon of Doubt, Pan Books.
Baudrillard, J. (1988), The Ecstasy of Communication, Editions Galilee.
Baudrillard, J. (1993), Symbolic Exchange and Death, Sage Publications.
Hayles K. (1999), How We Became Posthuman (Virtual Bodies in Cybernetics, Literature and Informatics), University of Chicago Press Ltd.
Hlamish, L., Hermoni, D. (2007), The Weeping Willow (Encounters with Grief), Oxford University Press.
Kelsey, T. (2005), Digital Archaeology: Capture, Preserve and Share your Family Stories, ISBN: 0972650849, Do It Yourself Digital Press.
Lipnack, J. and Stamps, J. (1997), Virtual Teams. New York: John Wiley and Sons, Inc.
Lucky R. (July 1994), "Bits are Bits". Published as: "A bit is a bit is a bit?", IEEE Spectrum.
21 Reeves S., What happens to your e-mail when you die? Or your digital photos, or your Web site domain? How to prepare. Updated 12:54 p.m. ET Feb. 1, 2006. http://www.msnbc.msn.com/id/11129851
Monroe, Lemer (1980), 'When, why, and where people die', pp. 87-106. In Death: Current Perspectives. From Death, Society and Hospice Readings for graduate studies in Palliative Care, Flinders University, Mayfield Publishing Company.
Panteli, N., Chiasson, M. (2008), Virtuality: beyond the boundaries of space and time. In Panteli, N. & Chiasson, M. (Eds): Exploring Virtuality Within and Beyond Organisations: Social, Global and Local Dimensions. Palgrave.
THE LEGAL FRAMEWORK OF INTERNET TECHNOLOGY: CHATTING, SOCIAL NETWORKING AND VIRTUAL WORLDS
Thomas Papaliagkas, Attorney at Law, LLM 22
INTRODUCTION
The right to privacy is a highly developed area of law in Europe. All the member states of the European Union (EU) are also signatories of the European Convention on Human Rights (ECHR). Article 8 of the ECHR provides a right to respect for one's "private and family life, his home and his correspondence," subject to certain restrictions. The European Court of Human Rights has given this article a very broad interpretation in its jurisprudence. In 1981 the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data was negotiated within the Council of Europe. This convention obliges the signatories to enact legislation concerning the automatic processing of personal data, which many duly did. European law and legal tradition are deemed to have a humanistic approach in many aspects of personal rights. It is noticeable, though, that EU and US perspectives on data protection and privacy are quite different. This derives from a wholly different approach in their legal traditions to human rights as a restrictive factor on economic development. There seem to be two different approaches between common law and civil law. The United States prefers a 'sectoral' approach to data protection legislation, relying on a combination of legislation, regulation, and self-regulation rather than overarching governmental regulation. Former U.S. President Bill Clinton and former Vice President Al Gore explicitly recommended in their "Framework for Global Electronic Commerce" that the private sector should lead, and companies should implement self-regulation in reaction to issues brought on by Internet technology. To date, the US has no single, overarching privacy law comparable to the EU Directive. Privacy legislation in the United States tends to be adopted on an "as needed" basis, with legislation arising when certain sectors and circumstances require it (e.g., the Video Privacy Protection Act of 1988, the Cable Television Consumer Protection and Competition Act of 1992, and the Fair Credit Reporting Act). Therefore, while certain sectors may already satisfy the EU Directive, at least in part, most do not. The reasoning behind this approach probably has as much to do with American laissez-faire economics as with different social perspectives. The First Amendment of the United States Constitution guarantees the right to free speech. While free speech is an explicit right guaranteed by the United States Constitution, privacy is an implicit right guaranteed by the Constitution as interpreted by the United States Supreme Court. Europeans are acutely familiar with the dangers associated with uncontrolled use of personal information from their experiences under World War II-era fascist governments and post-War
22 Thomas Papaliagkas is a lawyer, LLM, Member of Larisa Bar since 2002. He graduated from Law School of Aristotle University of Thessaloniki in February 2001 (LLB, 2:1). He took a postgraduate LLM course at the University of Leicester (2003-2004) in European Union Commercial Law (graduated October 2004). He is a twice elected member of the Council of the Larisa Bar and elected member of the Council of Prefecture of Larisa. He is a senior partner of Opinio Juris Law Firm.
Communist regimes, and are highly suspicious and fearful of unchecked use of personal information. World War II and the post-War period was a time in Europe when disclosure of race or ethnicity led to secret denunciations and seizures that sent friends and neighbors to work camps and concentration camps. Europe has experienced atrocities directly related to privacy and the release of personal information that are inconceivable to most Americans. In the age of computers, Europeans' wariness of secret government files has translated into a distrust of corporate databases, and governments in Europe took decided steps to protect personal information from abuses in the years following World War II. Germany and France, in particular, set forth comprehensive data protection laws. In 1980, in an effort to create a comprehensive data protection system throughout Europe, the Organization for Economic Cooperation and Development (OECD) issued its "Recommendations of the Council Concerning Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data." The seven principles governing the OECD's recommendations for protection of personal data were:
1. Notice: data subjects should be given notice when their data is being collected;
2. Purpose: data should only be used for the purpose stated and not for any other purposes;
3. Consent: data should not be disclosed without the data subject's consent;
4. Security: collected data should be kept secure from any potential abuses;
5. Disclosure: data subjects should be informed as to who is collecting their data;
6. Access: data subjects should be allowed to access their data and make corrections to any inaccurate data; and
7. Accountability: data subjects should have a method available to them to hold data collectors accountable for following the above principles.
The OECD Guidelines, however, were nonbinding, and data privacy laws still varied widely across Europe. The US, meanwhile, while endorsing the OECD's recommendations, did nothing to implement them within the United States23. However, all seven principles were incorporated into the EU Directive.
TITLE 1
Personal data protection on the Internet in Greek law: Acts 2472/97 and 2774/99 in the light of Directive 2002/58/EC. Limits of personal data processing and rights of use. Preconditions.
1) Greek law: Act 2472/97
Before developing the main part of Title I, it would be useful to define some basic terms. Although common sense would be a useful tool for further analysis, it is really necessary to see what 'personal data', 'sensitive personal data' and many other familiar terms mean in international, European and Greek law.24
23 Anna Shimanek, Note, Do you Want Milk with those Cookies?: Complying with Safe Harbor Privacy Principles, 26 Iowa J. Corp. L. 455, 462-463 (2001).
24 Th. Sidiropoulos, To dikaio tou diadiktyou (Internet Law), Athens (Sakkoulas Law Publications, 2002), p. 205.
a) 'Personal data' must be deemed to be "any information relating to an identified or identifiable natural person ('data subject'); an identifiable person is one who can be identified, directly or indirectly" (Art. 2 para 1 α of Law 2472/97). Although not mentioned expressly, we should bear in mind that this identification may be made in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity, as referred to in the provision of Art. 1 para 1 of Directive 95/46/EC. This definition is very broad, since "personal data" is any data through which anyone is able to link the information to a person, even if the person holding the data cannot make this link. Some examples of "personal data" are: address, credit card number, bank statements, criminal record, etc.
b) The notion of 'processing' means "any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction" (Art. 2 para 1 δ Act 2472/97).
c) The responsibility for compliance rests on the shoulders of the 'controller', meaning the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data (Art. 2 para 1 ζ Act 2472/97).
d) 'The data subject's consent' shall mean any freely given, specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed. Beyond this general definition, given in the provision of Art. 2 para 1 h of Directive 95/46/EC, the Greek law gives further explanations: the prior information of the data subject must include at least the purpose of processing, the data or the categories of data to be processed, the recipients or categories of recipients, as well as the name and address of the processor or the name of its legal representative (Art. 2 para 1 ια' Act 2472/97). Consent may be given by any appropriate method enabling a freely given, specific and informed indication of the user's wishes, including by ticking a box when visiting an Internet website.
e) The Greek law introduced one more category of personal data, the so-called 'sensitive personal data', which include data related to the racial or ethnic origin of the subject, political views, religious and philosophical beliefs, membership of a political party or trade union, health, social welfare and sexual life, criminal convictions, and participation in associations of persons related to the above (Art. 2 para 1 γ Act 2472/97, as amended by the provision of Art. 8 para 3 of Act 3625/2007).
The Act 2472/97 is the main law through which Directive 95/46/EC was transposed into internal Greek law. Due to that fact, it includes the same scope and aims, the same principles, and many provisions of this Directive. That is the reason, I think, we should examine them in parallel.
2) The Directive 95/46/EC
i) Directive 95/46/EC is the main text on personal data protection in the European Union. It introduces regulations aiming to strike a balance between privacy and the free circulation of personal data within the EU. It defines the limits of personal data use and asks Member States to create an independent national supervisory authority for personal data.
ii) The scope of this Directive is to apply to the processing of personal data wholly or partly by automatic means, and to the processing otherwise than by automatic means of personal data which form part of a filing system or are intended to form part of a filing system. On the other hand, the Directive shall not apply to the processing of personal data: a) in the course of an activity which falls outside the scope of Community law, such as those provided for by Titles V and VI of the Treaty on European Union, and in any case to processing operations concerning public security, defence, State
security (including the economic well-being of the State when the processing operation relates to State security matters) and the activities of the State in areas of criminal law, and b) by a natural person in the course of a purely personal or household activity (Art. 3 of the Directive).
iii) Principles
The main principles of the Directive and of the Greek Act are common. In general, the processing of personal data is forbidden unless certain conditions are met. These conditions fall into three categories: transparency, legitimate purpose and proportionality.
Transparency
The data subject has the right to be informed when his personal data are being processed. The controller must provide his name and address, the purpose of processing, the recipients of the data and all other information required to ensure that the processing is fair (art. 10 and 11). Data may be processed only under the following circumstances (art. 7):
• when the data subject has given his consent;
• when the processing is necessary for the performance of, or the entering into, a contract;
• when processing is necessary for compliance with a legal obligation;
• when processing is necessary in order to protect the vital interests of the data subject;
• when processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller or in a third party to whom the data are disclosed;
• when processing is necessary for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom the data are disclosed, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject.
The logical structure of these grounds is sketched, purely as an illustration, right after this list.
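The following minimal Java sketch (not a compliance tool, and not part of the Directive or the Greek Act; all names are my own shorthand) simply encodes the structure described above: processing is permitted only if at least one ground applies, and the legitimate-interests ground fails where it is overridden by the data subject's rights.

import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the art. 7 structure: processing is lawful only if
// at least one ground applies; the legitimate-interests ground additionally
// requires a balancing test against the data subject's rights.
public class Article7Check {

    enum Ground {
        CONSENT,
        CONTRACT,
        LEGAL_OBLIGATION,
        VITAL_INTERESTS,
        PUBLIC_INTEREST_OR_OFFICIAL_AUTHORITY,
        LEGITIMATE_INTERESTS
    }

    static boolean processingPermitted(Set<Ground> applicableGrounds,
                                       boolean legitimateInterestsOverridden) {
        if (applicableGrounds.size() == 1
                && applicableGrounds.contains(Ground.LEGITIMATE_INTERESTS)
                && legitimateInterestsOverridden) {
            return false; // the only ground is overridden by the subject's rights
        }
        return !applicableGrounds.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(processingPermitted(EnumSet.of(Ground.CONSENT), false));   // true
        System.out.println(processingPermitted(EnumSet.noneOf(Ground.class), false)); // false
    }
}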
The data subject has the right of access to all data processed about him, and even the right to demand the rectification, deletion or blocking of data that are incomplete, inaccurate or not being processed in compliance with the data protection rules (art. 12).
Legitimate purpose
Personal data may be processed only for specified, explicit and legitimate purposes and may not be processed further in a way incompatible with those purposes (art. 6 b). Further processing of data for historical, statistical or scientific purposes is not considered incompatible, provided that Member States provide appropriate safeguards.
Proportionality
Personal data may be processed only insofar as they are adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed. The data must be accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified. The data should not be kept in a form which permits identification of data subjects for longer than is necessary for the purposes for which the data were collected or for which they are further processed. Member States shall lay down appropriate safeguards for personal data stored for longer periods for historical, statistical or scientific use (art. 6). When sensitive personal data (for example religious beliefs, political opinions, health, sexual life, racial or ethnic origin, trade-union membership) are being processed, extra restrictions apply (art. 8). Exceptions from the general prohibition on processing sensitive personal data may apply under the following preconditions (Art. 8 para 2 of the Directive):
1. (a) the data subject has given his explicit consent to the processing of those data, except where the laws of the Member State provide that the prohibition referred to in paragraph 1 may not be lifted by the data subject's giving his consent; or (b) processing is necessary for the purposes of carrying out the obligations and specific rights of the controller in the field of employment law in so far as it is authorized by national law providing for adequate safeguards; or (c) processing is necessary to protect the vital interests of the data subject or of another person where the data subject is physically or legally incapable of giving his consent; or (d) processing is carried out in the course of its legitimate activities with appropriate guarantees by a foundation, association or any other non-profit-seeking body with a political, philosophical, religious or trade-union aim and on condition that the processing relates solely to the members of the body or to persons who have regular contact with it in connection with its purposes and that the data are not disclosed to a third party without the consent of the data subjects; or (e) the processing relates to data which are manifestly made public by the data subject or is necessary for the establishment, exercise or defence of legal claims.
2. Paragraph 1 shall not apply where processing of the data is required for the purposes of preventive medicine, medical diagnosis, the provision of care or treatment or the management of health-care services, and where those data are processed by a health professional subject under national law or rules established by national competent bodies to the obligation of professional secrecy or by another person also subject to an equivalent obligation of secrecy.
3. Subject to the provision of suitable safeguards, Member States may, for reasons of substantial public interest, lay down exemptions in addition to those laid down in paragraph 2, either by national law or by decision of the supervisory authority.
4. Processing of data relating to offences, criminal convictions or security measures may be carried out only under the control of official authority, or if suitable specific safeguards are provided under national law, subject to derogations which may be granted by the Member State under national provisions providing suitable specific safeguards. However, a complete register of criminal convictions may be kept only under the control of official authority. Member States may provide that data relating to administrative sanctions or judgements in civil cases shall also be processed under the control of official authority.
5. Derogations from paragraph 1 provided for in paragraphs 4 and 5 shall be notified to the Commission.
6. Member States shall determine the conditions under which a national identification number or any other identifier of general application may be processed.
The data subject may object at any time to the processing of personal data, in particular for the purposes of direct marketing (art. 14). More specifically, Member States shall grant the data subject the right:
(a) at least in the cases referred to in Article 7 (e) and (f), to object at any time on compelling legitimate grounds relating to his particular situation to the processing of data relating to him, save where otherwise provided by national legislation.
Where there is a justified objection, the processing instigated by the controller may no longer involve those data;
(b) to object, on request and free of charge, to the processing of personal data relating to him which the controller anticipates being processed for the purposes of direct marketing, or to be informed before personal data are disclosed for the first time to third parties or used on their behalf for the purposes of direct marketing, and to be expressly offered the right to object free of charge to such disclosures or uses.
Member States shall take the necessary measures to ensure that data subjects are aware of the existence of the right referred to in the first subparagraph of (b).
A decision which produces legal effects or significantly affects the data subject may not be based solely on automated processing of data (art. 15). A form of appeal must be provided when automated decision-making processes are used.
Subject to the other Articles of this Directive, Member States shall provide that a person may be subjected to a decision of the kind referred to in paragraph 1 if that decision: (a) is taken in the course of the entering into or performance of a contract, provided the request for the entering into or the performance of the contract, lodged by the data subject, has been satisfied or that there are suitable measures to safeguard his legitimate interests, such as arrangements allowing him to put his point of view; or (b) is authorized by a law which also lays down measures to safeguard the data subject's legitimate interests (Art. 15 para 2).
iv) Supervisory authority and the public register of processing operations
Each Member State must set up a supervisory authority, an independent body that monitors the data protection level in that Member State, gives advice to the government about administrative measures and regulations, and starts legal proceedings when data protection regulation has been violated (art. 28). Individuals may lodge complaints about violations with the supervisory authority or before a court of law. The controller must notify the supervisory authority before he starts to process data. The notification contains at least the following information (art. 19):
• the name and address of the controller and of his representative, if any;
• the purpose or purposes of the processing;
• a description of the category or categories of data subject and of the data or categories of data relating to them;
• the recipients or categories of recipient to whom the data might be disclosed;
• proposed transfers of data to third countries;
• a general description of the measures taken to ensure security of processing.
This information is kept in a public register. A supervisory authority, in the form of an independent body, has been founded in all Member States. The provisions of Art. 28 were implemented in Greek law by Art. 15-20 of Act 2472/97, which established the independent body that monitors the data protection level in Greece, the Hellenic Data Protection Authority ("Αρχή Προστασίας Δεδομένων Προσωπικού Χαρακτήρα").
Transfer of personal data to third countries
'Third countries' is the term used in EU legislation to designate countries outside the European Union. Personal data may only be transferred to third countries if the country in question provides an adequate level of protection. Some exceptions to this rule are provided, for instance when the controller himself can guarantee that the recipient will comply with the data protection rules. The Commission has set up the "Working Party on the Protection of Individuals with regard to the Processing of Personal Data", commonly known as the "Article 29 Working Party". The Working Party gives advice about the level of protection in the European Union and in third countries. The Working Party negotiated with U.S. representatives about the protection of personal data; the Safe Harbor Principles were the result. According to critics, the Safe Harbor Principles do not provide an adequate level of protection, because they contain fewer obligations for the controller and allow the contractual waiver of certain rights.
In July 2007 a new, controversial Passenger Name Record (PNR) agreement between the US and the EU was signed25. In February 2008, Jonathan Faull, the head of the EU's Commission of Home Affairs, complained about the US bilateral policy concerning PNR. In February 2008 the US had signed a memorandum of understanding (MOU) with the Czech Republic in exchange for a visa waiver scheme, without consulting Brussels beforehand. The tensions between Washington and Brussels are mainly caused by the lower level of data protection in the US, especially since foreigners do not benefit from the US Privacy Act of 1974. Other countries approached for bilateral MOUs included the United Kingdom, Estonia, Germany and Greece.
3) The Directive 2002/58/EC
Directive 2002/58/EC harmonises the provisions of the Member States required to ensure an equivalent level of protection of fundamental rights and freedoms, and in particular the right to privacy, with respect to the processing of personal data in the electronic communications sector, and to ensure the free movement of such data and of electronic communications equipment and services in the Community (Art. 1). The provisions of this Directive particularise and complement Directive 95/46/EC for the purposes mentioned in paragraph 1. Moreover, they provide for protection of the legitimate interests of subscribers who are legal persons. This Directive shall not apply to activities which fall outside the scope of the Treaty establishing the European Community, such as those covered by Titles V and VI of the Treaty on European Union, and in any case to activities concerning public security, defence, State security (including the economic well-being of the State when the activities relate to State security matters) and the activities of the State in areas of criminal law (Art. 1 para 3).
Legal, regulatory and technical provisions adopted by the Member States concerning the protection of personal data, privacy and the legitimate interest of legal persons in the electronic communications sector should be harmonised in order to avoid obstacles to the internal market for electronic communication in accordance with Article 14 of the Treaty. Harmonisation should be limited to
25
requirements necessary to guarantee that the promotion and development of new electronic communications services and networks between Member States are not hindered. The Member States, providers and users concerned, together with the competent Community bodies, should cooperate in introducing and developing the relevant technologies where this is necessary to apply the guarantees provided for by this Directive, taking particular account of the objectives of minimising the processing of personal data and of using anonymous or pseudonymous data where possible26. Service providers should take appropriate measures to safeguard the security of their services, if necessary in conjunction with the provider of the network, and inform subscribers of any special risks of a breach of the security of the network. Such risks may especially occur for electronic communications services over an open network such as the Internet or analogue mobile telephony. It is particularly important for subscribers and users of such services to be fully informed by their service provider of the existing security risks which lie outside the scope of possible remedies by the service provider. Service providers who offer publicly available electronic communications services over the Internet should inform users and subscribers of measures they can take to protect the security of their communications, for instance by using specific types of software or encryption technologies. The requirement to inform subscribers of particular security risks does not discharge a service provider from the obligation to take, at its own costs, appropriate and immediate measures to remedy any new, unforeseen security risks and restore the normal security level of the service. The provision of information about security risks to the subscriber should be free of charge except for any nominal costs which the subscriber may incur while receiving or collecting the information, for instance by downloading an electronic mail message. Security is appraised in the light of Article 17 of Directive 95/46/EC27.
The European Data Protection Supervisor (EDPS)
According to the provisions of Directive 95/46/EC, an independent authority was founded to watch over and guarantee personal data protection: the European Data Protection Supervisor (EDPS). The EDPS has three main functions: supervision, consultation and cooperation.
Function of Supervision
One of the EDPS' main tasks is to supervise personal data processing by the institutions or bodies of the European Community (EC), insofar as it falls wholly or partially within the scope of EC activities. This supervision work takes various forms. The bulk of it is presently based on notifications of processing operations presenting specific risks, which need to be prior-checked by the EDPS. Based on the facts submitted to him, the EDPS examines the processing of personal data in relation to Regulation 45/2001. In most cases, this exercise leads to a set of recommendations that the institution or body needs to implement, so as to ensure compliance with data protection rules. The EDPS also receives complaints from EU staff members as well as from other people who feel that their personal data have been mishandled by a Community institution or body. If a complaint is admissible, the EDPS usually carries out an inquiry. The findings are communicated to the complainant, and necessary measures are adopted. The EDPS may also carry out inquiries on his own initiative.
Inquiries and inspections are essential for a supervisory authority to have the means for fact-finding, following up of cases and monitoring of compliance in general. In order to monitor compliance with Regulation 45/2001, the EDPS largely relies on the Data Protection Officers (DPOs) who are to be appointed in each institution/body. Apart from bilateral meetings and contacts with the DPOs, the EDPS also takes part in the regular meetings of the DPO network.
26 Directive 2002/58/EC, points 8-9.
27 Directive 2002/58/EC, point 20.
Since January 2004, the EDPS has ensured the supervision of the central unit of Eurodac, a database of fingerprints of asylum applicants and of immigrants found to be illegally present in the EU. An essential aspect of this supervision is the cooperation with national supervisory authorities and the drawing up of recommendations for common solutions to existing problems.
Function of Consultation
The EDPS advises the EU institutions and bodies on data protection issues in a range of policy areas. His consultative role relates to proposals for new legislation as well as to soft-law instruments, like communications, that affect personal data protection in the EU. He also monitors new technologies that may have an impact on data protection. The objective is to ensure that EU citizens' fundamental rights to the protection of privacy and personal data are maintained as society evolves. During 2005 and 2006, a clear focus was laid on proposals aiming to facilitate the storage and exchange of information in the area of freedom, security and justice. As of 2007, the priorities will broaden, with increasing focus on other areas of Community law, such as electronic communications and the information society, as well as public health.
One of the primary tasks of the EDPS is to examine the data protection and privacy impact of proposed new legislation. The Policy Paper of 2005 elaborates how this role is interpreted in terms of limitations in scope, working methods and main orientations. The EDPS uses different instruments in order to exercise this role. The first instrument is a planning tool: each year in December, the EDPS publishes an inventory of his priorities for the coming year. It lists the most relevant Commission proposals, which may require a formal reaction by the EDPS. Those proposals that are expected to have a strong impact on data protection are given high priority. This may also apply to research projects. The second and most important instrument is the formal, public opinion. By issuing opinions on a regular basis, the EDPS establishes a consistent policy on data protection issues. The opinions are addressed to those involved in the legislative negotiations, but they are also published on the website as well as in the Official Journal of the EU. A third instrument of intervention is the EDPS comments, which address data protection issues, for instance in Commission communications. Moreover, the EDPS also sends letters and gives presentations on specific topics, for instance before committees of the European Parliament. A final instrument which the EDPS has at his disposal is the possibility to intervene in cases before the Court of Justice, the Court of First Instance and the Civil Service Tribunal.
Function of Cooperation
The third leg of the EDPS' activities can best be described as cooperation. It covers work on specific issues, as well as more structural collaboration with other data protection authorities. This may involve issues which have an impact on how to interpret a provision of Directive 95/46, which has been implemented into national laws. It can also be relevant in cases where similar complaints have been lodged in several Member States. The overriding aim of the EDPS is to promote consistency in the protection of personal data. The central forum for cooperation in the EU is the Article 29 Working Party. This is where the national data protection authorities meet to exchange views on current issues, to discuss a common interpretation of data protection legislation and to give expert advice to the European Commission. The EDPS also participates in the work to ensure good data protection in the EU's third pillar, which covers police and judicial cooperation. This includes attending a number of meetings of the Joint Supervisory Bodies of the third pillar information systems.
He is also a member of the Working Party on Police set up by the European Conference to prepare advice on third pillar matters. In addition, the
EDPS has also taken part in meetings of the Joint Supervisory Authority of the Schengen Information System, which falls under both the first and third pillars of the EU. One of the most important cooperative tasks relates to Eurodac, where the responsibilities for data protection supervision are shared. Eurodac is a large-scale IT system which contains digital fingerprints of asylum seekers. It consists of national units (subject to national law) and a central unit (subject to Regulation 45/2001). A coordinated approach is essential, as supervision depends on collaboration between the national data protection authorities and the EDPS. The EDPS therefore organises biannual coordination meetings. Two major data protection conferences are organised each year. Every spring, a European Conference assembles data protection officials from authorities in the Member States of the EU and the Council of Europe; every autumn, a wide range of data protection experts, from the public as well as the private sector, gather for the International Conference. International organisations which are exempted from national law often find themselves without a legal framework for data protection. Because virtually all of them process personal data, the EDPS decided to organise a workshop on 'data protection as part of good governance in international organisations' with representatives from some 20 organisations.28
4) About social networking
I) Facebook seems to be the most popular social networking site. Other social networking sites like MySpace and Friendster, as well as online dating sites like eHarmony.com, may require departing users to confirm their wishes several times, but in the end they offer a delete option. Facebook has recently activated an online means for its users to end their subscription and request a complete eradication of their data from the Facebook site. But it is not in the commercial best interests of social networking sites to let subscribers go. Until February 2008, Facebook offered only a deactivation option to subscribers, but it kept copies of the account holder's personal information on the company's servers. Facebook said that it was possible to delete an account fully using a cumbersome manual method, but it was difficult and many users complained that Facebook did not provide clear instructions. Beacon is a Web-based application that tracks and publishes the items bought by Facebook members on outside websites. It was introduced in November 2007 without a transparent, one-step opt-out feature for Facebook's subscribers. After a public backlash in the US, including more than 50,000 Facebook users' signatures on a protest petition, Facebook executives apologised and allowed an opt-out option on the programme. Tensions remain between making a profit and alienating Facebook's users, as well as the regulators in Europe. Facebook says it has about 64 million users worldwide (MySpace has an estimated 110 million monthly active users). The company funds itself by allowing marketers access to demographic and behavioural information. The retention of old accounts on Facebook's servers appears to be an effort to retain and provide its ad partners with as much demographic information as possible.
II. It would be interesting to see how Facebook fares in terms of data protection under European law. Facebook Inc already has an office in London. This also puts it within the alternative definition of "establishment" (in the UK) in Section
28 The EDPS web page: www.edps.eu
3A (a) of the Irish Act, as having an office, branch or agency through which he or she carries on any activity. But one of the difficulties for transnational companies is that the Directive does not allow them to pick just one EU country and comply with its data protection laws: Directive 95/46, Recital 19, puts an onus on a data controller established in multiple territories to fulfil the obligations of all those states. It is quite interesting to consider Ireland's case. One of Ireland's obligations is that if a data controller is outside the EEA (which Facebook Inc is) and the data is processed inside that state (which happens with Facebook data), they must "designate a representative established in the State" (per the Data Protection Acts, Section 3B(c)). I have not been able to find whether Facebook has designated anyone as its representative in Ireland. Consent by the person whose personal data is processed does not remove the duty to register as a data controller or processor29.
While the jurisdiction of the EC Directive (95/46/EC) and the Data Protection Act 1998 (DPA) applies only to sites or servers based in the EU, the ICO has actively investigated Facebook as well as other social networking sites. The regulations provide that personal data shall be processed "fairly and lawfully" and shall be obtained "only for one or more specified lawful purposes" and "shall not be processed in any manner incompatible with that purpose or purposes". In addition, the DPA requires that personal data shall not be kept for longer than is necessary for that purpose or purposes and, finally, that personal data shall be processed in accordance with the rights of data subjects under the DPA. Retention of personal data by Facebook after data subjects wish to cease subscribing to the site, whether for the ease of Facebook's administration or to inflate the demographic and marketing data gathered by Facebook for commercial purposes, is clearly incompatible with these regulations. Moreover, the use of the Beacon application to track and publish the items bought by Facebook members on outside websites, without an opportunity for the data subject to opt out, is at cross purposes with the DPA. It is not only misleading: it does not offer subscribers the choice of indicating to Facebook, the data controller, that they do not wish to have their personal information used for a purpose other than that for which the data was first collected, namely social networking, not demographic profiling and marketing.30
Thomas Otter, who wrote an excellent article on Facebook and data protection, refers to Facebook Inc as claiming "safe harbour" status. This is a method by which companies and organisations working in countries which have not been deemed to have adequate protection for data may export the data of European citizens to those countries. In effect, these organisations pledge to meet the requirements of the Data Protection Directives themselves. US companies that want this status must register with the US Department of Commerce and have a privacy policy which complies with the terms of the Data Protection Directives. The problem for Facebook Inc is that it seems to have grown so quickly that its systems have not caught up with its compliance requirements in this area. For example, as reported by Channel Four News late last year, Facebook would resist requests to delete the personal data it holds when asked to do so by the data subject.
Alan Burlison was the source of that report, and he outlines on his blog the responses he got
29 www.mcgarrsolicitors.ie/tag/data-protection/
30 Susan Mann, Is Facebook Breaking the Data Protection Act?, www.data-strategy.co.uk, posted on 24 April 2008.
from Facebook and then, following his complaints, from the UK Information Commissioner and from TRUSTe, a third party which certifies compliance with European Safe Harbour requirements.31 Here is the response he initially received: "If you deactivate, your account is removed from the site. However, we save all your profile content (friends, photos, interests, etc.), so if you want to reactivate sometime, your account will look just the way it did when you deactivated." After Channel Four News came and interviewed him, he received a follow-up email: "We have permanently deleted your account per your request. We do not retain any information about your account once it is deleted, and thus deletion is irreversible."
This shows that compliance with the data protection principle that a person has a right to have information stored about them amended or erased is technically possible; it just is not policy. That would put Facebook's real data-handling policies at odds with its claim to Safe Harbour status, which in turn would raise the question of whether it is lawful for it to pass that data outside EEA borders; and that, of course, is exactly what it potentially does every time a developer for the Facebook Platform creates an application. Failure to comply with the provisions of the Data Protection Act is a criminal offence. If European users suffer a loss arising from unlawfully held personal data, they would have grounds for an action against Facebook Inc. Facebook's privacy policy shows that it is aware of the Data Protection Directives. This potential financial risk is something which they know or ought reasonably to know about.
II. The social network service Facebook has made an embarrassing volte-face in the wake of the furious reaction to the changes it unilaterally made to its terms of service. On 4 February Facebook, which originally deleted the user records of members who departed the network, indicated that it would continue to keep a full record of each user's messages, actions and updates even once membership had been terminated. Following the outcry by users, privacy groups and the press, Facebook has agreed to revert to its previous terms, at any rate for the time being. Facebook's new conditions also raised key issues for copyright law: the site asserted a licence to use all materials posted by its members in perpetuity. Since much of the material on Facebook is not posted by or with the licence of its copyright owners, the legal consequences both for Facebook and for its current and past members could be serious.32
III. The same problem may arise for anonymous or pseudonymous users of any such pages. A different class of identifiers with similar characteristics, IP addresses, was considered in the Article 29 Working Party's Opinion 4/2007 on the Concept of Personal Data, which considered that "unless the Internet Service Provider is in a position to distinguish with absolute certainty that the data correspond to users that cannot be identified, it will have to treat all IP information as personal data, to be on the safe side" (page 17). Much of the discussion of that statement has concentrated on the circumstances that might make it technically certain that an IP address could not be linked to an identified person.
Typically this will only occur if the network access provider does not know (or does not record) the identity of those to whom it issues IP addresses, for example an open wireless access point that accepts connections from any passing laptop or a cybercafé that does not require individual users to identify themselves. However it has now become apparent that legal, as well as technical, measures could be deployed to prevent linkage, and that this might satisfy the requirements of the Directive to make the identifiers not personal data. This approach has been taken by the EC-funded ACGT medical research project where hospitals provide patient data under contract to an independent third party. The third party
31 www.mcgarrsolicitors.ie/tag/data-protection/
32 www.datonomy.blogspot.com/2009/02/writings-on-wall-for-facebook-data.html
replaces all identifiers with pseudonymous ones, in a way that permits researchers to link together records from the same patient but ensures that only the third party can identify the real-world individual to whom the linked records refer. This is technically similar to many pseudonymous identifier systems.33 However, the third party in this case also has a legal agreement with the project board (which acts as data controller), stating that it will not disclose the linking information to any researcher. This is considered sufficient to permit the researchers to treat their data, associated only with an unlinkable pseudonymous identifier, as non-personal data. The Article 29 Working Party Opinion confirms that in circumstances where "re-identification of the data subject may have been excluded in the design of protocols and procedure ... processing may thus not be subject to the provisions of the Directive" (page 20). It therefore appears that it may be possible to ensure by legal agreement that pseudonymous identifiers can be classed as non-personal data. Clearly, if a service provider subsequently collects information that allows it to link the identifier to the real-world person (for example by asking the user for their name or e-mail address), then the identifier will become personal data, subject to all the compliance requirements of EU and national laws.34
IV. There is also another interesting argument concerning social networking pages. If the definition in the Data Protection Directive 95/46/EC (DPD) is applied literally to social networking sites such as Facebook and MySpace, arguably not only organisations such as Facebook and MySpace are regarded as "data controllers" (through Art. 4 of the DPD), but individuals who post information about others (friends, work colleagues, etc.) would also be regarded as "data controllers" and would thus have to adhere to the legal rules laid down under the DPD (i.e. Art. 7 of the DPD: fair and lawful processing, not excessive, etc.), unless it could be shown that the Art. 9 exemption (processing intended for journalistic, artistic and literary purposes) or the Art. 13 exceptions apply. That seems a logical argument, insofar as any individual is capable of posting another person's personal data without any prior check by the host or the administrator. Nevertheless, there seems to be a need to amend Directive 2002/58/EC: the protection provided by the Directive should also cover "publicly accessible private networks", and mandatory notification of "data security breaches" should be introduced.
COMMISSION: THE PROPOSED APPROACH
In order to tackle the challenges presented by network and information security, the Commission proposes an approach based on dialogue, partnership and empowerment.35
Dialogue
The Commission proposes a series of measures designed to establish an open, inclusive and multi-stakeholder dialogue:
• a benchmarking exercise for national policies relating to network and information security. This should help identify the most effective practices so that they can then be deployed on a broader basis throughout the EU. In particular, this exercise will identify best practices to improve awareness among small and medium-sized enterprises (SMEs) and citizens of the risks and challenges associated with network and information security;
• a structured multi-stakeholder debate on how best to exploit existing regulatory instruments. This debate will be organised within the context of conferences and seminars.

33 Ann West, draft of 27 September 2008.
34 As mentioned above.
35 Communication from the Commission of 31 May 2006: A strategy for a Secure Information Society - "Dialogue, partnership and empowerment" [COM(2006) 251 final - not published in the Official Journal].
Partnership
Effective policy making requires a clear understanding of the nature of the challenges to be tackled. This calls for reliable, up-to-date statistical and economic data. Accordingly, the Commission will ask ENISA:
• to build up a partnership of trust with Member States and stakeholders in order to develop an appropriate framework for collecting data;
• to examine the feasibility of a European information sharing and alert system to facilitate effective responses to threats. This system would include a multilingual European portal to provide tailored information on threats, risks and alerts.
In parallel, the Commission will invite Member States, the private sector and the research community to establish a partnership to ensure the availability of data pertaining to the ICT security industry.
Empowerment
The empowerment of stakeholders is a prerequisite for fostering their awareness of security needs and risks, thus promoting network and information security. For this reason, Member States are invited to:
• proactively participate in the proposed benchmarking exercise for national policies;
• promote, in cooperation with ENISA, awareness campaigns on the benefits of adopting effective security technologies, practices and behaviour;
• leverage the roll-out of e-government services to promote good security practices;
• stimulate the development of network and information security programmes as part of higher-education curricula.
Private sector stakeholders are also encouraged to take initiatives to:
• define responsibilities for software producers and Internet service providers in relation to the provision of adequate and auditable levels of security;
• promote diversity, openness, interoperability, usability and competition as key drivers for security, and stimulate the deployment of security-enhancing products and services to combat ID theft and other privacy-intrusive attacks;
• disseminate good security practices for network operators, service providers and SMEs;
• promote training programmes in the private sector to provide employees with the knowledge and skills necessary to implement security practices;
• work towards affordable security certification schemes for products, processes and services that will address EU-specific needs;
• involve the insurance sector in developing risk management tools and methods.
General Evaluation – Conclusion
The Commission realised that diverging data protection legislation in the EU Member States would impede the free flow of data within the EU. The European Commission therefore decided to harmonise data protection regulation and proposed the Directive on the protection of personal data. The data protection rules are applicable not only when the controller is established within the EU, but whenever the controller uses equipment situated within the EU in order to process data (art. 4). Controllers from outside the EU, processing data in the EU, will have to follow data protection
regulation. In principle, any online business trading with EU citizens would process some personal data and would be using equipment in the EU to process the data (i.e. the customer's computer). As a consequence, the website operator would have to comply with the European data protection rules. The Directive was written before the breakthrough of the Internet, and to date there is little jurisprudence on this subject.
Concerning EU law in general, the Polish Inspector General for Personal Data Protection has noted that the three-pillar structure of the European Union continues to cause a lack of comprehensive, homogeneous regulation of data protection. Within the internal market, such regulation is assured by Directive 95/46/EC, which, incidentally, was created to do away with barriers to the internal market. The present Third Pillar of the EU still does not possess such regulation, despite the provisions of Article 30(1)(b) of the EU Treaty, which require the rules on data exchange in the Third Pillar to be complemented with appropriate principles for data protection. So far, the work on a framework decision on data protection in police and judicial matters has not proved successful, the regulations in force have a very limited character, and Convention 108 of the Council of Europe, which is applicable in this field, no longer responds to current needs. To complete the picture, one also needs to mention the legal uncertainty related to the processing of data at the junction of the First and the Third Pillar, which was fully revealed by the judgement of the European Court of Justice on PNR data transfer to the US, confirming that such operations lie outside the scope of Directive 95/46/EC36.
Nowadays, due to the new threats to public security and to the international situation, police and judicial cooperation in criminal matters is evolving very dynamically, comprising, among others, various new forms of transborder data exchange. Worth mentioning at this point are, among others, the principle of availability and initiatives such as the Prüm Treaty or the European PNR. This gives rise to the question whether the future legal framework at the EU level constitutes an adequate answer. To answer it, we need to analyse what changes are to take place.
The principal effect of the Lisbon Treaty, as previously of the European Constitution, is the abolition of the division of the EU into three pillars. After the changes introduced, the Lisbon Treaty retains two treaties: the Treaty on European Union and the Treaty on the Functioning of the European Union, created by redrafting the former EC Treaty. The former Third Pillar issues shall now lie within the scope of Title V of the Treaty on the Functioning of the European Union. The mode of adopting new legal acts in this field is also to change: it shall now consist of the ordinary legislative procedure, involving both the Parliament and the Council, and qualified majority voting in the latter institution. It needs to be remembered that the abolition of the three-pillar structure of the EU implies homogenisation of the protection of fundamental rights and of the system of legal protection itself, and therefore a reinforcement of the protection of fundamental rights. As far as data protection is concerned, the vital issue is the introduction of a separate provision dealing with data protection into the Treaty on the Functioning of the European Union, that is, into primary legislation.
Furthermore, this provision, Article 16, is incorporated into Title II of the Treaty, "General Provisions". On the one hand, it guarantees the subjective right to data protection; on the other hand, it constitutes a clear legal framework for issuing legal acts on the protection of natural persons with regard to the processing of personal data by EU institutions and authorities and by the Member States in the course of implementation of Community legislation. It needs to be underlined again that data protection regulations shall be adopted by the European Parliament and the Council in the course of the ordinary legislative procedure. I would also like to add that this article shall constitute a legal framework for the activities of independent data protection authorities, which is vital at present.
36 Spring Conference of European Data Protection Authorities Session on Privacy and Security, Michal Serzycki, the Inspector General for Personal Data Protection, Poland. Italy, 2009
There can be no doubt that Article 16 shall constitute a strong and extensive basis for guaranteeing data protection in EU legislation. It is worth noting that the provision was complemented by Article 39 of the Treaty on European Union, modifying its contents with regard to actions taken within the framework of the common foreign and security policy by excluding the participation of the European Parliament in the legislative procedure in this field. It should also be borne in mind that, with regard to police and judicial cooperation in criminal matters, Title VII of Protocol 10 of the Lisbon Treaty introduces certain transitory provisions: the legal acts adopted so far in this field shall remain in force for five years unless they are repealed or modified, and the competencies of the European Court of Justice in relation to such provisions are also restricted. Such provisions give rise to many questions, for example whether the transitory provisions shall also apply to acts regulating data protection issues within the framework of the current Third Pillar of the EU, based on Article 16. It seems that they shall not, which means that the competencies of the European Court of Justice in this field shall be wider. At the same time, a question needs to be asked about the status of the framework decision on personal data protection in the Third Pillar, if it comes into force before the end of the year.
If the framework decision does not come into force before the Lisbon Treaty, we shall find ourselves back at the starting point. On the one hand, there shall be the question of widening the scope of Directive 95/46, without forgetting the exemptions specified in Article 13 and Declaration 21 of the Treaties, according to which "specific principles concerning the protection of personal data and the free flow of such data within the framework of police and judicial cooperation in criminal matters may prove necessary due to the specific nature of these fields", which may indicate a need to adopt separate legal acts. What needs to be remembered at this point is the increased role of the European Parliament in the legislative procedure and the introduction of qualified majority voting in the Council, which may prevent, for example, the bad practice which we experienced in relation to the work on the framework decision, involving the adoption of the lowest commonly accepted level of guarantees of data protection or the total deletion of provisions not accepted by certain countries. As a result, Article 16 may help ensure more coherent principles of data protection within the EU. On the other hand, we cannot forget the work on various legal acts on the use of personal data that is currently under way. Obviously, the entities involved in the process will be interested in finishing it before the entry into force of the Lisbon Treaty. This gives rise to the next question: how shall we act in the given situation?
In the context of personal data protection, it is vital that the Lisbon Treaty gives the EU Charter of Fundamental Rights a legally binding character and allows the European Union to accede to the European Convention on Human Rights. At the same time, Chapter VII of the Charter contains provisions designed to ensure that the introduction of the catalogue of fundamental rights shall not imply widening the scope of competencies of the EU without the consent of the Member States37.
Moreover, the Charter confirms the rights included in the European Convention on Human Rights and the rights based on the constitutional traditions of the Member States. Article 8 of the Charter, guaranteeing the right to the protection of personal data, may have direct effect, although in practice it will probably be used to control the lawfulness of legislation and to interpret it. Certain Member States have negotiated Protocols on the application of the Charter of Fundamental Rights, including its Article 8. Many Polish specialists in European law draw attention to the unclear nature of the Protocol itself and of its real effects, and underline that the content of the Protocol largely overlaps with the content of Chapter VII of the Charter and of Article 6 of the Treaty on European Union. Consequently, one could say beyond doubt that this Protocol does not exempt the fundamental right to data protection in Greece, as this right is also guaranteed in Article 16 of the Treaty on the
37 Spring Conference of European Data Protection Authorities Session on Privacy and Security, Michal Serzycki, the Inspector General for Personal Data Protection, Poland. Italy, 2009
Functioning of the European Union and in Article 9A of the Greek Constitution, which guarantees a separate and autonomous right to the protection of personal data.
II. In that light, there have recently been some steps forward. On 11 March 2009 the European Parliament voted on amendments to legislation on transparency and personal data concerning professional activities. The European Data Protection Supervisor (EDPS) has expressed his satisfaction with how the Parliament addressed the delicate issue of reconciling transparency requirements with data protection obligations. Although some legal fine-tuning is still needed, the amendments adopted by MEPs clearly reflect an approach which strives for a proper balance between transparency on the one hand and data protection on the other. Reflecting on the vote, Peter Hustinx said: "These amendments create clarity and prevent an overzealous application of data protection rules in this area. They confirm that data protection does not stand in the way of public disclosure of personal information in cases where the person involved has no legitimate reason for keeping the data secret." According to the Parliament, only when the privacy and integrity of persons are really at stake can information be withheld. This means, for instance, that information about the professional activities of persons involved in EU affairs will be disclosed, unless there are specific circumstances in which disclosure would adversely affect the person concerned. In the past, access to such information was often refused by the institutions on the basis of data protection requirements. The EDPS expresses his hope that the position taken by MEPs will be upheld during the negotiations between the Commission, the Council and the European Parliament in the coming months.38
III. The Hellenic Data Protection Authority plays a significant role in the Greek State as a watchdog of human rights and data protection. Unfortunately, in the last two years the Greek State has not seemed comfortable with the Data Protection Authority's announcements. Some months ago, the General Advocate of the Supreme Court of Appeals of Greece issued a direction stating that a decision of the Hellenic Data Protection Authority should not be applied by the Police and the Greek courts. That led to the resignation of the President and five members of the Authority. The Art. 29 Working Party is deeply concerned about the developments taking place in Greece after the resignation of the President and five members of the Hellenic Data Protection Authority. In a common statement, the EU Data Protection Commissioners stressed the importance of independent supervision as foreseen by Directive 95/46/EC: "The current situation has immediately to be remedied with a view to re-establishing a functioning independent Data Protection Authority in Greece."
IV. After all these arguments, it is obvious that appropriate solutions cannot be found without considerable debate. There seems to be a legislative gap in personal data control on social networking pages and virtual reality pages. As long as the user is obliged to give his real name and a real e-mail address to the administrator, he is not pseudonymous; he certainly is identifiable according to Art. 2 para 1 of Greek Law 2472/97 and Art. 2 of Directive 95/46/EC. So it remains a serious problem how such pages will be made subject to national laws and Community law. There is a need for the amendments mentioned above.
The Greek Conseil d'Etat accepts in its case-law that the press (among which web pages and the Internet in general should be deemed to fall) has the right to self-restraint within certain limits. The National Authority of Radio and Television has the power to impose sanctions, but it is accepted that the press should be self-regulating. This approach, though, has led to very poor programme quality. So I do not think that such an approach would really be the right solution for data protection. On the contrary, I think that the main problem is not theoretical; the problem is enforcement. The legal framework can easily be amended once we identify the problem. But, really, who is able to catch the offenders?
38 EDPS Press Release of 12 March 2009, official web site of the EDPS: www.edps.eu
E-LEARNING SOCIAL NETWORK ANALYSIS FOR SOCIAL AWARENESS
Dr. Niki Lambropoulos
London South Bank University, UK * Educational Office of Achaia, Greek Ministry of Education & Religious Affairs, Greece
nikilambropoulos@gmail.com
Abstract: The last decade has seen the emergence of virtual learning environments in Computer-Supported Collaborative Learning (CSCL). Web 2.0 and social networking defined the beginning of the 21st century. However, the focus on social interactions in education can be traced back to the beginning of the 20th century. This paper discusses the use of Social Network Analysis (SNA) as a suitable methodology to observe and analyse social interactions in e-learning. A case study conducted in the Greek School Network and in an experimental environment is presented, using both desktop and real-time SNA tools. It is also suggested that real-time SNA can increase e-learning participants' social and cognitive levels, resulting in new knowledge construction and thus increasing e-learning quality.
1. INTRODUCTION
The importance of social awareness in e-learning has only recently been investigated, despite the fact that it has been central to socio-cultural learning theories since the beginning of the 20th century (Dewey, 1938/1963; Vygotsky, 1981; Dron & Anderson, 2007). Effective collaborative e-learning environments should consider not only the transmission of information and knowledge, but also the social interactions between participants (Dillenbourg et al., 1996; Kreijns et al., 2002). Thus, the research question is: what tools and techniques can provide ways to observe, analyse, and measure the individual's presence and social presence for social awareness?
The earliest e-learning systems in the 1970s, which were based upon time-sharing mainframe technology, already incorporated a social dimension. These systems included the Programmed Logic for Automated Teaching Operations (PLATO) (for a review see Van Meer, 2003) and the Time-shared, Interactive, Computer-Controlled Information Television (TICCIT) (Bunderson, 1973). The popularisation of the Internet via the World Wide Web in the 1990s was quickly exploited by educationalists. Current Learning Management Systems (LMS) are anchored in solid educational theories; for example, Moodle is based upon socio-constructivism (Moodle, n.d.). However, social awareness has rarely been exploited.
2. SOCIAL AWARENESS
Social presence and co-presence awareness have been connected to self-presentation and location visibility on a network. Short and colleagues, working on studies of telephone-mediated discussion, defined social presence as the "degree of salience of the other person in a mediated communication and the consequent salience of their interpersonal interactions" (1976:65). Further studies also referred to the concepts of immediacy as psychological distance (Wiener & Mehrabian, 1968) and intimacy as the degree of interpretation of interpersonal interactions (Argyle & Dean, 1965). Later, social presence awareness was defined as the degree to which a person was perceived as real in an online conversation (Meyer, 2002:59). LMS also have a reduced capacity to transmit cues about the locality and behaviour of the person. It is apparent that researchers have only recently considered the importance of computer-mediated social experience, and of e-learning social awareness in particular. In this paper, social awareness as social presence and co-presence will be considered the first social level in e-learning; the second is the cognitive level and will not be presented in detail here.
2.1 Real-Time SNA tools
The Java Universal Network/Graph Framework (JUNG) was found to be compatible with Moodle and was integrated into Moodle for the purposes of this study. The Visualisation Interaction Tools (VIT) depict social interactions through SNA Nodes and Centrality windows that open as Java applets. Each node represents a unique user/learner, and the number of messages is indicated both numerically and by the interaction lines; the more messages, the thicker the interaction lines. This representation depicts interaction density (weight), reciprocity (preferences), as well as in- and out-degree centrality (direction). Closeness, understood as interaction speed, was represented on both graphs, as the geodesic distances indicate the temporal distance between the messages. Thus the tools were designed as follows (Figure 1):
Figure 1. Visualisation Interaction Tools (VIT): Nodes (a) and Centrality (b)
VIT depict the relationship between interactivity and social presence. Such individual and group locality depicts the spatio-temporal relationship in the Moodle forum. The participation tools and VIT were implemented in the Moodle Learning Management System. A rough code sketch of how such a forum-reply graph can be assembled with JUNG is given below; the methodology and the case study are presented next.
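As a rough illustration of how such a forum-reply network could be assembled and displayed with JUNG, consider the following minimal sketch. It assumes the JUNG 2.x API; the ForumMessage class, the sample data and the rendering choices are invented for this illustration and are not the actual VIT implementation used in the study.

// Minimal sketch (assumed JUNG 2.x API): vertices are learners, parallel
// edges are individual messages, so edge multiplicity reflects interaction
// weight. ForumMessage and the sample data are hypothetical placeholders.
import edu.uci.ics.jung.algorithms.layout.CircleLayout;
import edu.uci.ics.jung.graph.DirectedSparseMultigraph;
import edu.uci.ics.jung.graph.Graph;
import edu.uci.ics.jung.visualization.VisualizationViewer;
import javax.swing.JFrame;
import java.awt.Dimension;
import java.util.Arrays;
import java.util.List;

public class ForumGraphSketch {

    /** A reply posted by 'sender' in response to a message by 'recipient'. */
    static class ForumMessage {
        final String sender, recipient;
        ForumMessage(String sender, String recipient) {
            this.sender = sender;
            this.recipient = recipient;
        }
    }

    public static void main(String[] args) {
        // Hypothetical sample data standing in for Moodle forum posts.
        List<ForumMessage> messages = Arrays.asList(
                new ForumMessage("P37", "O2"),
                new ForumMessage("O2", "P37"),
                new ForumMessage("P12", "P37"),
                new ForumMessage("P05", "P37"));

        // One vertex per learner, one (parallel) edge per message.
        Graph<String, Integer> graph = new DirectedSparseMultigraph<String, Integer>();
        int edgeId = 0;
        for (ForumMessage m : messages) {
            graph.addVertex(m.sender);
            graph.addVertex(m.recipient);
            graph.addEdge(edgeId++, m.sender, m.recipient);
        }

        // Simple circular layout in a Swing window; the actual VIT applets
        // used richer rendering (message counts, line thickness).
        VisualizationViewer<String, Integer> viewer =
                new VisualizationViewer<String, Integer>(
                        new CircleLayout<String, Integer>(graph), new Dimension(400, 400));
        JFrame frame = new JFrame("Forum reply network (sketch)");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(viewer);
        frame.pack();
        frame.setVisible(true);
    }
}

In a sketch like this, the in- and out-degree of a vertex correspond directly to messages received and sent, which is the information the Nodes view encodes through arrows and message counts.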
3. METHODOLOGY: SOCIAL NETWORK ANALYSIS
Until recently, SNA was rarely used in education. The limited educational studies employed desktop tools into which the data had to be inserted manually, which is a time-consuming process (Breiger, 2004; Bender-deMoll & McFarland, 2006; Dawson et al., 2008). The researcher has to input the data either based on observation (Petropoulou, 2006) or by extracting the data into a file, such as an Excel file (Jeong, 2005). Social Network Analysis (SNA) can depict e-learners' interactions. This is valuable because relationships represented through text and words alone have limited capacity to describe the social network; in other words, photos provide a different view of a group of people than the script of what they said. SNA focuses on global (also found as complete or group) and ego networks. Global cohesion and centrality were investigated using UCINET (Borgatti et al., 2002); cohesion can represent the interactions' weight (density), participants' preferences (reciprocity), any small groups (cliques), and similar behaviour (structural equivalence); centrality can depict interaction direction and is analysed only via the real-time tool due to lack of space.
3.1 The study
The final research consisted of two studies, the second building on the first. The first case study was undertaken without the tools on the Greek School Network (GSN) and continued in the experimental environment with the same participants, who were in-service Greek teachers. The tools were incorporated in an experimental Moodle, version 1.4.5, the same version as Moodle at GSN. The studies started on 01/03/2007 and finished on 31/03/2007, with 5 e-tutors including the first author. The pre-post questionnaires were cleansed and screened for normality (skewness and kurtosis) using the Hierarchical Clustering Explorer 3.0 tool. Forty questionnaires were finally accepted for analysis. Most of the
participants were primary school teachers (n=36, 36%), and IT secondary school teachers (n=22, 28%); there were 9 female (n=9, 22.5%) and 31 male (n=31, 77.5%). The analysis showed 1 participant between 20-30 years old (n=1, 3%), 14 between 30-40 years old (n=14, 36%), and 24 older than 40 years old (n=24, 61%) (1 missing).
3.2 Global Social Network Analysis
The level of global cohesion was measured by assessing network density, reciprocity, cliques, and structural equivalence. Density is the proportion of possible links present in the network, that is, the ratio of the number of links present to the maximum possible number of links. Density was evaluated from the adjacency connection reports in UCINET. E-learners' density was rather low (0.0256) in Moodle@GSN, while higher in the experimental environment (0.0418). This means that 2.6% of all possible links were present in Moodle@GSN and 4.7% in the research pool; in other words, there was an increase in density. The participants actually recognised their limited participation; they said they were not as active as they wanted to be. In addition, the 2 highest posters in Moodle@GSN influenced the group's density level; this means that the actual increase in participation was almost doubled (0.0418 − 0.0256 = 0.0162). This was also evident in the discussion forum text richness, which doubled in the research pool (for more information see Lambropoulos, 2008). Reciprocity in SNA is the number of ties that are involved in reciprocal relations relative to the total number of actual ties (Hanneman & Riddle, 2005). Reciprocity appears higher among the e-learners (Graph 1):
Graph 1. Reciprocal ties in GSN (a) & the research pool (b)
There were 4 reciprocal ties in GSN (28.6%) and 10 (71.4%) in the research pool. However, due to the e-tutors' role in GSN, this appears more as an evolutionary process. The increase in reciprocal ties is another indication of evolution in discussion from monological to dialogical sequences between two participants. Strong and weak reciprocal ties can also define strong or weak relationships within an e-learning community. Therefore, reciprocal ties can maintain a strong social network; thus, they are important for knowledge exchange and community knowledge building, as they reflect members' constant give and take within a community (Preece, 2004). Reciprocity is also related to social exchange theory (Blau, 1964); it posits that individuals engage in social interaction based on expectations of the benefits active participants can gain from active participation, for example some sort of personal gain or status. However, reciprocity can also be triggered by intangible returns in the form of intrinsic satisfaction and self-actualisation (Äkkinen, 2005). Based on participants' comments about feeling guilty because of inadequate participation in the course, as well as their initial voluntary involvement in the GSN courses, it appears that their target was learning, and thus only implicitly related to social exchange theory. This also means that social loafing was not evident in this study, as the participants rather had shared interests and values.
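As a further illustration of how the two cohesion figures reported above are obtained, the following is a small self-contained sketch in plain Java (not the UCINET routines used in the study) that computes density and reciprocity from a binary adjacency matrix; the example matrix is invented for illustration.

// Sketch of the two global cohesion measures discussed above, computed from
// a binary adjacency matrix a[i][j] = 1 if actor i sent at least one message
// to actor j. Not the UCINET implementation; the example matrix is invented.
public class CohesionSketch {

    /** Density of a directed network: present ties / n*(n-1) possible ties. */
    static double density(int[][] a) {
        int n = a.length, ties = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j && a[i][j] == 1) ties++;
        return (double) ties / (n * (n - 1));
    }

    /** Proportion of ties that are involved in reciprocal (two-way) relations. */
    static double reciprocity(int[][] a) {
        int n = a.length, ties = 0, reciprocated = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j && a[i][j] == 1) {
                    ties++;
                    if (a[j][i] == 1) reciprocated++;
                }
        return ties == 0 ? 0.0 : (double) reciprocated / ties;
    }

    public static void main(String[] args) {
        int[][] a = {          // hypothetical 4-actor forum
                {0, 1, 0, 0},
                {1, 0, 1, 0},
                {0, 0, 0, 1},
                {0, 0, 0, 0}};
        // With this matrix: 4 of 12 possible ties are present (density 0.33)
        // and 2 of the 4 ties form a reciprocal pair (reciprocity 0.50).
        System.out.printf("density = %.4f, reciprocity = %.4f%n",
                density(a), reciprocity(a));
    }
}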
A clique is a subgroup: a set of actors forming a maximal complete subgraph of three or more nodes (members) that are all adjacent to each other, where no other node in the network is also adjacent to all members of the clique (Laghos, 2007). Cliques may overlap, that is, a forum member (node) can be a member of more than one clique (Bock & Husain, 1950). The results presented in the following table are cumulative and refer to cliques created by 3, 4, 5 and 6 participants. Most cliques were created by 3 participants in both environments. The e-tutors dominated the cliques, gathering up to 6 participants. The cliques were developed without any intervention by any of the participants, the e-tutors or myself. It is interesting to note that the top scorers had inter-clique connections. When the cliques increase, the social network remains active and thriving, especially if e-learners interact with other e-learners who did not appear in a clique before; these are the activated lurkers. In other words, the absence of cliques could have indicated a lack of clustering that would have reflected the prevalence of weak ties. As most of the participants did not know each other before the study and were more skilled in the research environment, the cliques were the glue of the forums. However, what fostered the cliques was not investigated in this study. Structural equivalence describes the actors who have similar patterns of relations to others in the network and exhibit similar communication behaviour. It presents a different clustering view within a human network. Equivalence is important for generalisations about social behaviour and social structure; actors must not be thought of as unique persons, but as examples of categories (sets of actors) who are in some way "equivalent" (Hanneman, 2001). Two actors (nodes) are said to be structurally equivalent if they have identical ties with themselves, each other and all other vertices (de Nooy et al., 2005). It is computed by the Euclidean distance of tie-values from and to all other nodes (Lorrain & White, 1971). The CONCOR technique (CONvergence of iterated CORrelations; White et al., 1976) uses dendrogrammes (tree-diagrammes) for hierarchical clustering, whereas other techniques use algorithms to calculate network members' individual behaviour (e.g. Everett & Borgatti, 1993). The CONCOR technique calculates Pearson's correlation coefficient between columns and identifies two nodes as structurally equivalent if the corresponding rows and columns of the adjacency matrix are identical. So the degree to which two nodes are structurally equivalent can be evaluated by measuring the degree to which their columns are identical. CONCOR is a divisive, top-down clustering technique; it begins with one group and then divides it up, so the dendrogramme looks like an inverted tree. This structure is calculated and thus artificial, and may therefore fail to identify observed clusters. Dendrogrammes for network clustering are interpreted as follows: the labels of the actors are given on the left in UCINET; the network positions appear as lines; the numbers at the top are the clustering levels, indicating the number of clusters at the level of sharing at least 3 ties; the column in the middle is the row number in the UCINET matrix for the network. (Dividing clusters of 3 or fewer individuals is not preferable, as correlations become very unstable.) (Graph 2):
Graph 2. Structural equivalence dendrogrammes in GSN (all)
If each line represents a participant, the CONCOR dendrogramme reveals 7 splits on the first level and 2 splits on a second level, with four actors participating in all groups. This means that 7 actors exhibited similar behaviour, 3 of them also at the second level (Graph 3):
Graph 3. Structural equivalence dendrogrammes in the research pool (all)
In the research pool, 3 participants were active in 2 second-level groups and 5 first-level groups. The next CONCOR dendrogramme reveals 7 first- and 4 second-level participants (Graph 4):
Graph 4. Structural equivalence dendrogramme in GSN (e-learners)
Here the ties are fewer than in the dendrogramme that included the researcher and the 2 high-participation e-tutors; however, the overall structure of the groups remains the same. Lastly, the next dendrogramme refers to the e-learners in the research pool (Graph 5):
Graph 5. Structural equivalence dendrogramme in the research pool (e-learners)
This graph reveals 5 first- and 2 second-level multi-actor positions, with one solo-actor position. In conclusion, when grouping actors with equivalent behaviour, the results from the previous dendrogrammes appear as follows (Table 1):
Table 1. Equivalent e-learners

                Moodle@GSN              Research Pool
                All     E-learners      All     E-learners
  1st level      7          7            5          5
  2nd level      3          4            3          2
Structural equivalence seems to be steady in the two e-learning environments. There were 7 and 5 participants with first-level equivalence in the two research pools. There was one more e-learner with second-level equivalence when e-tutors were excluded (4 − 3 = 1), whereas the opposite was the case in the research pool, with one fewer e-learner (3 − 2 = 1). In other words, more e-learners were imitating e-tutors' and other e-learners' behaviour, and thus passive behaviour was decreasing. This was in accordance with participants' comments on watching what the e-tutors were doing and learning vicariously. This means that observation had a positive effect on replicating behaviour and on active participation, especially as the participants did not have previous knowledge of working and learning online. Overall, the social network analysis attributes indicated that the interactions' weight (density) doubled in the research pool; participants' preferences (reciprocity) also increased significantly; more similar behaviours (structural equivalence) were observed in Moodle@GSN than in the research pool; and there were some small groups (cliques) that remained almost the same throughout the study.
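For reference, the two ways of quantifying structural equivalence mentioned above can be written out explicitly; these are the standard textbook formulations rather than anything reproduced from UCINET's internals. For an adjacency matrix X = (x_{ij}), the Euclidean-distance measure (Lorrain & White, 1971) and the Pearson correlation iterated by CONCOR are:

\[
  d_{ij} \;=\; \sqrt{\sum_{k \neq i,j} \Big[ (x_{ik} - x_{jk})^{2} + (x_{ki} - x_{kj})^{2} \Big]}
\]
\[
  r_{ij} \;=\; \frac{\sum_{k} (x_{ik} - \bar{x}_{i})(x_{jk} - \bar{x}_{j})}
                    {\sqrt{\sum_{k} (x_{ik} - \bar{x}_{i})^{2}}\,\sqrt{\sum_{k} (x_{jk} - \bar{x}_{j})^{2}}}
\]

Two actors are perfectly structurally equivalent when d_{ij} = 0 (or, in the correlational form, r_{ij} = 1), which is the condition of identical rows and columns referred to above.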
3.2.1 Local Nodes and Centrality in Real-Time
Just as a photo and a recording give a different 'picture' of two people discussing something, so too the two SNA tools running in real time aimed to provide a different viewpoint on the discussion and to triangulate the events of active participation and collaborative e-learning. Visualisation Interaction Nodes (VIT Nodes) and Centrality (VIT Centrality) were integrated in Moodle in the research pool. A different abstract representation was given with regard to interaction density (weight), reciprocity (preferences), as well as in- and out-degree centrality (direction). Closeness, understood as interaction speed, was represented on both graphs, as the geodesic distances indicate the temporal distance between the messages. Information control (betweenness) could also be observed. (Note that the participants were not given specific information on the exact use of these tools, as the tools should indicate their own use, i.e. usability.) In VIT Nodes the individuals are represented as circles (nodes), the direction of the messages is indicated by an arrow, and the number represents the number of messages (Graph 6):
Graph 6. VIT Nodes in CeLE IX
P37 was the information broker in this CeLE. The reciprocal tie with O2 was an argument. She also responded to her own message a couple of hours after the argument with O2. Most participants were replying to P37, and two of them talked to each other. It is interesting that this CeLE was developed by different individuals, with only two interlocutors exchanging 2 messages. In other words, the discussion was a collaborative activity between 7 individuals. VIT Centrality provided a different viewpoint (Graph 7):
Graph 7. VIT Centrality in CeLE IX
In VIT Centrality, P37 is clearly located in the middle of the e-learning social network. VIT Centrality also indicates the response time space related to the geodesic distances between the participants. As a central connector and information broker she moved the knowledge around, leading to a new proposition by taking into account her co-learners' responses, even though they appeared as low-activity e-learners (i.e. only O2 was an e-tutor).
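For completeness, the centrality notions that VIT Nodes and VIT Centrality visualise correspond to the standard SNA definitions below; these are the usual textbook formulas, not extracted from the tool itself. Here a_{uv} is the adjacency matrix, d(v,u) the geodesic distance, n the number of actors, and \sigma_{st}(v) the number of shortest s-t paths passing through v:

\[
  C_{D}^{\mathrm{in}}(v) = \sum_{u} a_{uv}, \qquad
  C_{D}^{\mathrm{out}}(v) = \sum_{u} a_{vu}
\]
\[
  C_{C}(v) = \frac{n-1}{\sum_{u \neq v} d(v,u)}, \qquad
  C_{B}(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
\]

In-degree and out-degree correspond to messages received and sent, closeness to how quickly a participant can be reached along reply chains (the interaction speed above), and betweenness to the information-control role played by a broker such as P37.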
4. DISCUSSION
Overall, the Visualisation Interaction Tools Nodes and Centrality provided opportunities for the e-learning participants to observe their personal styles and performance within the e-learning community, as well as to observe the small groups created within their discussions. Being self-aware corresponds to self-organised learning and development. Information organisation roles can be unofficially assigned to e-tutors to support collaborative e-learning development. To sum up, it appears that after the initial knowledge acquisition and information exchange, the collaborative techniques and tools helped participants to learn from the e-tutors, from their co-learners and on their own. As they did not have any previous knowledge of collaborative learning and its techniques, they acquired this knowledge for community knowledge building. This means that e-tutors have a complex job that incorporates moderating as well as e-tutoring (Salmon, 2000). It was also suggested that knowledge awareness plays a major part in the creation of opportunities for efficient and effective collaboration (Ogata & Yano, 2000), which suggests that different learning styles are supplementary to each other in e-learning environments. Lastly, with regard to collaborative e-learning quality, the participants learned from the e-tutors (instructional learning), from the other e-learners (collaborative learning) and on their own how to work together and how to use the new tools and technologies (vicarious and self-organised learning). The discussions were found to build on progressive discourse and to fill the middle space between internalisation and externalisation in the form of monological-to-dialogical sequences. This was evident not only within single discussions but in the participation process as a whole, as monological postings stand alone without open clues for dialogue, as in information provision. In a way, e-learners were progressively adopting two-way communication. This means that there were more clues as opportunities for critical engagement in dialogue in the research pool, caused by two events: the initial need for familiarisation with collaborative e-learning in the early stages of the online course and the sleeper effect, and the use of MessageTag, which revealed the collaborative e-learning structure. Moreover, correcting communication gaps between the collaborative learning discussion stages is feasible for the e-learning participants.
5. CONCLUSIONS & FUTURE TRENDS
This research investigated the integration of tools and evaluation techniques in popular Learning Management Systems to support social awareness, as e-learners' presence and co-presence, in order to increase the quantity and quality of interactions in e-learning. The shift in participation from passive to active fostered interaction flow and continuity and thus confirmed the concept of learning as participation (Lave & Wenger, 1991). The tools need to support the development of a sense of identification and co-presence between the e-learners. The e-tutors created groups with similar behaviour around them, so one implication refers to simulating (cloning) e-tutoring via vicarious learning. Increasing reciprocity can facilitate the transition from monologues to dialogues, as there are increasing clues as opportunities for critical engagement in dialogue. The e-tutors can guide e-learners into the journey of critical thinking and knowledge co-construction and then leave them to their own capabilities (fading). Therefore, in such favourable circumstances and with usable tools, passive participants are highly likely to become engaged. Educators and designers need to revisit teaching and learning strategies and their relationship to the level of e-learners' engagement, taking into consideration the strongly idiosyncratic character of e-learners' interaction and engagement. Consequently, technologies need to be able to adapt to individuals' changing needs, learning and interaction styles. Social awareness has only recently been related to social intelligence. The latter has been accepted as an important soft skill (Goleman, 2007), even though it was already mentioned by Thorndike (1920) as the knowledge to manage social situations and 'act wisely in human relations'. An implication is linked to socio-constructivist learning: social intelligence can be a significant factor for successful e-learning communities. Associated tools need to function on multiple levels, supporting interaction for individuals, small groups and networks through educational social software.
ACKNOWLEDGMENTS
This paper presents part of doctoral research conducted at the Centre for Interactive Systems Engineering, London South Bank University, with Dr. Xristine Faulkner and Professor Fintan Culwin. The Educational Office of Western Greece and the Greek Ministry of Education and Religious Affairs provided funding for this study in the form of paid leave of absence for the author. The tools were built in cooperation with Intelligenesis Consultancy Ltd (http://intelligenesis.eu). Many thanks to the Greek School Network, Michael Paraskevas and Vangelis Grigoropoulos, as well as to all the Greek teachers who participated in the study.
REFERENCES
Äkkinen, M. (2005). Conceptual Foundations of Online Communities (Working Paper W-387). Helsinki: Helsinki School of Economics. Retrieved 14/02/2006, from http://helecon3.hkkk.fi/pdf/wp/w387.pdf.
Argyle, M., & Dean, J. (1965). Eye-contact, distance and affiliation. Sociometry, 28: 289-304.
Bender-deMoll, S., & McFarland, D. A. (2006). The Art and Science of Dynamic Network Visualization. Journal of Social Structure, 7(2).
Blau, P. (1964). Exchange and Power in Social Life. New York, NY: John Wiley & Sons.
Bock, R. D., & Husain, S. Z. (1950). An adaptation of Holzinger's B-coefficients for the analysis of sociometric data. Sociometry, 13: 146-153.
Borgatti, S. P., Everett, M. G., et al. (2002). Ucinet for Windows: Software for Social Network Analysis. Harvard, MA: Analytic Technologies.
Breiger, R. L. (2004). The Analysis of Social Networks. In M. Hardy & A. Bryman (Eds.), Handbook of Data Analysis (pp. 505-526). London: Sage Publications.
Bunderson, C. V. (1975). Team production of learner-controlled courseware. In S. A. Harrison & L. M. Stolurow (Eds.), Improving Instructional Productivity in Higher Education (pp. 91-111). Englewood Cliffs, NJ: Educational Technology.
Dawson, S., McWilliam, E., et al. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Ascilite 2008, Melbourne. Available at: http://www.ascilite.org.au/conferences/melbourne08/procs/dawson.pdf. Last access: 02/02/2009.
Dewey, J. (1938/1963). Experience and Education. New York: Collier.
Dillenbourg, P., Baker, M., Blaye, A., & O'Malley, C. (1996). The evolution of research on collaborative learning. In E. Spada & P. Reiman (Eds.), Learning in Humans and Machine: Towards an interdisciplinary learning science (pp. 189-211). Oxford: Elsevier.
Dron, J., & Anderson, T. (2007). Collectives, Networks and Groups in Social Software for E-Learning. Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Quebec. www.editlib.org/index.cfm/files/paper_26726.pdf.
Hanneman, R. A., & Riddle, M. (2005). Introduction to Social Network Methods. Riverside, CA: University of California, Riverside.
Goleman, D. (2007). Social Intelligence: The New Science of Human Relationships. New York: Bantam Books.
Jeong, A. (2005). Methods and Tools for the Computational Analysis of Group Interaction and Argumentation in Asynchronous Online Discussions. Technology and Learning Symposium, New York, NY.
JUNG (Java Universal Network/Graph Framework), http://jung.sourceforge.net/.
Kreijns, K., Kirschner, P. A., & Jochems, W. (2002). The Sociability of Computer-Supported Collaborative Learning Environments. Educational Technology & Society, 5(1).
Laghos, A. (2007). Assessing the Evolution of Social Networks in e-Learning. Unpublished doctoral dissertation, Centre for Human-Computer Interaction Design, City University, London.
Lambropoulos, N. (2008). Tools and Evaluation Techniques for Collaborative E-Learning Communities. Unpublished doctoral dissertation, Centre for Interactive Systems Engineering, London South Bank University. Available at: http://intelligentq.net/nikilambropoulos/?dl_id=19. Last access: 02/02/2009.
Lave, J., & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge; New York: Cambridge University Press.
Lorrain, F. P., & White, H. C. (1971). Structural equivalence of individuals in social networks. Journal of Mathematical Sociology, 1: 49-80.
Meyer, K. A. (2002). Quality in Distance Education: Focus on On-line Learning. San Francisco, CA: Jossey-Bass.
Moodle (n.d.). http://docs.moodle.org/en/Philosophy
de Nooy, W., Mrvar, A., & Batagelj, V. (2005). Exploratory Social Network Analysis with Pajek. Cambridge: Cambridge University Press.
Preece, J. (2004). Etiquette, empathy and trust in communities of practice: Stepping-stones to social capital. Journal of Universal Computer Science, 10: 294-302.
Short, J., Williams, E., & Christie, B. (1976). The Social Psychology of Telecommunications. London, UK: John Wiley & Sons.
Wiener, M., & Mehrabian, A. (1968). Language within Language: Immediacy, a Channel in Verbal Communication. New York: Appleton-Century-Crofts.
White, H. C., Boorman, S. A., & Breiger, R. L. (1976). Social structure from multiple networks: I. Blockmodels of roles and positions. American Journal of Sociology, 87: 517-547.
van Meer, E. (2003). PLATO: From Computer-Based Education to Corporate Social Responsibility. Iterations: An Interdisciplinary Journal of Software History.
Vygotsky, L. S. (1981). The instrumental method in psychology. In J. Wertsch (Ed.), The Concept of Activity in Soviet Psychology. Armonk, NY: Sharpe.