Development of a ZMG-axis
Alberti, Sandro; 16 June, 2002 [text17]
[While I work on the never-ending evaluation of Disneyland’s New California, I leave you this first step towards a virtual reality for Guadalajara; fresh from my first 2-day workshop ‘in-town’]
There are several ‘key’ reasons why one should not recreate physical reality in a virtual context:
1. The most basic: a physical reality already exists; it would seem a waste of time to create another one. Of course, there are legitimate ‘modifications’ to this theme, primarily due to some type of ‘augmentation’: a recreated reality, for example, might allow one to visit certain places that are otherwise inaccessible (due to schedule, expense, physical limitations, etc.). However, such improvements are simply partial aspects of what could be a richer, more complex experience (quite different, in the end, from the ‘original’ physical reality).
2. Different fundamental laws rule the physical and virtual contexts. In the physical realm, there is a uni-directional gravitational attraction towards the core of planet Earth; material properties are practically unchangeable or very slow to change (such as the way in which a solid ice cube becomes liquid water almost imperceptibly; of course, I refer here to ‘everyday’/ ‘objective’ changes, beyond direct/ authored changes); and rules are generally stable and understood/ expected. In contrast, in a virtual reality there is no underlying gravity, sunlight, or electromagnetism; objects are not fixed (they can change suddenly or continuously, from solid to transparent, from one color or shape to another, etc.); and the only ‘objective’/ stable elements are those determined by the creators of such realities.
3. Data and hyper-links. Apart from the augmented re-presentation of a physical presence, a virtual reality always forms part of a network of computers and databases (particularly if linked to the WWW). It would be ineffective (limited/ simple-minded) to focus only on the representation of the physical world, when such a richness of data/ information can also be diffused (for example, in a virtual reality, a statue of the Roman goddess Minerva could appear as such, with its ‘physical’ stone body and helmet and shield, etc., in order to enable users to experience spatial orientation and positioning within a certain plaza in Guadalajara; however, such a virtual monument could also be employed to diffuse numerous data, such as its history, physical proportions, exact world location, socio-cultural meaning, etc.).
4. Risk. If one begins to recreate an ‘identical’ physical simulacrum within a virtual reality, one effectively diffuses all of its data via an accessible database. Access to this information can be employed by thieves to break into your Los Angeles home more easily (since they can easily find out about a certain tunnel or broken skylight all the way from San Diego), or by terrorists, of course.
5. Time. Everything is temporal in physical reality. In order to recreate it ‘faithfully’, it would need to be recreated continuously, in order to update new street decorations, new business openings, destruction, renewal, etc. This time could be better spent in investigating and applying new techniques to improve the access to data (as well as the automation of certain, important updates). Beyond this, the ‘true’ recreation of physical reality is as untenable as ‘truth’ itself (just imagine attempting to update every leaf of grass, every molecule,…; from the beginning, the exercise is ‘framed’).
Document-008 (‘Maps without End’) provides some additional information on the manner in which cities like New York and Los Angeles have decided to create their virtual representatives and the reasons why certain goals were (or were not) pursued.
The human capacity to acquire data via visual means is quite limited. We are able to distinguish color, texture, and form, from amongst an endless series of characteristics that compose the objects of our context. In the physical world, this is quite sufficient, since the focus is to provide a general representation of objects ‘in themselves’. But this limitation becomes an emphasized problem in certain virtual realities, where objects represent themselves as well as ‘linked’ data. What separate characteristics can one employ in order to represent the embedded data?
A virtual reality cannot be ‘THE’ virtual reality (it cannot represent all existing data). Thus it is logical that various virtual realities must be present (each as a particular vision or documentation). The ZMG-axis has the possibility of presenting one of the most comprehensive representations in Mexico (and the world), since it continues to be developed ‘ahead of its time’ (virtual realities are not expected to overtake the popularity of 2d Web sites until the next decade). One must understand that collaboration in this project will enrich the representation beyond any potential allowed to a single author. Beyond this, any ‘administrative’ restrictions upon particular input or visions cannot be understood as ‘dead ends’, since anybody can proceed to develop his/her own virtual reality (in the end, we must be clear on this: the ZMG-axis is open to collaboration, but can only achieve clarity of representation through restrictions that make it a complex, rather than chaotic, system).
What data must one represent in the development of a virtual reality for Guadalajara? Well, probably those that focus on particular representations of that wondrous city. We might begin with what people are used to seeing (the ‘basic needs’; all that is already represented by city-relevant Web sites): timeline, historical events, customs/ rituals, food/drink, crafts… essentially, a tourist guide. Also, what ‘locals’ are willing to share (this, more typical of personal Web sites): poems, lifestyles, opinions, visions, local interpretations.
Before proceeding, I would like to clarify some formal approaches that have been implemented in first-world, avant-garde practices, and that can inform the formation of a virtual reality for Guadalajara. Particularly, I would like to focus on the approaches implemented at the ETH in Zurich, Switzerland (subject of various contemporary texts on the topic of Information Architecture; www.ethz.ch).
The use of the program ‘Sculptor’ allows for the creation of digital spaces via the intersection of positive and negative volumes (positives add and negatives subtract). The volumes are orthogonal and simple, which leads to a minimalist interpretation of spaces (one must remember that, in a digital context, it is best to reduce the numbers of object polygons/ facets). Overall, this approach leads to questions about the ways in which Guadalajara’s urbanism can be simplified, while maintaining a ‘legitimacy’ of representation.
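Sculptor runs in its own environment, but its additive/subtractive logic can be sketched in a few lines of Python (a hypothetical illustration of the principle; the `Box` class and `is_solid` helper are my own names, not part of Sculptor):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned orthogonal volume: positive adds mass, negative subtracts it."""
    lo: tuple  # (xmin, ymin, zmin)
    hi: tuple  # (xmax, ymax, zmax)
    positive: bool = True

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def is_solid(point, volumes):
    """A point is solid if the last volume enclosing it is positive,
    so later negatives carve material out of earlier positives."""
    solid = False
    for v in volumes:
        if v.contains(point):
            solid = v.positive
    return solid

# a massive block with a corridor subtracted through its middle
space = [
    Box((0, 0, 0), (10, 10, 10)),                 # positive mass
    Box((4, 0, 0), (6, 10, 10), positive=False),  # negative corridor
]
```

Note how few numbers define the space: this is the minimalist economy (few simple, orthogonal volumes rather than many polygons) that the course exploits.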
In another course, the ‘X-Worlds’ program allowed students to create orthogonal spaces within cubic volumes. These were subsequently deformed in order to achieve a controlled complexity (via the use of the ‘worldForm’ tool). Deformation is a key process in digital programming. In the digital context (where visual forms are not fixed, since they merely represent mathematical data), ‘topology’ has become a key process (the multiple deformations of ‘form’; the evaluation of differences between forms and their deformations). This process, similar to that found at ‘Phase[x]’ (see below) is one of the methodologies for assigning ‘additional meanings’ to digital objects. Once created, an object is maintained as an ‘original essence’. The object can be assigned multiple deformations and/ or reformations, any of which can be superimposed or even eliminated (the original object always exists, for comparisons that can be linked to embedded data, and, in the case of the ZMG-axis, in order to arrive at a comparison point to physical reality). It could be said that, in the digital realm, quite Platonically, one always has access to what exists statically in the physical world, via ‘souls’/ ‘essences’, while topology could yield additional information (including diagrammatic influences/ effects also in the physical realm, or any of a series of external-linked data).
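The idea of an untouched ‘original essence’ carrying a removable stack of deformations can be sketched as a simple data structure (a Python illustration of the principle only; the class and method names are hypothetical, not taken from ‘X-Worlds’ or ‘worldForm’):

```python
class DigitalObject:
    """Stores an original vertex list that is never modified, plus an
    ordered stack of named deformations that can be added or removed."""

    def __init__(self, vertices):
        self.original = list(vertices)  # the 'essence'; always kept for comparison
        self.stack = []                 # ordered (name, function) pairs

    def deform(self, name, fn):
        self.stack.append((name, fn))

    def undo(self, name):
        self.stack = [(n, f) for n, f in self.stack if n != name]

    def current(self):
        """Re-derive the visible form from the original each time,
        so any deformation can be superimposed or eliminated at will."""
        verts = list(self.original)
        for _, fn in self.stack:
            verts = [fn(v) for v in verts]
        return verts

obj = DigitalObject([(0, 0, 0), (1, 0, 0)])
obj.deform('stretch-x', lambda v: (v[0] * 2, v[1], v[2]))
```

Because the visible form is always recomputed from the original, the comparison point to physical reality (or to embedded data) survives any number of deformations.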
The landscape-object of the adjacent image was one of the winners of a particular course contest. Architecture students were assigned the development of a digital structure based on ‘basic building blocks’ that were to be created in VRML. These building blocks (basic geometric shapes) had to be of limited and simple types, and able to be connected seamlessly in at least 2 points (a sort of 3d tiling pattern), in order to generate surfaces and volumes at a larger scale. This course reveals the importance of multiple connections and formal simplicity. If these blocks were based on forms and proportions that already exist in a physical context, a representative virtual reality would naturally be generated, without resorting to high bandwidth or loss of clarity.
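The course rule (blocks must join seamlessly at two or more points) can be expressed as a small check. This is a hypothetical sketch: the ETH brief does not specify how connections were actually tested, so representing each placed block by the set of its connection points is my own simplification:

```python
def shared_points(a, b):
    """Count the connection points two placed blocks have in common."""
    return len(set(a) & set(b))

def seamless(a, b):
    # the brief's rule: a valid joint needs at least two shared points
    return shared_points(a, b) >= 2

# corner points of one face of a unit cube, and of its neighbor
face_a = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)}
face_b = {(1, 0, 0), (1, 1, 0), (2, 0, 0), (2, 1, 0)}  # shares an edge
```

Two shared points define a shared edge, which is what lets the blocks tile into larger surfaces and volumes without gaps.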
‘Time-Space Annotations’ is a course in which the creation and programming of digital forms was explored (thus generating forms that would change over time). Based on a standard matrix and code, the students acted as ‘modifiers’, rather than ‘creators’ (note the similarity between the 3 images; all are based on the repetition of form within a field, and represent deformations at particular points). The choice to ‘modify’, rather than ‘create’ is especially pertinent to Guadalajara, a context in which many students don’t have access to the hardware or software necessary to create new digital forms.
‘Phase[x]’ is a very interesting course. It continues to be presented year after year at the ETH, and has reached quite a sophisticated level. It basically deals with the generation of objects that are deformed over time. The course is divided into phases. In the first phase, each student creates a digital object. In subsequent phases, these objects remain stored in a central database. Each student must modify other objects at subsequent phases (each phase represents a particular ‘deformation’ theme: texture, subtraction, rotation, envelope, data, fractals, lighting, etc.). The same object can be selected by various ‘authors’, but past the first week, nobody is allowed to touch his/her original object. This leads to the continued selection of the most interesting objects: a survival of the fittest that assigns each object a score based on its number of offspring. This is a collaborative activity, where each individual’s contributions are carefully tracked. At each phase, a student selects an object icon from a menu, which leads to a local manipulation of the 3d model that is then returned to the database. In the adjacent images, you may appreciate the evolution of an object between phases 5 and 7.
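The ‘survival of the fittest’ bookkeeping could be sketched as a parent/offspring registry. The actual Phase[x] database is not documented here, so the names and the scoring rule (counting direct offspring) are assumptions of this illustration:

```python
from collections import defaultdict

class PhaseRegistry:
    """Tracks which stored object each new modification descends from;
    an object's score is its number of offspring."""

    def __init__(self):
        self.offspring = defaultdict(list)

    def submit(self, parent_id, child_id, author, phase):
        """Record that an author derived a new object from a stored one."""
        self.offspring[parent_id].append((child_id, author, phase))

    def score(self, object_id):
        return len(self.offspring[object_id])

reg = PhaseRegistry()
reg.submit('obj-A', 'obj-A1', 'student-1', phase=2)  # e.g. the texture phase
reg.submit('obj-A', 'obj-A2', 'student-2', phase=2)
reg.submit('obj-B', 'obj-B1', 'student-3', phase=2)
```

Since every submission names its author and phase, each individual’s contributions stay carefully tracked even as objects accumulate many parents and children.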
In ‘Fake.Space’, we are presented with an interesting way to link personal texts (experiences, poems, etc.). A system of interconnected nodes is developed, in which individual contributions are accessed and connected in various ways. In this particular course, each student has created a digital version of his/her own home (a well-known physical space that provides personal associations and memories). In the end, all these are combined within a mega narrative structure produced by multiple authors. This representation of personal interpretations takes into account the idea (already common since the end of the 19th century) that space (a central aspect of architectural discourse) is always subjective and with constantly changing meanings (the meaning of space changes with every change in context). ‘Space’ is not an absolute term. Rather, it is based on multiple mental constructions, valid in diverse contexts. Due to this, generalizations regarding spaces are not as important or interesting as personal observations within a particular context. In Fake.Space, this ‘contextual’ idea is applied by combining new and existing narratives (each narrative is based on the context of previous ones). One may connect in any manner, as long as the connections make sense. Up to 4 new narratives can develop from any existing one (and some don’t continue, since they are not interesting enough for other authors). Narrative threads continue in the form of rings, from a core outward.
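The ring structure described above (narratives growing outward from a core, with at most four continuations from any existing one) can be sketched as a simple node class. These are hypothetical names; the actual Fake.Space implementation is not documented here:

```python
class Narrative:
    """A node in the ring structure: each narrative knows its author, its
    distance from the core, and at most four continuations."""
    MAX_BRANCHES = 4

    def __init__(self, author, text, parent=None):
        self.author = author
        self.text = text
        self.children = []
        # ring number = distance from the core narrative
        self.ring = 0 if parent is None else parent.ring + 1
        if parent is not None:
            if len(parent.children) >= Narrative.MAX_BRANCHES:
                raise ValueError('a narrative allows at most four continuations')
            parent.children.append(self)

core = Narrative('author-0', 'my home')
reply = Narrative('author-1', 'a memory of that street', parent=core)
```

A narrative with no children simply ends, which models the threads that are not interesting enough for other authors to continue.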
Finally, for the moment, ‘Event Spaces’ presents itself as a game in which a network (similar to that found in Fake.Space) combines a series of interconnected events and architectural spaces. Due to the incorporation of programming procedures (events; an ‘if-then’ logic), this is not a simple narrative ring. It is much more like a personal interpretation of physical space. Here architecture and events are inseparable, complementing each other. In the Event Spaces game, students create scenes and also program 2 things: that which will occur within these scenes (events) and the way in which the scenes are accessed or left behind (alternations). The personal interpretation of spaces emphasizes aspects such as light and shadow, public and private, abstract/ concrete, and home/ city. This lends itself to the development of a ZMG-axis as much as to the particular case study developed in the course: a surrealist Villa Savoye (Le Corbusier), where students, applying their individual attitudes, experiences, and images regarding the building, managed to reconsider and transgress modernist ideals of ‘space’. In this case, everyone involved was allowed to add new events and alternations to any scene (however, in order to maintain clarity/ coherence, any proposed modifications had to be accepted by the original creators of the scenes).
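The scene/event/alternation split, with its ‘if-then’ logic, can be sketched roughly as follows (Python stand-ins for the course's actual tools; `Scene` and `step` are my own names, and the Villa Savoye scenes are invented for illustration):

```python
class Scene:
    """An architectural space holding events and conditional exits."""

    def __init__(self, name):
        self.name = name
        self.events = []        # what occurs within the scene
        self.alternations = []  # ordered (condition, next_scene) pairs

def step(scene, state):
    """If-then logic: follow the first alternation whose condition
    holds in the current state; otherwise remain in the scene."""
    for condition, next_scene in scene.alternations:
        if condition(state):
            return next_scene
    return scene

salon = Scene('salon')
roof = Scene('roof garden')
salon.events.append('shadows lengthen across the ramp')
salon.alternations.append((lambda s: s.get('climbed ramp'), roof))
```

Because events and alternations are attached to scenes rather than to a fixed storyline, new authors can extend any scene without rewriting the whole, which is what made the collaborative transgression of the Villa Savoye possible.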
These few ‘Swiss’ courses already begin to reveal particular useful methodologies for the development of a ‘local’ virtual reality:
1. Digital objects/ forms should be simple and, at the same time, able to represent proportions, sizes, rhythms, standards found in the city’s physical infrastructure (simple in form; complex in meaning).
2. Apart from visual forms (which can be employed to create associations to the physical realm), one can employ the following to represent additional data: temporal changes (form, opacity, color, brightness, texture type and scale), sounds, and additional copies of the axis in other layers.
3. Collaboration is important. With appropriate credit provided to each contributor, it allows for everything that is already becoming commonplace in design practice: it eliminates the cheating and lies required in a world of ego-driven individual authors, takes into account ‘common’ types of creativity (accidents, inspiration, effects), allows for faster production, and propels the evolution of forms.
4. In the same way that a complex space (such as a city) cannot be perceived completely from any single point of view, it is important to connect hundreds of individual mental impressions. Personal narratives are as important as architectural representations of an urban space. By connecting them in a cumulative manner, one generates a filter that reveals what involves the entire community of collaborators.
5. Besides allowing for the personalization of space (via the incorporation of personal ideas and expressions), it is important to include personal interpretations of the physical space (scenes and events).
In the end, these academic experiences help us establish rules of our own, although much remains to be defined, even at a basic level:
· What shape must this ZMG-axis take? Some of this already has been established. It is an axis that passes through the most representative zones of Guadalajara. It could consist of various layers (3 to 5, perhaps), each providing access to different data. One of the levels would be linked to a representation of physical reality. But, what of the other levels? What do they contain/ represent, particularly? How precisely do they represent?
· It would be interesting to allow users to deform objects. Would only locals be allowed to contribute in such a way? We would be able to represent not only a local urbanism to the global community, but also the manner in which we, locally, approach ‘manipulation’. Of course, this would need to be carefully orchestrated. We must decide, first of all, on the level of coherence to be maintained (and how to do so). As described earlier, the maintenance of originals would seem to be an appropriate decision. However, we must still decide: How to deform? What to deform? Why deform?
· I would think we agree on the fact that it is important to present the personal visions and experiences of Guadalajara’s inhabitants, as part of the total database (or, maybe not; what do you think?). The presentation of personal narratives, linked to a narrative context, allows for the development/ appreciation of commonalities. And it is not necessary for less-popular ideas to perish (however, then, how do fading ideas become reinvigorated and reintegrated?). If these are to be based on progressive development from ‘original’ narratives, how do these ‘originals’ become determined?
· Apart from concerning ourselves with the representation of formal, physical space, and the personal data of its inhabitants, we should consider the personal uses or perceptions of the physical space. Personal interpretations of space certainly add a richness that can improve upon already ‘major’ spaces, as well as bring ‘minor’ interstitial spaces to the forefront (as an example of this, review document-002, in which the dark corners of Orange County are exposed). Would this be presented on a secondary layer? How would it connect to other layers?