THE
GRID
Acknowledgments

Development of THE GRID would not have progressed without the support of Art Lubetz, Josh Bard, Dale Clifford, Kai Gutschow, Johnathan Kline, and Ramesh Krishnamurti. Also, many thanks go to Sam Sanders, Adam Lans, Mike Jeffers, Ben Finch, Yeliz Karadayi, Claire He, Paulina Reyes, Alex Fischer, and Emerson Stoldt. Last but not at all least, the support of my parents was invaluable.
Thesis Website - the--grid.tumblr.com
Secondary Thesis Website - whatisthegrid.tumblr.com
CMU SoArch Thesis Website - www.andrew.cmu.edu/course/48-509/
CMU SoArch Home - www.cmu.edu/architecture/
Contact: Yuriy Sountsov - vrigorgana@gmail.com - Find me on LinkedIn!
Revision 10 - 5/8/2014
THE GRID - TABLE OF CONTENTS

Fig. 0.1 QR code for the thesis website.
Fig. 0.2 QR code for the tutorials.

Acknowledgments and Contacts ... 1
Table of Contents ... 2
Part One: THE GRID Compiles ... 6
Part Two: THE GRID Executes ... 74
Part Three: THE GRID Terminates ... 144
REAL TIME 3D VISUALIZATION: PART ONE - TABLE OF CONTENTS

Table of Contents ... 6
Introduction ... 7
Interest and Architecture Moving Forward ... 7
Advisors and Primary Contacts ... 9
Project Brief ... 12
Project Methods and Timeline ... 17
Research ... 21
Precedents ... 21
Literary Research ... 25
Interviews and Reviews ... 37
Software Research ... 43
Hardware Research ... 55
Deliverables ... 57
Applications ... 57
Moving Forward - Software Package ... 61
Moving Forward - Benefits and Death ... 63
Moving Forward - Imagination and Experience ... 69
INTRODUCTION

INTEREST AND ARCHITECTURE MOVING FORWARD

I, Yuriy Sountsov, was interested in this project because I had the opportunity to give something to the field of architecture that it has struggled to obtain. During my last year at Carnegie Mellon University I had the time, resources, and commitment necessary to put forth a complete, developed, and forward-thinking project that others could take and use in their lives as designers and practitioners of architectural theory and thought. In the five years that I spent studying architecture at Carnegie Mellon University...I had seen the future. And it was a strange future, indeed. The world, reader, was on the brink of new and terrifying possibilities. But what was made available in my education was severely lacking. Architects spend too long learning tools that are obsolete by the time they find ways to teach those tools to new architects. What if the world could see inside the mind of the architect? What if the architect's ideas did not have to travel a maze before becoming visible? Architects are ready to learn. One of the major aspects of an architectural thinker is that they are open to new ideas, new societies of thought. Over the centuries, it has taken the radical thinking of the likes of Brunelleschi, Gaudi, and Candela to advance the field of architecture in great leaps and bounds - not because they created things that had never been seen before, but because they knew what was available and created what could be possible. The digital world is only the latest such untapped arena. It has been growing exponentially for decades, and the time is nigh for architects to seize the tools that await them...on THE GRID.

Fig. 1.1 The eye provides the most powerful sense humans have: vision. Architecture is often a primarily visual profession - while many architects argue that the tactile and auditory aspects of architecture are also very important, the experience always comes back to the appearance of a building. It should therefore be of paramount importance to architects how they communicate the visuality of their designs, yet one of the most powerful tools in an architect's arsenal, the computer, remains largely untapped.

Fig. 1.2 Brunelleschi's dome, a single combination of previously disparate concepts that allowed architecture to take a great leap forward.
ADVISORS AND PRIMARY CONTACTS

Yuriy Sountsov - Yuriy Sountsov is a fifth year architecture student at Carnegie Mellon University. He is dissatisfied with the digital backwardness of the program he has been exposed to and wonders sometimes whether architects have become so desensitized to the creative world around them that they think they are on the cutting edge when in fact they are on the cutting block. He has experience with various digital design software and various video game engines, and he has seen many films and explored film technology. He sees a problem in architectural practice and wishes to contribute his time and energy for free to fix it.

Arthur Lubetz - Arthur Lubetz is an Adjunct Professor in the School of Architecture. He brings a theoretical mindset, a creative framework, and a rigorous approach. He is also the fall semester instructor. I have not collaborated with Arthur before, though he once taught a parallel studio. One of Arthur's key driving principles is the inclusion of the body in architecture. This relates closely to my thesis.

Fig. 1.3 My Fall 2010 studio project that Art Lubetz critiqued and reviewed.

Dale Clifford - Dale Clifford is an Assistant Professor in the School of Architecture. He has a significant background in finding simple solutions to complex problems using media not native to the problem. I have had Dale in two previous classes, Materials and Assembly and BioLogic, both of which involved combining disparate systems of assembly to achieve a goal not easily reached, or not reachable at all, by any constituent system. Dale may also provide many connections into digital fabrication practices.

Joshua Bard - Joshua Bard is an Assistant Professor in the School of Architecture. He should contribute some digital and media expertise. He was the spring semester instructor. Joshua co-taught a fall course, Parametric Modeling (the other instructor being Ramesh Krishnamurti), that focuses on integrating Rhinoceros with another software, Grasshopper, which is built inside Rhinoceros as a plugin. Joshua may help with adapting other software.

Ramesh Krishnamurti - Ramesh Krishnamurti is a Professor in the School of Architecture. He should contextualize my thesis due to his background studying computer visualization and vision. He taught a course I took, Parametric Modeling. I have worked as a Teaching Assistant with him for the class Descriptive Geometry for a few years. He is also a great thinker - he may help me work out the nature of my thesis and any kinks it might have.

Fig. 1.4 Samples of work made in Materials and Assembly (MnA), BioLogic, and Parametric Modeling. Top to bottom: The MnA enclosure made with zip ties; A responsive wall using nitinol; Parametrically defined surfaces.

Varvara Toulkeridou - Varvara Toulkeridou is a graduate student in the School of Architecture. I have worked with her while being a Teaching Assistant for Descriptive Geometry under Ramesh. As she has a background and knowledge similar to Ramesh's, she may be another useful source of advice and critique. She is also currently a Teaching Assistant in the Parametric Modeling course that I am taking, making her available weekly should I have specific questions to ask her.

Kai Gutschow - Kai Gutschow is an Associate Professor in the School of Architecture and was the fall and spring thesis coordinator. He developed the program as it ran, and managed all of the students' time and projects. He coordinated with Johnathan Kline and Mary-Lou Arscott, the Associate Head, to prepare the final presentation location at the Miller Gallery.

Johnathan Kline - Johnathan Kline helped Kai during the spring. He established a more relaxed schedule for the spring semester compared to the fall. He also organized some of the meetings and reviews the thesis students participated in early in the spring semester.
PROJECT BRIEF

The architectural render has long been the pinnacle of drawn design - a constructed image that shows the viewer an idealized view of an architectural project from a specific location within the project at a specific time of day. Traditionally, the architect's primary tool for image-making was the drafting board. Sometime in the last few decades architects adopted the computer to serve the same role yet advance it in many ways, making the digital render an evolution over what was possible with drafting. Yet, despite apparently approaching a visual quality near that of human sight, the digital render has failed to use the full power of the computer. The digital render took a horse cart and made it into an automobile but failed to then also make a van, a truck, or even a race car.

Fig. 1.5 The complete toolset in Rhinoceros for animations.

The allure of a digital world has fascinated people ever since computers were able to create early vector and later raster graphics. The idea has been explored in such films as Tron (1982) and The Matrix (1999) and more recently Avatar (2009), in which over half of the film was photorealistic computer effects, as well as in hundreds of student and collegiate art projects. It has led to the development of hardware to augment the human frame, extending what the human mind is limited to by the body. Digitally fabricated films have gradually replaced hand-drawn films and have even entered the mainstream as a respected category of film. Architectural designers have tapped this field, but not as fully as they could have.

Fig. 1.6 Diagram created by a developer of Brigade 3, a cutting edge path tracer made by OTOY, the same people behind Octane. It posits that, after a certain amount of geometric detail, ray tracing always beats rasterized meshes.

Another way the digital world has entered the social consciousness is through video games. While not all video games involve a 3D virtual environment, the ones that do often aim for a highly photorealistic portrayal of a digital environment. The tools video game designers use are often made specifically to develop virtual environments quickly. Students have often tried to use such tools in their projects, and though they tended to succeed, architectural firms have rarely followed suit. It is true that video game designers create objects that are meant for mass production, and film companies make objects meant for mass exposure. This kind of thinking dodges the aim of my thesis, though, because I am not proposing architecture become video game-like or film-like. I am proposing it use the tools they use to maximize digital communication.

Fig. 1.7 Three approaches to my thesis. Top to bottom: taking a render, creating many renders from it, then showing them together as an animated sequence under the control of the viewer, faster than just a series of renders; The render and the model are combined into a visual system whereby the user can explore the model in a virtual world, allowing her or him to share the model with anyone; With a real time render the concept of presence comes into play, since a moving realistic image allows the viewer to inhabit the image.

++++++++++

The thesis is a field produced by two axes. The vertical axis is that of architectural image-making: how have designers evolved their tools to match current technological advances? The horizontal axis is that of digital interfaces and interaction: more and more, society is finding ways to interconnect with itself - such interaction in architecture, a field entirely involved in the business of being around others, seems largely absent or unused.

Fig. 1.8 Is there a possibility here?

The first axis, visualization: while many designers in the field have advanced the static render into something more dynamic, making videos or flythroughs or virtual habitats, more often than not these cases were one-time gimmicks and have not become established as a versatile aspect of architectural design.

The second axis, interaction: the concept of digital interaction has often been explored by artists trying to cope with the digital frontier, yet the possibility of delivering an architectural project with extra-sensory exposure does not seem to have gained traction among architectural designers, even though technology exists to allow interaction beyond that which is seen or heard.

The project, therefore, is to explore and define the extent of such efforts in both directions; identify what was tried, what failed, and how those attempts could be improved; identify the best candidates (by evolving criteria as the project develops) for a concentrated push into versatility; and produce a working example of the next evolution of drafting.

Fig. 1.9 THE GRID. Neither interaction nor visualization alone will achieve any greatness; it is through the collaboration of the two axes that a far greater advancement can evolve.

The primary deliverable will be a software package which parallels or replaces the point in design when a designer of architecture would make a static render and, instead of producing a mere digital render, would create an interactive simulation serving as proof of experience, much like an architectural model is a proof of assembly. A distinction has to be made between a pre-rendered animation and a real time interactive environment. While pre-rendered animation is a side-effect of this under-utilized function of computers, it is absolutely a rut of possibility. It is a linear evolution of the digital render - why stop there when the render can evolve planarly?

Fig. 1.10 An example of a virtual environment that can be explored. It is both dynamic and interactive - it goes beyond what a set of renders could have done and also gives the user something a render could never have - a sense of presence in the project.

++++++++++

A breakdown of the thesis into one sentence, three short sentences, and a short paragraph is a useful tool for understanding it:

1: To Seek a Means and the Benefits of a System to Interact in Rendered Real Time With Digital Models.

3: Such a system would provide architects and clients a preview of the visual and aural aspects of a building in their entirety before the building is built. Much like how a physical model is a proof of assembly, this would be a proof of experience. So what?

9: Architects traditionally make analog products - visual stimuli that mimic the rays of light that true sight gives. For presentations (renders) and data analysis (orthographics), these products are nearly always static images. Yet much of architectural design requires the input of a user's movement to activate. No static image will ever describe to the designer the experience of natural movement within a project. Without an interactive experience to iterate from, the final, built experience cannot be prototyped. Interpreting a static image requires a skill called mental rotation that is learned through studies of descriptive geometry and long exposure to architectural orthographics and CAD. Mental rotation is a skill not every client has and not every architect develops fully. Without this skill, static images become severely lacking, because too much of the design process relies on interpreting these images with the aim of improving the design. Opportunities exist to replace or complement static images with real time renders that closely resemble the built design both experientially and conceptually, which would allow a more in-depth design pipeline.
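The distinction between the two products can be put in programmatic terms. Below is a minimal sketch of the contrast in Python; the render() stub and the scripted input queue are hypothetical stand-ins of my own for a real engine and a live user, not part of any deliverable:

```python
def render(camera_x):
    # Stand-in for drawing one frame of the model from a viewpoint.
    print(f"frame drawn from x = {camera_x:.1f}")

# Pre-rendered animation: the camera path is fixed when the frames are
# made; every viewer sees the same linear sequence and cannot leave it.
keyframes = [0.0, 1.0, 2.0, 3.0]
for x in keyframes:
    render(x)

# Real time interactive environment: each frame is generated from the
# viewer's current state, so the path is decided at the moment of viewing.
# A scripted queue stands in for live keyboard input here.
scripted_input = ["forward", "forward", "back", "forward"]
x = 0.0
for key in scripted_input:        # a real engine would poll input each frame
    x += 1.0 if key == "forward" else -1.0
    render(x)                     # this frame did not exist until it was asked for
```

The point of the second loop is that the frames do not exist until the occupant asks for them - the camera path is authored by the viewer, not the animator.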
PROJECT METHODS AND TIMELINE

Research - the first step of the thesis was to generate a foundation of knowledge in the field of visualization, and architectural visualization in particular. The thesis combines several schools of thought - Representation, Automation through Technology, Simulation, Video Gaming, Interfaces, and, naturally, Architecture. Each field contains several informative areas: History, Technology, and Application or Practice. These areas informed what was available in the field as well as dictated possible constraints. For a broad spectrum I expected at least six established literary sources and six other collateral sources (videos, talks, examples of work).

Definition - in the meantime, I continued to refine the grounds of my thesis - the product, the deliverable, is a tool. The means is often more important than the end because the means is inherently repeatable. The research molded the form and function of the thesis and its ultimate deliverable, a visualization tool.

Experimentation and Evaluation - the second step was an exhaustive analysis of existing visualization software (or hardware, where available through CMU) for the purpose of design (NOT for making a final product, but as another step, or a better step, in an iterative process). This involved its own research into what tools architecture firms have used in the past (and documented) for time-based deliverables, and subjectively evaluating those tools based on those deliverables. Following research on the tools practicing architects used, I researched tools students have used, tools artists of various calibers have used, and video game engines. While the time each visualization tool takes to render (from hours per frame to frames per second) is crucial, I also looked for other design features, keeping the root of my thesis in mind - the possibility of the digital real time. Theoretically this research could have come across examples of work, but the focus was on how those were made, not what they were.

Compilation - the two threads of research combined. At this point I planned to have a steel-hard definition of my thesis. There would have been at least two deliverables, one for each body of research. The literary deliverable would have been an opinion piece, drawing from all the sources I compiled, that projected the possibility (which I believe is the case) of what architects could embrace in the field of visualization given the power of computers, and what effect it would have on current design paradigms. This opinion piece would have predicted the possibility of the second deliverable. The software deliverable would be a proof of concept or a redistributable software package (depending on whether the software I ended up choosing was licensed for educational use or distribution). This software package would have supported the opinion in the first deliverable, ultimately proving architects can evolve the render into something that interacts on a level above the visual or tactile. The software package would have addressed the range of interactivity that is missing in architectural delivery. Depending on the software used, there would always have been a way for both the client and the designer to enrich their communication. The software package was, necessarily, an all digital item, as having a video or a screenshot of it defeats the point of interaction.

Beyond - if there were yet more time, I might have developed more deliverables to parallel the two main deliverables in the Compilation step. One would have been documentation on the use of the software package and tool. A certain number of basic tutorials smoothing the learning curve would already have been part of the software deliverable, but, like any software, much of the tool would have been difficult to approach for a new user. If there were time I could have developed detailed explanations of various functions within the software package. Importantly, this would have heavily depended on the nature of the software package. If it were a video game engine editor, the documentation may have grown to dozens of tutorials. If it were a small utility (perhaps an architectural firm has developed one), then there may only have been a small handful.

Timeline (milestones from the two-semester calendar chart):

Sep. 3 - Version 2 of Thesis
Sep. 9 - Version 3 of Thesis, focus on methods
Sep. 16 - Version 4 of Thesis, expand on all sections
Sep. 18 - Version 5 of Thesis, presented as a poster
Oct. 4 - List of deliverables
Oct. 18 - Midsemester break
Oct. 21 - Version 6, review
Nov. 28 - Thanksgiving
Dec. 8 - Review of thesis development
Dec. 13 - Submittal of thesis book
Dec. 16 - Last day of first semester
Jan. 13 - First day of second semester
Jan. 20 - MLK Day, no classes
Mar. 5 - Midsemester thesis review
Mar. 7 - Spring Break starts
Mar. 17 - Spring Break ends
Apr. 10 - No classes for Carnival
Apr. 13 - Carnival ends
Apr. 25 - Final Presentation
May 2 - Last day of classes
May 12 - Thesis due
RESEARCH

PRECEDENTS

Precedents are difficult to find because the bulk of professional architectural animation focuses on pre-rendered scenes. The videos produced by companies that focus on this kind of animation are often flythroughs or disembodied gliding camera views moving through completed designs, either as part of a submission to a competition or after the design was built. The short Wikipedia page on architectural animation mentions how difficult it is to render animations and how rarely firms have access to the hardware or tools to assemble such products. However, it also mentions that, more and more, firms are recognizing that animations are better at conveying the ideas of a project than design diagrams. Otherwise, there seems to be little effort anywhere to document the most effective animations, or even any attempts at real time interaction with animation.

Fig. 2.1 Arch Virtual's web version of one of their projects.

Two companies have begun using game-like software to create virtual versions of architectural projects. Both focus on Unity3D and offer services ranging from training simulations to marketing packages. Both companies have harnessed Unity3D's ability to work cross-platform as well as its ability to efficiently handle a complex scene with pre-computed shadows and materials.

The first company, Real Visual, focuses on a high quality of delivery in simulations, training, marketing, and outsourced design work. They cover work in various multi-national sectors aside from architecture - energy, transport, and defense. This displays flexibility and expandability, and shows how such technology and its applications are quickly burgeoning in the wider world. They work closely with the developers of Unity3D to ensure the software is as cutting edge as possible. If architects could learn from the technical expertise of this company, the field would only be enriched.

Fig. 2.2 Real Visual's logo.
Fig. 2.4 A deliverable from Real Visual on a mobile platform.

The second company, Arch Virtual, focuses more on cutting edge hardware and integrating it with Unity3D. They have worked with the Oculus Rift, a virtual reality headset currently in development, taking projects developed in Unity3D - also configured to work on mobile platforms like those of Real Visual - and setting them up to work with the headset. They also have an ebooklet detailing the steps required to create an architectural project within Unity3D. This booklet is a step in the right direction for the profession, but it is by far not enough: at 65 pages it is only a set of guidelines rather than thorough educational material.

Fig. 2.3 Arch Virtual's logo.
Fig. 2.5 Arch Virtual's Unity3D booklet and their application of the Oculus Rift virtual reality headset.

Autodesk also has software designed for the purpose of accelerating architectural animation. I mention it not because it is an effective precedent for my thesis but because it is exactly the wrong approach - it does not use a human viewpoint, it does not offer a high level of realism in its graphics, and it favors presentation over interaction. This software, Autodesk Showcase, takes models and allows the user to dress them up, applying materials and environments to the scene. It offers various alternate rendering types, like cartoon or sketched, as well as options for sets of materials to be shown by themselves. The workflow is one of setting up renders or animations with a preview viewport and then rendering them, akin to what a full screen Vray would look like. The biggest drawback I perceive in this software is that, despite its effort to offer architects a more intuitive rendering solution, it fails to advance the field. It is an example of stagnation: nothing in it is radically new over what is already possible in AutoCAD, Maya, 3DSMax, or Rhinoceros with Vray. It is a horizontal advancement, and it fails to use advanced rendering methods or new interaction methods, or to take advantage of newer hardware.

Fig. 2.6 Autodesk Showcase screenshots. Clockwise from top left: Regular preview view; Cartoon preview; Different material sets; Publishing, or rendering, an image.
Fig. 2.7 Autodesk Showcase logo.

++++++++++

It is also important to note video game graphics precedents. There is a stigma within commercial culture today that video games and their technology are beneath professionals and their interests. While it is true that the gameplay aspects of video games have little bearing on most professional fields, the technology and simulation aspects behind video games are, by now, entirely applicable in other fields. (There are also such things as GWAPs, games with a purpose: video games designed specifically to be training materials and high-fidelity simulations of real-world scenarios.)

Fig. 2.8 QR code for a demo video of CryEngine.
Fig. 2.9 QR code for a demo video of the Fox Engine.
Fig. 2.10 Super Mario 64, not an example of a contemporary video game.
Fig. 2.11 Crysis 2, an example of a contemporary video game using realistic graphics.

For the purposes of my thesis I will argue that the graphics advancements of video games - among the video games that use cutting edge engines - have, over the past several years, reached such high levels of realism that they contend with professional rendering software in terms of speed, quality, and production value. Modern video game engines generate lighting and shadows dynamically, meaning there is no pre-computation except that which is necessary to place the geometry into the scene. For materiality many games still use simplified shaders, reducing computation and sacrificing some real time effects, but some engines have begun developing real time shader effects, namely refraction and reflection. As for simulation, all video games with a first person perspective already have immersive interaction and exploration, key features for visualization that are lacking in professional software packages.
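The pre-computation point can be illustrated with a toy example. The sketch below is mine, with illustrative names (lambert, lightmap, surfaces) that come from no particular engine; it contrasts an intensity baked once from a fixed light with a diffuse term recomputed every frame for a moving sun:

```python
import math

# Toy contrast between baked and dynamic lighting; names are illustrative.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def lambert(normal, to_light):
    # Diffuse (Lambertian) term: max(0, N . L), both vectors unit length.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

surfaces = {"floor": (0.0, 1.0, 0.0), "wall": (1.0, 0.0, 0.0)}  # name -> unit normal

# Baked: the light is fixed at authoring time, so the intensity is computed
# once and stored - which is what a precomputed lightmap amounts to.
baked_sun = normalize((1.0, 2.0, 0.5))
lightmap = {name: lambert(n, baked_sun) for name, n in surfaces.items()}

# Dynamic: the light moves, so the same term is recomputed every frame.
for frame in range(3):
    angle = 0.5 * frame  # the sun sweeping across the sky
    sun = normalize((math.cos(angle), 1.0 + math.sin(angle), 0.5))
    for name, normal in surfaces.items():
        print(f"frame {frame} {name}: baked={lightmap[name]:.2f} "
              f"dynamic={lambert(normal, sun):.2f}")
```

The baked value never changes once computed; the dynamic value tracks the light, which is why a dynamic engine can show a design under a moving sun without re-rendering anything in advance.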
LITERARY RESEARCH

The literature took up the bulk of the work during the first half of the first semester, after the thesis program got started. The literature review pulled from over 30 sources, more than a third of which provided valuable insight into the context of my thesis. This proved to peers that this is an academic subject and bears worth in the field of architecture. The very shadowy nature of the subject of my thesis is exactly why I am proposing it - to raise awareness of what can be done with modern tools.

I also created a mind map, open to my advisors to flesh out, as I continue to insert data siblings and children. The mind map charts everything in the field of computing that could relate to my thesis - it is an attempt to contextualize my work, to bring it from computer science to a position that is understandable by architects. The semantics of my thesis automatically raise various stigmas in readers or reviewers, so having a way to visually place my thesis among other academic subjects is important. Ideally, any interaction with computers that architects could have should have a spot on this mind map, and right now my thesis occupies only a small portion of it. But one of the points of my thesis is that this should not be so. Interactive visualization can be a powerful ally in developing a design, and by expanding that field architects could learn more powerful, more flexible tools.

Fig. 2.12 Mind Map as of October 21st, 2013. My thesis subject area is in the top left corner.
Fig. 2.13 QR code for the mind map.

What follows is the literature research: a review of several sources that bring up important points for visuality and rendering as they apply to architecture.
Visual Digital Culture: Surface Play and Spectacle in New Media Genres

This book by Andrew Darley explored visuality and spectacle in digital media. I drew parallels in it with architecture, with how early digital modeling was show-driven - digital rendering is often about what a project could be like and not what it is. The greater definition of model, to simulate, comes into context, showing how lacking static renders are. It also showed how video game software could be photorealistic, an important point that I must continually clarify. The illusion and wanting to be fooled, repetition and customization, the sense of occupancy, and a comparison between video games and virtual environments round out the content of the book.

Fig. 2.14 Visual Digital Culture: Surface Play and Spectacle in New Media Genres cover.

Relevant quotes in textual order:

"A key example of such research was that into real-time interactive computer graphics. This came to practical fruition in 1963 in a system called Sketchpad, which allowed a user to draw directly on to a cathode display screen with a 'light-pen' and then to modify or 'tidy-up' the geometrical image possibilities so obtained with a keyboard. Though extremely primitive by today's standards, Sketchpad is viewed as a crucial breakthrough from which have sprung most of the later technical developments in the areas of so-called 'paint' and interactive graphics systems. By the mid-1960s, a similar system involving computer image modification was being used in the design of car bodies - a precursor of current CAD/CAM (Computer Aided Design/Computer Aided Manufacture) systems. And by 1963, computer generated wire-frame animation films - visual simulations of scientific and technical ideas - were being produced using the early vector display technique." - pg. 12

This is significant as a historical precedent on the type of interactive software that my thesis belongs to. Sketchpad, Ivan Sutherland's own thesis, was the grandfather of CAD modeling. While it crucially combined hardware and software, within the realm of modern software and interface systems my thesis does not have to have the same intertwined nature. Ideally my thesis should be able to do everything with a keyboard and mouse; however, exploration into alternative hardware input is possible. The point is to separate the tangential, relatively speaking, development of software like REVIT and AutoCAD from this original thread.

"The desire on the part of scientists to model or simulate physical processes and events in space (and time) was a central impulse in the production of the earliest computer graphics and films. Whilst concurrent with the initiation of applied forms, work was under way on computer produced figurative imagery as a research activity in its own right. Even the work conducted in collaboration with artists had a decided leaning towards more figurative kinds of imagery. At the end of the 1960s experimentation began into the production of algorithms for the production and manipulation of still, line-based figurative images." - pg. 14

The notion that early computer graphics were, in a way, show-driven relates well to how architects do things with technology. Architects often use computers and rendering to show what a project could be like, as opposed to showing what it actually is. The original scientific drive to model, however, encompasses more than just showing the project itself - it also shows what the project could do. Here the greater definition of 'model' applies, in that 'to model' means 'to simulate', where various possibilities enter the game and a static representation becomes lacking.

"The one that came to discursive prominence within computer image research and practice is perhaps the one with which we are all most familiar. Quite simply it turns upon the notion of the proximate or accurate image: the 'realisticness' or resemblance of an image to the phenomenal everyday world that we perceive and experience (partially) through sight. For the majority of those involved with digital imaging at the time, the yardstick of such verisimilitude was photographic and cinematographic imagery." - pg. 17

This is another thing to keep in mind: while my thesis may include video game software, an important benchmark is that I do not sacrifice photorealism. I am mentioning this because one aspect of my thesis is that it takes several steps forward, and very few, if any, back.

"In this case, of course, the set is virtual or latent - itself a simulation created and existing in the program of a computer. Such programs are now able to simulate three dimensional spatial and temporal conditions, natural and artificial lighting conditions and effects, surface textures, the full spectrum of colours, solidity and weight, the movement of objects and, as well, the complete range of movements of a camera within and around their virtual space. When cartoon characters - and, just as important, cartoon tropes such as anthropomorphism - are imaged through this studio simulacrum, then new registers of mimetic imagery are achieved within the cartoon: a consequence of this peculiar crossing or fusing of traditionally distinct forms of film." - pg. 85

A parallel discipline to my thesis is digital film animation. With digital film animation, the software technology is, by necessity, highly configurable and allows total control of a virtual scene. While such control is not applicable to architectural design, because the digital in architecture is merely a step in the development, seeing what is possible in the field will allow me to find an upper bound in software capabilities.

"A technical problem - the concrete possibility of achieving 'photography' by digital means - begins to take over, and to determine the aesthetics of certain modes of contemporary visual culture. Attempts - such as those focused upon here - to imitate and simulate, are at the farthest remove from traditional notions of representation. They displace and demote questions of reference and meaning (or signification) substituting instead a preoccupation with means and the image (the signifier itself) as a site or object of fascination: a kind of collapsing of aesthetic concerns into the search for a solution to a technical problem." - pg. 88

This is the other side of the problem. Attempting to focus too much on the signifier versus the signified may break the relation of the image to the model, or to what it is modeling. The effort to produce a visually realistic image moves too far from the ideal that the task of creating the image started off from - in visual representation that ideal is to show truthfully what the virtual environment looks like, and in architecture and my thesis that ideal is to show a model experientially - through space and time.

"This involves surface or descriptive accuracy: naturalism. At the same time as distinguishing itself as other (alien) in relation to the human characters and the fictional world, the pseudopod must appear as indistinguishable at the level of representation, that is to say in its representational effect. It had to appear to occupy - to be ontologically coextensive with - the same profilmic space as the human actors. This involved the seamless combining of two differently realised sets of realistic imagery: of which one is properly analogical, i.e. photographic, the other seemingly photographic, i.e. digital simulation. Additionally however, it must also integrate, again in a perfectly seamless manner, into the diegetic dimension: the story space. In order for this to occur an exceptional amount of pre-planning had to enter into the carefully orchestrated decoupage that eventually stitches the shots together. Here, finally, surface accuracy is subordinated to the rather different codes of narrative illusionism." - pg. 108

Here the author was analyzing a scene from the film The Abyss (1989) where a computer generated tentacle is made to coexist within the filmic space, with the real characters and setting, and also within the presentational space, where the story as shot has to make room for this element which will be added later in the production of the film. The importance of this is again that the purpose of a render, or of real time interaction, is not the pretty image itself but what the image does - its performative element. The quality and believability of the frame in a film has to kneel to the frame as a narrative element: the tentacle in The Abyss has to make sense as a tentacle first, and as the image of a tentacle later. Likewise in architectural representation, an image of a project has to come after what the image will do, which is a proof of experience.

"The contradiction - ever present in special effects - between knowing that one is being tricked and still submitting to the illusory effect is operative here. Yet, particularly (though certainly not solely) in those scenes involving computer imaging discussed here, the more photographically perfect or convincing the images, the more - paradoxically - does their sutured and suturing aspect seem to recede and their fabricated character come to the fore." - pg. 113

This pertains to the effect of illusion and wanting to be fooled. Sometimes a fabricated image, a computer generated mosaic, becomes too artificial. This is important to note because it is possible that so much effort can be spent on making an architectural image photographically perfect that its photorealism eclipses its narrative - its experiential conduit. Just as there are technological functionality bounds - software exists that can do many, perhaps too many, things in a virtual environment - there are aesthetic bounds: software cannot be so focused on being realistic that the realism gets in the way of the representation.

"It is both the bizarre and impossible nature of that which is represented and its thoroughly analogical character (simulation of the photographic), that fascinates, produces in the viewer a 'double-take' and makes him or her want to see it again, both to wonder at its portrayal and to wonder about 'just how it was done'." - pg. 115

This, on the other hand, produces a lower bound on the aesthetics of the image. It is likewise cautionary to make an image too experiential, too generative of wonder. The combination of seemingly impossible imagery rendered (by computer) with accurate realism produces a kind of inquisitiveness that places the generation of the image itself before what the image represents. The way the image was made becomes more interesting than what the image is about.

"Thus the fact that we can make many identical copies (prints) of a particular film, means not only that more people get to see it but also that as a work it is thereby made less precious." - pg. 125

This passage refers to Walter Benjamin's theories on mechanical reproduction. It is always a good idea to keep in mind that quantity, even if it maintains quality, does not necessarily increase the popularity of a work. Since a part of my thesis is to explore whether architectural simulations can become portable, it will be important to see what effects such mobile qualities have on architectural design.

"today it is not what is repeated between given tokens of a series that counts for spectators, so much as the increasingly minimal differences in the way this is achieved. Burgeoning 'replication', the repetition at the heart of commodity culture, forestalls the threat of saturation and exhaustion by nurturing a homeopathic-like principle of formal variation (i.e. based on infinitesimal modifications and changes)." - pg. 127

The issue of repetition versus customization further explores what architectural representation could become in a mass mobile environment. This particular passage refers to the phenomenon of television shows, comic strips, and serial novels where only small changes are made between versions - only enough so that a new installment is different from the last. Theoretically the proliferation of architectural representation into the mainstream could go this way: an architectural firm produces one or a few interactive architectural simulations, and a client modifies them only slightly. Perhaps that is an unideal future.

"Even fields such as computer games and simulation rides, which are the most recent and appear to depend more on the novelty of the technology itself, are - as we shall see in coming pages - just as much subject to this aesthetic of repetition. They may involve new formal elements - the much vaunted 'interactivity' and 'immersion', for example - and these may well affect their individual aesthetics. However, just as much as the more established forms, they also seem destined to operate within the logic of self-referentiality and the preponderance of the 'depthless image'. All are manifestations of an altogether new dimension of formal concerns that established itself within the mass cultural domain of the late twentieth century, helping to constitute both cultural forms and practices of production and aesthetic sensibilities." - pg. 129

Here the author combined the two threads of thought - repetition of the image in culture and a focus on the image itself over the substance of the image. The idea is that as an image spreads, it does not necessarily mean that people see it more, or see through it more. The proliferation of an image may shift the audience's concern towards the formal quality of the image; put another way, more people see less. Being able to have a large audience for an image may be a large factor - in an architectural firm and with a client, only a small number of people see the image and can control it. Once such limitations are lifted, if they can be lifted, the image may be diluted even as it gains other properties, like interactivity.

"Living in cultures in which we are surrounded on all sides by moving images, we are now particularly accustomed to the kind of montage that strives to hide its artifice." - pg. 131

Architecture is, independent of what some architects think, part of the global digital stage and as such has to compete with other visual fields. The more graphically advanced the rest of our culture becomes, the more certain qualities will be expected of the visual elements of architecture. This means that fleshing out this aspect of architecture, or at least exploring it in my thesis, will also require me to know what is expected of real time interaction as well as what it can do.

"The sheer sense of presence, however, conveyed in the best of them - and here Quake is a key example - compensates for such defeats. In other words, it is the experience of vicarious kinaesthesia itself that counts here: the impression of controlling events that are taking place in the present." - pg. 157

Here the author brings in the experience of video games, saying how, in the interaction with the game, the fact that the player may sometimes need to repeat areas is overshadowed by the fundamental fact that the player is actually controlling something in the virtual realm. This is an aspect of real time interactive simulations that needs to be put in the forefront, because it simply does not exist in renders or even CAD programs. There is no sense of time in Revit or Sketchup, and watching an animation gives the user no control. While substance is key inside the image, presence is important outside it.

"interactive representation involves a mode of representing that is 'inside the time of the situation being described'. That is to say, time is represented as viewed from a first person perspective - literally as if one were really there, thereby producing the impression that things are continually open to any possibility... Indeed, it becomes difficult to untangle space from time in this respect so intimate is their relation. We might say that the illusion of experiencing events as if they are taking place in present time in computer games is largely dependent upon visual simulation." - pg. 158

Here the author points out that the mere introduction of time to a virtual environment already creates the impression of interaction, by the simple virtue of providing limitless possibilities of 'what could happen next.' In video games, the visual alone can do this. Likewise in my thesis, establishing this effect by the photorealistic representation of architectural models could already be a huge step towards interaction.

"given the increasing surface realism of the moving imagery, the sophistication of real-time graphic representation and the use of first-person perspective, the impression of actual occupancy and agency within the space of the game's fictional world can be extremely convincing." - pg. 163

Another aspect of video games that can be transferred to interactive architectural simulation is the sense of occupancy. Through a combination of realistic imagery, realistic depth (material effects and believability of presence), and a simulation of what it would be like as if one were there, occupancy can be achieved. Since occupancy is a major aspect of experience, such a conceptual framework is important for the field of my thesis.

"However, such 'active participation' should not be confused with increased semantic engagement. On the contrary, the kinds of mental processes that games solicit are largely instrumental and/or reactive in character. As I suggest above, the space for reading or meaning-making in the traditional sense is radically reduced in computer games and simulation rides." - pg. 164

Here the author steps back and concedes that the actual interaction with a video game is not the same thing as interaction with the virtual environment. The user is still fundamentally looking at an image. This is also very important to keep in mind, because my thesis does not seek to redefine how architecture is made - it seeks to augment or improve only the computer representation aspect of architecture.

Generating Three-dimensional Building Models From Two-dimensional Architectural Plans

Fig. 2.15 Generating Three-dimensional Building Models From Two-dimensional Architectural Plans cover.

The only relevant quote:

"The building model used to develop and demonstrate the system was produced by iteratively applying "clean-up" algorithms and user interaction to convert a grossly inadequate 3D AutoCAD wire-frame model of Soda Hall (then in the design stages) into a complete polyhedral model with correct face intersections and orientations. The Berkeley UniGrafix format was used to describe the geometry of the building, because of its compatibility with the modeling and rendering tools available within the group. The interior of the building, including furniture and light fixtures, was modeled by hand, through instancing of 3D models of those objects. In all, the creation of the detailed Soda Hall model required two person-years of effort. It became clear that better modeling systems were needed." - pg. 3
While the research report, by Rick Lewis, was written in 1996, before significant advances in CAD had taken root among the designing audience, the general gist of what this quote refers to remains true today. With my thesis this argument would pertain more to having to customize every render for a flawless end result (presumably). The notion that accurately modeling an entire building in a computer is manually labor intensive remains true - partly because many designs are so unique, there are no tools for efficiently spreading geometric complexity within a model without resorting to grids or simple patterns. With rendering and interaction, the manual difficulty lies in preparing a render scene and then setting lighting and material properties, all of which take a large percentage of the total time it takes to develop a render. Perhaps there is a way to develop a pipeline where materials and lighting can be established more easily, without thinking of them as a necessary preparation for each render scene.
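As a sketch of what such a pipeline could look like - purely hypothetical, mine rather than anything drawn from Lewis's report or a specific tool - materials might be keyed once per project to layer names, so that any scene resolves them without per-render setup:

```python
# Hypothetical material pipeline: materials are defined once for the whole
# project and keyed to layer names, so every scene - render or real time -
# inherits them automatically instead of being dressed up individually.

MATERIALS = {
    "walls":   {"color": (0.90, 0.90, 0.85), "roughness": 0.80},
    "glazing": {"color": (0.60, 0.80, 0.90), "roughness": 0.05},
    "floors":  {"color": (0.50, 0.40, 0.30), "roughness": 0.60},
}
DEFAULT = {"color": (0.5, 0.5, 0.5), "roughness": 0.5}

def resolve(objects):
    """Attach a material to each object based on its layer name alone."""
    return {name: MATERIALS.get(layer, DEFAULT) for name, layer in objects.items()}

# Any scene, at any design stage, resolves the same materials with no setup.
scene = {"north wall": "walls", "curtain wall": "glazing", "lobby slab": "floors"}
for obj, material in resolve(scene).items():
    print(obj, "->", material)
```

The design choice is simply to make material assignment a property of the model's organization rather than of any one render scene, which is the repetitive preparation the commentary above objects to.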
Visuality for Architects: Architectural Creativity and Modern Theories of Perception and Imagination

This book by Branko Mitrović introduced an idea to my thesis: mental rotation, the ability to rotate a 2D representation in the mind. It bashed architects for blindly relying on narrative as the prime way of communicating projects and designs. It proposes that architecture evolve into a visual profession. Generally, it noted a behavior in architects to avoid or ignore architecture's purely visual aspects. The idea of ideological bias versus the opportunity to see architecture visually is critical to expanding the use of interactive media in architecture, yet architects first need to open their minds to the notion that architecture is not narrative by default.

Fig. 2.16 Visuality for Architects: Architectural Creativity and Modern Theories of Perception and Imagination cover.

Relevant quotes in textual order:

"What psychologists describe as mental rotation is the same kind of task that is performed by computers in modern architectural practice." - pg. 6

This book argued that what CAD does is not fundamentally different from what a human brain does when it views a plan or a perspectival image - though the separation of conceptual thinking from visual thinking becomes easier in a computer. Thus relying on creating static images just so the brain can be forced to have visual and conceptual thinking near each other, forcing connections, is a fairly outdated concept. The process can be separated: CAD can give the full visual stimulus that real experience provides with a real building, and the brain can be fully used for conceptual thinking.
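The pg. 6 quote can be made literal with a few lines of code: the rotation a reader of a static drawing performs in their head is, for the computer, one matrix applied to every point. A minimal sketch, with an illustrative 10 x 6 plan of my own:

```python
import math

# The "mental rotation" a reader performs on a static drawing is, for the
# computer, one rotation matrix applied to every point of the model.

def rotate_z(point, angle):
    """Rotate a 3D point about the vertical (z) axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

plan_corners = [(0, 0, 0), (10, 0, 0), (10, 6, 0), (0, 6, 0)]  # a 10 x 6 room
for corner in plan_corners:
    turned = rotate_z(corner, math.radians(30))
    print(corner, "->", tuple(round(c, 2) for c in turned))
```

The computer performs this for every vertex at every frame, which is exactly the task the book says the viewer of a static orthographic must otherwise simulate mentally.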
"The same tendency to base design on stories that can be told about architectural works is common in contemporary architectural practice as well. Here it is strengthened by the fact that in order to get commissions, architects often have to explain in words their design decisions to their clients. Sometimes they (are expected to) invent stories about what the building represents." - pg. 11

Another key theme the book brought up was the stubborn reliance of contemporary architects on narrative - having to describe a building's 'concept' in words, or thinking that words are the only way to do so. Why rely on speaking about an almost inherently visual idea (granted, tactility and sound matter) when you can communicate it visually?

"In fact, much bigger issues are at stake. Architecture does not live in isolation from its intellectual and cultural environment. If antivisual biases are going to be credible among architects, architectural academics, or theorists, this can happen only if such views are based on and derive from assumptions that are credible in the society in which they live." - pg. 13

Socially, one can argue that the visual has grown faster and faster in developed society. Take the internet - experienced almost exclusively visually: computer screens, smartphones, tablets, even printouts of web content are visual objects. Film, video games, advertising - it is all visual. Perhaps even literature is falling behind, financially, visual storytelling through film, TV, Netflix, and so on. Therefore architecture must develop, somehow paradoxically, into a visual profession. That is nearly at the core of my thesis.

"Applied to architecture, this means that there are no visual properties of architectural works that are not ultimately derived from the ideas we associate with these works. Visual perception of buildings is merely a result of the knowledge and beliefs we already have about them." - pg. 14

A bit of theory here. The more the brain is forced to draw from its reservoir of constructible memories when exposed to a single image of a piece of architecture, the more the brain will generalize to the archetype. The brain, when it has to make up information, will just use what it already knows. Thus it is in fact detrimental to the review or design of architecture if people view it in a reduced manner, that is, in a manner far from the actual experience of architecture. I propose that a greater reliance on interactive visualizations, being that those are closer to said experience, would promote a truer review of architecture.

"If we are going to talk about the aesthetic qualities of architectural works, we need to be aware that these works are going to be thought about not only as perceived from a single point in space but as three-dimensional objects. We perceive a building from one side, from another, from inside, we observe the composition of spaces, and after some time we have formed a comprehensive understanding of the building's three-dimensionality. Or, we don't have to be dealing with a built building at all; we can grasp its spatial qualities by studying its plans, sections, and elevations. By analogy with 3-D computer modeling, one could say that we have formulated a 3-D mental model of the building in our minds" - pg. 71-72

Again with mental rotation. Much of architectural experience revolves around understanding the visual composition and relationships of a design or building. This is possible from a human vantage point with a built building, but with design products, the observer has to effectively rebuild the model inside their mind. It would only accelerate the understanding if the observer could interpret something only a step away from actual experience: an interactive render.

"In a situation where it is recognized that architectural works can be perceived, imagined, thought about, mentally rotated, and that their geometries can be studied, their colors discussed, and so on, independently of any concepts or meanings we associate with these works, only an ideologically biased professor can insist on evaluating the work exclusively on the basis of the story that can be told about it." - pg. 85

This pertains to the general issue where architects are not grasping the full breadth of the tools that are available to them. The somewhat hesitant tendency of architectural reviews to generalize renders to drawings, paired with a reliance on printed material, is stifling architectural design flexibility. Thus, in an effort to justify their views (ironic), review boards pretend that they are in fact not interested in the visual and are looking for (inescapable irony) a more narrative description of the project. The idea of ideological bias versus the opportunity to see architecture visually is critical to expanding the use of interactive media in architecture.
One Approach for Creation of Images and Video for a Multiview Autostereoscopic 3D Display

This research report by Emiliyan Petkov outlines a method for creating images for 3D screens, useful to know for my thesis.

Fig. 2.17 One Approach for Creation of Images and Video for a Multiview Autostereoscopic 3D Display cover.

A relevant quote:

"A matter of interest is exploring the possibility for developing interactive applications for 3D displays. This kind of applications gives users the opportunity to interact with objects in a computer simulated world in real time. Thus the time for remaining in this virtual environment is not limited and decisions what to do and where to go are made by the user. These applications will offer an opportunity for creation of virtual worlds through the multiview autostereoscopic 3D displays." - pg. 322

Somewhat tangential: part of my thesis is exploring possible hardware for interaction, one example of which would be 3D displays, monitors, or screens. A strong aspect of that would be not just review of the design using this hardware, but also creation, potentially collaborative.
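The core idea behind content for such displays can be sketched briefly. This is not Petkov's exact procedure - the view count and spacing below are illustrative assumptions of mine, not figures from the report - but multiview autostereoscopic images come from rendering the same scene from several horizontally offset viewpoints, one per view the display interleaves:

```python
# Sketch of multiview content generation: one scene, several horizontally
# offset viewpoints. View count and spacing are illustrative assumptions.

NUM_VIEWS = 8        # e.g. an eight-view autostereoscopic panel
SPACING = 0.065      # metres between neighbouring viewpoints (roughly eye distance)

def camera_positions(center_x, num_views=NUM_VIEWS, spacing=SPACING):
    """Viewpoints spread symmetrically around the central camera position."""
    half = (num_views - 1) / 2.0
    return [center_x + (i - half) * spacing for i in range(num_views)]

for i, x in enumerate(camera_positions(center_x=0.0)):
    # A real pipeline would render a full image of the model from each
    # position; here we only list the viewpoints that would be rendered.
    print(f"view {i}: camera at x = {x:+.4f} m")
```

An interactive application for such a display, as the quote envisions, would simply regenerate all of these views every frame from the user's current position.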
Touchable 3D Video System

This research report by Jongeun Cha, Mohamad Eid, and Abdulmotaleb El Saddik introduces the idea of presence - the immersive feeling of being inside a virtual environment.

Fig. 2.18 Touchable 3D Video System cover.

Relevant quotes in textual order:

"Recent advances in multimedia contents generation and distribution have led to the creation and widespread deployment of more realistic and immersive display technologies. A central theme of these advances is the eagerness of consumers to experience engrossing contents capable of blurring the boundaries between the synthetic contents and reality; they actively seek an engaging feeling of 'being there,' usually referred to as presence." - pg. 29:2

In the entertainment industry, displays are getting larger and larger, with more accurate color rendition and higher contrast ratios. This is driven by consumers: people buy what they like more, and natural selection kills off the TVs in the population set that are not selected. Part of that drive is, naturally, the need to be entertained, but another part of it is that the more powerful the display, the more data it can deliver. This can and should be harnessed by architects.

"When viewers have the ability to naturally interact with an environment, or are able to affect and be affected by environmental stimuli, they tend to become more immersed and engaged in that environment." - pg. 29:2

There is an argument for critical distance - maintaining a distance from a design being reviewed so that the design does not influence the review itself. However, architecture cannot be reduced to a set of images, as it often is in design reviews. When a film production team looks at a cut of a film, they do so in a dark room - much like the audience would view the film when it comes out. Likewise in architecture, being able to experience a design while it is being made, the way it would be experienced by its users after it is built, seems like a useful ability to have.
Computer Games and Scientific Visualization
This article by Theresa-Marie Rhyne examines the use and impact of video game technology in scientific visualization.
Fig. 2.19 Computer Games and Scientific Visualization cover.
Relevant quotes in textual order:
“The market dynamics of computer game applications are thus influencing computer architectures historically associated with scientific visualization.” - pg. 42
“Games now represent the leading force in the market for interactive consumer graphics. Not surprisingly, the graphics hardware vendors tend to anticipate the needs of game developers first, expecting scientific visualization requirements to be addressed in the process.” - pg. 43
While scientific visualization does not sound like it relates to architectural visualization, one can make poignant comparisons. Both are data-driven. Both are group-reviewed. Both develop diagrammatic visual products. Both require iterative or prototype design stages. Both are model-based, forgoing an exhaustive translation of the entire product, instead focusing on a simplified representation. If scientific visualization can learn from video games, architecture can too.
Here is an interesting observation - hardware development occurs for the lucrative business, video games, first, and for the less popular data analysis business second, even though the data analysis business should have closer contact with hardware development, as it has more specific requirements for hardware. This is to point out that architecture should still piggy-back on something else when it comes to visualization and interaction tools - until, or if ever, it is a powerful business, tools will not be made for it. It will have to find them itself.
“Shortcuts in the rendering software to produce a more engaging experience for the user might work well in a game, but geologists using the same digital terrain data in a visual simulation of fault structures are unlikely to trust what they’re seeing or be able to apply it on a real-life scientific mission.” - pg. 42
A point against interactive visualization - sometimes simplification of data renders it too unreliable. This holds in a purely scientific framework. However, in architecture the simplification happens from an impossible ideal - no architectural render has ever become reality. Ever. Thus simplifying from a pretty picture to a less pretty picture, but gaining real time interaction, works in architecture. At the same time, there are still moments in design where data is crucial, but in those moments making the design interactive in real time gains little for the designer. At that point one has to be a little professional about when to use a certain tool and when not to.
34
Component-Based Modeling of Complete Buildings
This research report by Luc Leblanc, Jocelyn Houle, and Pierre Poulin examines another system for automatically generating architecture. While this is not fully near my thesis, it is important to be aware of what else computer technology is capable of that architects have not harnessed yet.
Fig. 2.20 Component-Based Modeling of Complete Buildings cover.
The only relevant quote:
“Shape grammars constitute the state-of-the-art in procedural modeling of building exteriors, and have produced high-quality results. However, even though modeling building interiors and exteriors appears similar, shape grammars have not yet proven to be a good solution for modeling complete buildings. In fact, since their creation, only a small number of grammars, such as the palladian, have been produced for 2D floor plan generation, and better solutions have been provided by optimization techniques. Moreover, despite 10 years of development, shape grammars have seemingly yet to be used to model complete buildings.” - pg. 87
While tools exist to parametrically generate exteriors, or otherwise surfaces, those tools are not being applied to spaces, or are otherwise only being applied in a limited manner. Architects spend too long marginalizing their own trailblazers - this report claims over a decade has been spent on developing procedural shape grammars, yet none of those years yielded a complete procedural building. Is this an unimportant field in architecture? Perhaps, but why has it been in development for so long, if so?
Exploring the Use of Ray Tracing for Future Games
This research report by Heiko Friedrich, Johannes Günther, Andreas Dietrich, Michael Scherbaum, Hans-Peter Seidel, and Philipp Slusallek introduces a software technique called ray tracing and applies it to full virtual scene generation, including shadows, reflection, refraction, caustics and other complex effects. The report proposes that computers are now powerful enough that this is possible at realistic hardware scales.
35
Fig. 2.21 Exploring the Use of Ray Tracing for Future Games cover.
Relevant quotes in textual order:
“Computer games are the single most important force pushing the development of parallel, faster, and more capable hardware.” - pg. 41
One more reason to look to video games for cutting-edge visualization in a field that is almost primarily...visual. Architects can spend all the time they want making window schedules, but at the end of the day the product will be something that is seen.
“Some features of this engine are realistic glass with reflection and refraction, correct mirrors, per-pixel shadows, colored lights, fogging, and Bézier patches with high tessellation. All of these effects are simple to implement with rudimentary ray tracing techniques” - pg. 45
This quote is useful because, on the off chance that I attempt to develop a visualization software, I know that it may not require a high-end graphics engine with hundreds of shaders and visual tricks - it all can be done with one system.
“Because ray tracing computes visibility and simulates lighting on the fly the pre-computed data structures needed for rasterization are unnecessary. Thus dynamic ray tracing would most likely allow for simulation-based games with fully dynamic environments as sketched above, leading to a new level of immersion and game experience.” - pg. 47
Here the technology of ray tracing is advertised on the fact that, since it does not need pre-computation (like having to wait for a render), it would provide the opportunity for immersive interaction. This makes sense, as the faster the experience is accessed from when it was designed, the more responsive the user would be, as the conceptual thread in the mind would simply continue from one medium to another.
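To illustrate the "no pre-computation" point, here is a minimal sketch - my own illustration, not code from the report - of ray-traced visibility answered per ray, with nothing baked ahead of time:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to a sphere, or None if it misses.

    `direction` is assumed to be normalized, so the quadratic's a term is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# If the sphere moves, the very next ray simply sees the new position -
# nothing to re-bake, unlike rasterization's precomputed shadowmaps.
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```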
Adding a Fourth Dimension to Three Dimensional Virtual Spaces
Fig. 2.22 Adding a Fourth Dimension to Three Dimensional Virtual Spaces cover.
The general gist of this research report, by Robina E. Hetherington and John P. Scott, is the apparent simplicity of encoding time data into a model on the pseudocode level. That is, it is not fundamentally difficult to store temporal versions of a design within the files of the design. This is significant because, again, it is so simple for architects to use these tools, or to develop them, that it boggles the mind that they have not used them yet, or frown on their use. The ability to encode time data within the design, separate from animation, could show clients, or a review board, what the design would appear like during different times of the year, which sounds like a powerful tool.
The only relevant quote (on facing page):
“This paper first outlines the capabilities of X3D to show buildings at different times or states. It then examines how temporal data can be stored within XML and combined with model data in the form of X3D. This data is then extracted and filtered on the client computer through the use of XML technologies. The way in which buildings can be displayed at different times or states along with associated descriptive text is demonstrated.” - pg. 164
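As a sketch of how simple this encoding can be - using Python's standard XML library and made-up element names, not the paper's actual X3D schema - temporal states can be attached to model elements and filtered on demand:

```python
import xml.etree.ElementTree as ET

# Each state of a design element carries the time at which it applies.
root = ET.Element("building")
wall = ET.SubElement(root, "element", name="south-wall")
ET.SubElement(wall, "state", year="1900", material="brick")
ET.SubElement(wall, "state", year="1950", material="render")

def states_at(model, year):
    """Yield the latest state of each element at or before `year`."""
    for elem in model.iter("element"):
        states = [s for s in elem.iter("state") if int(s.get("year")) <= year]
        if states:
            yield elem.get("name"), max(states, key=lambda s: int(s.get("year")))

for name, state in states_at(root, 1960):
    print(name, state.get("material"))  # south-wall render
```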
Generally, tools need to be general. A hammer that works on only one type of nail is not a very good hammer. A rendering setup that only works during day scenes is not very useful in the large scheme of things. Likewise, a system for interactively visualizing designs should remain flexible so that all architects can use it.
Service-Oriented Interactive 3D Visualization of Massive 3D City Models on Thin Clients
This research report, by Dieter Hildebrandt, Jan Klimke, Benjamin Hagedorn, and Jürgen Döllner, points out how cumbersome specialized hardware and software can become. In a system designed to visualize massive models of cities, specialized hardware was developed with specialized software, and an expert was trained to operate all of that...just to make a moving picture of a city. This is a point against the tendency of architects to make tools that are highly specific to one purpose or, worse, one project.
Fig. 2.23 Service-Oriented Interactive 3D Visualization of Massive 3D City Models on Thin Clients cover.
“Until today only “monolithic” geovisualization systems can cope with all these challenges of providing high-quality, interactive 3D visualization of massive 3D city models, but still have a number of limitations. Such systems typically consist of a workstation that is equipped with large storage and processing capabilities, as well as specialized rendering hardware and software, and is controlled by an expert who controls the virtual camera and decides which information to integrate into the visualization through a graphical user interface.” - pg. 1
“these systems mostly lack the emotional factor that is immanent to today’s presentation and interaction devices such as smartphones and tablets” - pg. 1
This is an aspect I have strangely ignored - the emotional factor of being immersed in a design. There is zero emotion, except despair, in an architectural review. Let the building speak for itself; let it inspire, motivate, drive the review. Such are the fruits of an interactive visualization system.
36
Interviews and Reviews
On September 18th, I met with Thomas Cortina, Associate Teaching Professor in Computer Science, at the Gates-Hillman Center. Below are important points from the meeting:
• Thomas mentioned a number of names I could pursue for further inquiry: Jessica Hodgins and Kayvon Fatahalian (with whom I eventually had an interview), both of whom work in computer graphics; Alexey Efros, who is at Berkeley and works with computational photography; and Guy Blelloch, who was the lead on the client-side design committee for the Gates Center while it was being built. Some of these ended up being unreachable.
• He also mentioned several libraries that I could look into (and eventually did): the ACM (Association for Computing Machinery) and SIGGRAPH, both of which could have articles and research on graphics related to architecture.
Fig. 2.24 The College of Fine Arts compared to the Gates-Hillman Center at CMU. Both reflect the style of their age: the College of Fine Arts is rigid, uniform, and measured, while the Gates-Hillman Center is open, dynamic, and constantly adapting.
• Yet a third line of inquiry he mentioned were the research branches of large tech giants such as Microsoft, Google, IBM, and Pixar, which often publish reports on cutting edge research and technology.
37
All of these paths helped me develop my literary research.
Fig. 2.25 Thomas Cortina.
On October 8th, I met with Kayvon Fatahalian, Assistant Professor of Computer Science in the Smith Hall. Below are important points from the meeting:
Fig. 2.26 Kayvon Fatahalian.
• Simple lighting can be done up to any arbitrary geometric complexity, but baking complex shadows becomes tricky, and is the area where graphics systems start taking shortcuts.
• One aspect of a thesis is making this statement: “I believe it is possible...” Where are the situations where existing tools do not meet the needs of architects; what is not good enough?
• If I asked about what architects want, the deliverable
Fig. 2.27 Near-exhaustive computation brought up during the interview.
would be a proposed solution. Conducting a survey of the efficacy of visualization software in the field would be fruitful.
• With an interactive render versus a static one, there is an aesthetic trade-off - the first looks worse, the second looks very good. What particular things do architects want to do?
• The idea of how pre-rendered videos can account for every possible virtual scenario. That, or a mix of pre-rendered and real time. How does that apply to architecture?
The biggest points I got from this meeting were to ask myself how an architect would approach such software and what they would need of it. This allowed me to move forward with software analysis.
38
During the first poster session, on September 18th, I got feedback from various professors in the School of Architecture as well as my advisors and other students. Below are points from that feedback: • A feasibility analysis would be useful, in the form of a flowchart with yes/no pathways that would narrow down the nature of the thesis. This idea I later incorporated into both the Mind Map and the software flowchart. • The architectural design process was suggested to be important to keep in mind. The problem had to be framed both from the point of view of the client (what does the client want to see?) and from the architect (what does the architect want to show?). From the first poster session I got ideas on what my midreview should include to explain and ground my thesis.
39
Fig. 2.28 Poster #1 shown at the first poster session.
The midreview, on October 21st, was when the greater ideas of how I was presenting my thesis came into play. The plot’s color scheme was designed as if one were staring at the world with one’s eyes closed. There were also brochures and my website available for perusal; the website made its official debut on that day. The midreview had the following feedback:
• What is the dimensionality of inquiry? What is too interactive? What is not visual enough? Where is this on a scale of realism to representation to abstraction? This pushes the nature of belief.
• Every tool changes the field. Speculate on what this will kill. Find how it will negatively impact architectural practice.
• In 1994 renders were made with 600 kHz processors that
Fig. 2.29 The midreview plot.
mimicked hand drawings. At some further point, firms began experimenting with realistic renderings, with no technical expertise.
• Is technology pushed just so it can wow someone? Anything with technology or design has this eventuality, but is that the point?
• There is a caveat - that I am not a technical designer.
• Lastly, comments were made to the effect of “this is a thesis. Where is your project?”
Fig. 2.30 The midreview brochure, showing both the outside and the inside.
40
The second poster session, on October 25th, was the same week as the midreview so it featured little development from the work at the midreview. It was more of a ‘coming attractions’ setup. As such I had a projector with a video setup in front of my poster showing a glimpse of things to come. The feedback from the first semester midreview and the second poster session, due to its positivity, allowed me to continue in full swing with the software evaluations. However, I knew that, for many, getting a basic understanding of my thesis was important, and I had to focus on that as well.
Fig. 2.31 Highlights from the second poster session. The top right image shows the setup with projector.
41
Fig. 2.32 QR code for gif animations of the poster.
The first semester final review was on December 8th. The review panel provided a number of new and interesting perspectives that I can use to move forward with my thesis:
• I need to address how architects will use this, especially with BIM and delivering construction documents. My assumptions are far above the set of common assumptions of architects. I need to bridge this gap. I need to look again at other firms doing this and consider why animation is paid for rather than done in-house.
• With video games, there are other aspects than the visuals that can benefit architects, like pathing, AI simulation, etc.
• When are beautiful sketches used compared to the GRID?
Fig. 2.33 The full final review plot, not including the projected video.
Is it detrimental to show this to a client, since they won’t use their imagination anymore? Different audiences will use it differently.
• Different levels of information can be shown - maybe abstraction is a tool architects want: the GRID can still have motion, but does not have to be photorealistic.
• Video games and films are made to be mass produced, very unlike architecture; also consider the social aspects.
Fig. 2.34 The projector and speaker setup in front of the final review plot. The projector was used to project a moving graphic and a video.
• This exists, so what is the question? Will it eventually become mainstream? Address the trend, and explain why it should be accelerated.
• What is a tutorial? Develop demonstrations; show not why, but how - prove by example.
42
Software Research
The second half of the semester focused on engaging software research with the literary research I did during the first half of the semester. That involved an extensive analysis of various software packages. These software packages are outlined in the following pages. The analysis will follow the same thorough path outlined on the facing page. The main purpose of this part of my thesis is, within the general context that my literary research created, to find a place for visualization software in architectural practice. This is a two-pronged development: the first prong is to actually find a capable software package that can perform baseline photo-realistic rendering and is flexible enough for a variety of applications. The second prong is to approach the problem from the side of architects: if one of these software packages is capable of these basic tasks, what advanced, architecture-specific techniques should it be able to perform? For example, should this software be able to simulate people mingling in a project? Water collecting on roofs after a heavy rain? Structural fatigue?
43
Fig. 2.35 Software research path.
44
The software I reviewed were Octane, as a plugin for Rhinoceros; Vray RT, which is part of Vray; Arauna2, a separate program; UDK and CryEngine, which are video game software suites; Blender Cycles and LuxRender, both experimental, the first built into Blender; Unity3D, a video game development suite; and Lumion, which was made specifically for architectural visualizations. On the left are comparisons for each package in each category on a subjective scale of 0 to 10. I thoroughly analyzed each package for its pros (useful features and benefits); its cons (where the software was hard to use or had drawbacks); its software context (how it related to a default installation of Rhinoceros and Vray); its rendering features (what kinds of rendering effects it could do); its rendering drawbacks (what kinds of shortcuts it took to achieve real time rendering); and its delay load (how much more time it would take to work with this software compared to a render in Vray). After considering everything, I found that none of the software achieved high points in all categories. The choices I think I have are Arauna2, Octane, CryEngine, and Lumion. Ultimately it will be either Octane, given an interactive walking script is made for Rhinoceros, or CryEngine, if I can streamline its import process. Arauna2 would be nice, but it is still in development. Lumion is almost there, but has too many interactive drawbacks
45
Fig. 2.36 Summary of the software evaluations.
and does not appear to support scripts.
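To show the shape of the comparison behind the summary figure, here is a minimal sketch of aggregating subjective 0-10 category scores per package; the numbers below are made up for illustration, not my actual ratings.

```python
# Hypothetical scores: no package leads in every category, which is the
# situation the evaluation actually found.
scores = {
    "Octane":    {"render quality": 9, "interactivity": 3, "delay load": 8},
    "CryEngine": {"render quality": 6, "interactivity": 9, "delay load": 3},
    "Lumion":    {"render quality": 7, "interactivity": 4, "delay load": 7},
}

# Rank packages by their category total, highest first.
for name, cats in sorted(scores.items(), key=lambda kv: -sum(kv[1].values())):
    print(f"{name:10s} total={sum(cats.values()):2d}  {cats}")
```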
Octane sets up very quickly once loaded in Rhinoceros. The default values are very good for an average Rhinoceros model. The controls and materials are easy to define. It can also sync with Rhinoceros’ camera. Like Vray, it includes its own sunlight and sky system, but it is built in and needs to be reconfigured if the scene already has a sun light.
Fig. 2.37 Octane Render logo.
There are a lot of options - but not all of them have much visible effect. It is GPU based, so other programs are not heavily affected. The biggest drawback is the renderer itself - path tracing appears very fuzzy until the camera stops, after which the view resolves within seconds. The camera can be set to only show the view after a few samples have been calculated. The rendering quality is fixed, so if it is slow then it will always be slow. Scene complexity does affect it somewhat. Also, the viewport needs to be updated when new geometry is created. Other cons are that lights have to be set up as emitter surfaces and it does not appear to use bump maps to simulate detail. Otherwise, it can do all material types and depth of field, has advanced camera controls - exposure, ISO, gamma, saturation, etc. - and it can be networked.
Fig. 2.38 Snapshots of Octane’s controls and render viewport in Rhinoceros. Clockwise from top left: Basic scene featuring sunlight and sky modeling, depth of field, and reflections; Another example of an imported scene, featuring materials; Complex scene rendered rapidly; Scene with millions of triangles with minimal mesh conversion into Octane; Comparison with Vray RT, with similar materials; Comparison with the regular Vray using the sample scene I created, and lighting matched as closely as possible.
The delay load is marginal. Time might be spent on setting up materials, converting lights to emissive surfaces or trying to find features of Vray that are not present in Octane - such as the different renderers, animation controls, camera types, etc.
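The fuzziness and subsequent resolving of the path-traced view can be understood with a toy model of progressive sampling - my own illustration, not Octane's actual implementation: each frame adds one noisy sample per pixel, and the running average converges as samples accumulate.

```python
import random

def noisy_sample(true_value=0.5, noise=0.4):
    """One path-traced sample: the true pixel value plus random noise."""
    return true_value + random.uniform(-noise, noise)

accum, samples = 0.0, 0
for frame in range(1, 65):
    accum += noisy_sample()
    samples += 1          # a camera move would reset accum and samples to 0
    if frame in (1, 4, 16, 64):
        print(f"after {frame:2d} samples: {accum / samples:.3f}")
```

With one sample the estimate is grainy; by 64 it sits close to the true value, which is why the image "resolves within seconds" once the camera stops.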
46
The workflow in Lumion, which is separate from Rhinoceros, is rapid and configurable. It definitely seems to come from a video game background, as it has easy quality controls (compared to CAD software, where preview controls are hard to access). The import process is fast and intuitive, with a large library of models of people, trees, and objects. It features terrain sculpting and water bodies, with an ocean that has configurable waves. However, the full version is not free. The biggest drawback is that the aim is for pre-rendered videos and images only. There is no walking mode and the camera is a standard flying camera, though it can switch to orbit via a button press. Below the ‘high quality’ setting, the rendering looks very cheap. There are only a few fixed cloud arrangements, though that is understandable given the task of photographing a variety of clouds. There is one compromising feature, though: the clouds can be adjusted in density, but this seems to have no effect on the sun, and the clouds do not cast shadows. The water customization is nice, but it is fairly fixed in style. Otherwise, models can have any materials, but refraction is by normal map only. Since it imports .obj files well, UV mapping can be done in Rhinoceros.
Fig. 2.39 Snapshots of Lumion’s controls and viewport. The menus are all flyout, meaning that once a scene is loaded it takes up the whole screen except when a menu is opened. Clockwise from top left: The sample scene with approximate shaders, note how water was used to approximate refraction; the same scene with materials and higher quality shadows, this was a performance hit on my laptop; The scene with the packaged elements included - a tree and a man, both affected by light and animated, though the man walks in place.
Using it takes only several minutes. A scene can be set up
47
with the library of objects quickly, and the camera and UI controls are fairly intuitive.
Fig. 2.40 Lumion logo.
Arauna2 is a new experimental renderer that recently revealed an evaluation version.
Fig. 2.41 Arauna2 logo.
So far it has many features: full material support, including refractive, reflective, and specular; lights support; built-in post processing and full screen filtering; and a fixed sun model. It has a very easy to use UI, though the camera controls are somewhat unintuitive. Another useful feature it has that is rare to see is its various extra rendering modes, such as normals, depth, pure GI, rendering cost, and others. Aside from the lack of a walking camera, the only drawback is that it is still in development - there is no way to test how well it imports models, or if it will have any more advanced features. The camera does not collide with anything, but one can assume there will be some way to use model collision. The evaluation version uses a Unity scene as data, but that may be temporary. It is also unknown if it will even be released as a separate program for visualization - perhaps it will only be licensed to video game developers. It does use path tracing, which is, as always, grainy during motion. One minor point is that lights had hard shadows.
Fig. 2.42 Snapshots of Arauna2’s controls and viewport. The menus are all overlays, meaning that once a scene is loaded it takes up the whole screen except where a menu is, and everything can be hidden via a button. Clockwise from top left: Pure GI shading with depth focus in the back; Path tracing with focus in the front showing light effects and simple specular; Example of full scene reflection, which had no impact on performance; Example of refraction, some caustics, and customizable light.
The delay load is unknown, but most likely marginal to fractions of an hour, depending on the import process. This renderer is very promising.
48
Vray RT is the narrowest transition from regular Vray use, though it lacks many features that the other renderers have. Its main draw is that, simply, it is a different button to press to do a Vray render. It appears to be a reduced renderer, and does not approach Vray’s usual quality, thus seeming to be only for preview purposes. Otherwise, ray-traced shadows and materials are rendered accurately. The sun and lights are still processed properly. The camera can be synced to Rhinoceros’ camera, but there are no other camera controls, like walking. However, compared to more focused efforts like Octane or Arauna2, it is grainy and resolves fairly slowly. The delay load is minimal. It is only a different button away from a regular Vray render. If nothing else can be done or used, it is an available alternative.
Fig. 2.43 Snapshots of Vray RT’s viewport in Rhinoceros. Top to bottom: The render viewport by itself; The renderer, left, compared to Octane.
49
Fig. 2.44 Vray logo.
UDK (Unreal Development Kit) is a free software package specifically made to develop video games. It is a large download (1.9 GB) that features an extensive library of models and other elements that can populate a scene, and several rapid template setups with preset sky and sun arrangements.
Fig. 2.45 UDK logo.
The import process must use Blender to convert .obj files to a file format for UDK, .ase. Then, shadowmaps bake fairly quickly, but must be baked again after any change. Materials have to be set within UDK and are limited to simple shaders. Sky and sun can be changed, and UDK has various types of lights. Collision is a matter of a toggle. The biggest drawback is that mesh import glitches at 65535 triangles, limiting the detail of complex models and requiring them to be split into several chunks. It also takes around five minutes to start. Many features in UDK are totally unnecessary for the visualization itself. The sun light does not interact with the atmosphere, requiring manual adjustment. Lastly, UDK uses vertex lighting, causing shadows to appear off or inaccurate. Otherwise, UDK has interactive walking. The camera bobs to
Fig. 2.46 Snapshots of UDK’s controls and viewport. The viewport functions just like the one in Rhinoceros, where wireframe orthographic views can be set up. Clockwise from top left: The raw scene import with basic shadows calculated; The same scene with materials applied from the included library; The content browser, which shows the materials, objects, and other elements that come with the software.
the motion of moving legs and there is a slight motion blur. The delay load can be fractions of an hour, depending on whether there are issues with the import and whether basic materials exist or can be found.
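As an illustration of the workaround for the triangle limit, here is a minimal sketch - a hypothetical pre-export step, not an existing UDK or Blender tool - that splits a model's triangle list into chunks that each stay under the limit.

```python
MAX_TRIS = 65535  # the import glitch threshold observed in UDK

def split_mesh(triangles, limit=MAX_TRIS):
    """Yield lists of triangles, each no longer than `limit`."""
    for start in range(0, len(triangles), limit):
        yield triangles[start:start + limit]

# Example with dummy triangles; a real exporter would write each chunk out
# as a separate file for import.
tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0))] * 150000
print([len(chunk) for chunk in split_mesh(tris)])  # [65535, 65535, 18930]
```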
50
CryEngine is another software suite for making video games. Even though it is newer than UDK, it runs fairly smoothly (~20 fps) on low-end systems. It also comes with a large library of models that can populate a scene, like trees and rocks. The huge drawback I experienced was that the export process is long and arduous and requires either Blender (unofficially) or 3DSMax (or Maya). The export process requires significant setup in Blender. 3DSMax export is faster, except material definition is faulty. In both, very specific steps need to be taken, with nearly every step prone to glitches, and a slip anywhere may mean improperly assigned materials or a lack of collision. However, once the meshes are imported it is fairly simple to set up a scene, especially with a template file. All material effects can be simulated with shaders, the sky and sun are realistically modeled, and an ocean or bodies of water can be made.
Fig. 2.47 Snapshots of CryEngine’s controls and viewport. The viewport only shows a 3D view of the scene, concordant with WYSIWYG. Clockwise from top left: The sample scene imported without any materials, featuring real time shadowing, sun, and sky; The same scene being tested, with materials, a shadow from the viewer, and the sky altered due to a lower sun angle; A view of the 3DSMax import pipeline, where materials are assigned.
There is interactive walking just like in UDK, with the addition of the walker’s shadow. The shadowmaps are entirely dynamically generated and approximate GI. Lighting is simple but effective. It may take multiples of an hour to bring a scene into CryEngine from Rhinoceros. Even with practice there is a lot of preparation that has to happen, and not all of it is intuitive.
51
Fig. 2.48 CryEngine logo.
Blender comes with an experimental path tracing renderer called Cycles.
Fig. 2.49 Blender’s logo. Cycles does not have a logo.
It has very few controls, which replace Blender’s default controls once it is activated; thus there is less to learn of the actual renderer once one has knowledge of how Blender works. The path tracing rendering is very fast - the scene resolves to an acceptable quality within seconds if the camera is still. Also, since Blender is free, Cycles is free as well. This also means there is a large DIY community of graphic modelers and designers. Cycles supports Blender’s light objects and material definitions, with many presets including reflective, refractive, cartoon, and others. While moving, the view is pixelated but not choppy, which is a better solution than that used in Octane. The biggest drawback is that it requires some knowledge of Blender, which has a steep learning curve. If geometry is imported from an .obj file, materials have to be reassigned. Blender’s sun, as it is handled by Cycles, does not have sunlight modeling - it is just a distant light at a given angle, though a modeled sky can be set up. Blender does not easily support walking.
Fig. 2.50 Snapshots of Blender’s controls and render viewport. The viewports in Blender can be variously configured. Clockwise from top left: The sample scene with soft shadows and full materials; The same scene with harder shadows; The scene as it resolves with one sampling, showing the graininess it begins with.
The delay load is fractions of an hour or more - added to how much time it would take to learn Blender, setting up a project here compared to Vray takes more effort, including changing mouse controls, changing how objects are placed and moved, and more.
52
Unity is similar to both the game suites and to Blender in that it is designed to make games but has its own modeling tools. Its UI is relatively straightforward. The free version has many features, enough to do basic visualizations. While it can import .obj directly, Blender may be required for additional material or UV setup. A big drawback is that many advanced features present in the other software are not included in the free version and the features that are present are fairly weak in quality. The sun and sky need to be faked to achieve various daytime lighting situations and the shadows seem to be fairly low quality and need to be calculated, a process that takes several minutes. Otherwise it has material shaders, some kind of real time shadows and supports light objects. Walking is supported after some setup. Mesh collision can be easily set. The delay load is fairly small - fractions of an hour - any extra setup in Blender and importing assets into Unity take time, although template scenes may be possible.
Fig. 2.51 Snapshots of Unity’s controls and render viewport. The viewport switches to game mode when the walking is activated. Top to bottom: The sample scene with some basic materials showing dynamic shadows; Precomputed shadows, but at a low quality.
53
Fig. 2.52 Unity logo.
LuxRender is fairly fast and comes with a material library, but it does not provide any interactivity. It is a step backwards, using new software rendering but not using it advantageously. It is a plugin renderer for Blender and works on the same level as Cycles.
Fig. 2.53 LuxRender logo.
It likewise changes various settings and generates a new viewport when the render is started. It renders a frame at a time, like Vray, and due to the new viewport it is difficult to move back and forth between the design window and the render window. Its delay load can get to fractions of an hour. Material settings and assignments are nothing like those of Vray and are somewhat clunky, on top of learning the workflow of Blender.
Fig. 2.54 Snapshot of LuxRender’s controls and render viewport. There are more controls in Blender’s menus. This is highly similar to Vray’s viewport.
54
Hardware Research
On November 18th I received a new graphics card I had purchased a few days earlier. This card was an Nvidia GeForce GTX 660 Ti, replacing an ATI HD 5770, and the reason was purely that it had hardware that enabled the use, or the faster application, of several of the software packages I looked into.
Fig. 2.55 The new card, left, versus the old card, right.
Nvidia graphics processors (GPUs) have a technology called CUDA that uses parallel processing to do graphics tasks. The software that uses this technology - Octane, Arauna2, and other path tracers - would not otherwise work with the ATI card that I had before. I was able to use Octane at reduced settings on my laptop, as it had an Nvidia card, albeit one of lesser quality, but the others would not work with that card because it was too old. The laptop card was a GeForce 130M with a compute capability (a property of CUDA technology) of 1.1, whereas the 660 Ti, by
Fig. 2.56 The new card inside the desktop tower.
comparison, has one of 3.0. The laptop card also has only 32 CUDA cores, whereas the new desktop card has 1344. Also, for the video game engines, the new card is roughly 50% stronger than the ATI card I had before, so I can push those engines further to achieve higher quality visualizations.
55
Buying the new card (a $259.99 value) was the best option for my thesis in terms of hardware because it was readily available,
Fig. 2.57 The old ATI card.
enabled the use of software for my thesis, and demonstrated that my thesis can exist without expensive or cutting edge hardware like virtual reality headsets, new means of interaction like the hardware Adobe is developing, or immersive room-sized display setups.
Fig. 2.58 The Oculus Rift virtual reality headset in action. This is an example of unattainable hardware.
The initial limitation of the low-end hardware on my laptop and the unusable hardware on my desktop still played an important part in my thesis because it showed that this software could be used on existing, potentially old, hardware, though with
Fig. 2.59 Adobe Mighty and Napoleon. Mighty is the triangular pen, Napoleon is the ruler.
severe drawbacks and shortcuts.
Fig. 2.60 Unboxing the new card. It came with an instruction manual, a drivers disc, and extra cables. The card was distributed by ASUS, which also added the cooling system.
56
Deliverables
Applications
57
Fig. 3.1 Serious Editor 3.5 by Croteam. This kind of software is used by video game developers to create virtual worlds - much like architects do with CAD software, except with materiality and lighting as part of the toolset.
Fig. 3.2 CryEngine Sandbox by Crytek. This is a much more recent video game engine and favors dynamic shadow generation over the use of pre-computed shadowmaps.
Fig. 3.3 Unreal 4 by Epic Games. This is a future engine currently in development that, while it still uses shaders, simple lighting, and other standard methods, pushes them to their limits to achieve photorealism.
Fig. 3.4 Luminous Engine by Square Enix. This is also a future engine currently in development. Engines like this are at the forefront of video game engine technology, pushing what is possible with shaders and graphics software.
Fig. 3.5 Help files and documentation for various graphics software. Clockwise from top left: Unity; Blender; UDK; Rhinoceros. These range in quality and depth, with some featuring text and image descriptions and others even including video. Unity was the only one that read from an included file, the others either embedded or opened a browser page to an online database.
58
Fig. 3.6 Fallingwater in Half-Life 2 by Kasperg. This is a demonstration of modeling a real building in a video game environment.
Fig. 3.7 House in UDK by Luigi Russo. This student project, modeled in video game software, showed that the same goals that students use CAD software for can be applied to video game engines.
Fig. 3.8 City scenes in Brigade 3 by Otoy. This is the cutting edge of cutting edge path tracers.
59
Fig. 3.9 Fox Engine by Konami. One of the images in each set is the engine, the other is a comparative real life photograph. Which images are the engine?
Fig. 3.10 Euclideon Engine. This uses a method I did not explore - voxels - as it is more about generating geometry than photorealism.
Fig. 3.11 Path tracing method, sample images. This shows an exhaustively detailed physical environment rendered with full lighting and materiality at interactive speeds. On the right, water effects are also simulated.
Fig. 3.12 Las Vegas Bellagio Comparison in CryEngine by IMAGTP. This is a photo-realistic demonstration of a real building compared to a photograph taken at the same location.
60
Moving Forward - Software Package
Depending on which software I move forward with, the next steps of my project will be either lightweight coding or heavyweight streamlining or coding. The Octane approach assumes that Octane is set up within Rhinoceros and the only thing missing is an interactive control. The range and nature of this control will vary, as simple horizontal camera control by forward impetus and turning is much simpler than also having camera bob, gravity, or collision detection.
Fig. 3.13 The Octane approach, where only an interactive script needs to be made.
The CryEngine approach assumes that it is installed and a Rhinoceros project is available, and the only thing in the way is the cumbersome and complex import process. The range and nature of streamlining this process will vary from simply documenting comprehensively and cleanly how to do it with the fewest mistakes, to attempting to enhance the existing plug-ins to automate the process further.
Once one of the above is in place, the next steps are more or less identical. After both real time rendering and interaction are achieved I need to document further features such as material assignment, any means of collaboration or portability, streamlining
61
Fig. 3.14 The CryEngine approach, where the import path needs to be streamlined.
controls, general use principles or shortcuts, and the like.
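To give a sense of how lightweight the Octane approach's missing piece could be, here is a minimal sketch of a forward-walking command, assuming Rhino's Python scripting and the rhinoscriptsyntax module; it is a starting point, not a finished interactive control.

```python
import rhinoscriptsyntax as rs

def walk_forward(step=1.0):
    """Move the active view's camera and target `step` units forward,
    staying on the horizontal plane (assumes the view is not looking
    straight down)."""
    camera = rs.ViewCamera()
    target = rs.ViewTarget()
    # Flatten the view direction so walking does not change eye height.
    direction = [target[0] - camera[0], target[1] - camera[1], 0.0]
    direction = rs.VectorScale(rs.VectorUnitize(direction), step)
    rs.ViewCameraTarget(None, rs.PointAdd(camera, direction),
                        rs.PointAdd(target, direction))

walk_forward(1.0)  # bind to a key or loop it for continuous motion
```

Camera bob, gravity, and collision detection would each add more logic on top of this, which is why the text above calls them the harder part of the range.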
After that, I would attempt to compile a software package. With Octane, that would involve everything but the software itself, as it is not free and would need to be purchased. Otherwise, there would be a zip file, or even a small self-installer, that would consist of plug-ins, help documents, videos, and so on. With CryEngine, the package would be more robust, as, at least theoretically, it may include all of CryEngine, which is hefty at 1.9 GB. Since there may be licensing issues I may also require that it be downloaded separately, but as it is free that is less of an issue. The software package, by its very existence, would be the proof of concept for my thesis. However, at this time it is of a very vague nature, since there are too many variables in how I would approach developing it. Workload-wise, developing the help files and tutorials alone is a lot of documentation, and if I choose to do some sort of scripting I would need time to familiarize myself with the scripting languages that I would need to use. Also, knowing my audience will be very important. As
Fig. 3.15 The breakdown of the software package.
different users will use it differently, I will need to frame it as such. For an architect looking to use it as a design tool, to rapidly view a project interactively with progressive visuals, it will be one thing and have a certain feature set. For a client wishing to explore a realistic simulation of a project, it will be another thing. For a contractor wishing to see the assembly of certain elements it will be yet something else.
62
Moving Forward - Benefits and Death
The software package foresees potentially great benefits in the field of architecture. To understand where these benefits come from, it is useful to review the thesis in a nutshell. THE GRID is a tool meant to preclude the physicality of an architectural design via software and hardware that is currently available, and to immerse, by visual and other means, the client or architect in the design before it is built. Since architecture is experienced through both time and space, it is necessary that such a tool exist during some early stage of design, before the design is finalized and converted into construction documents. That conversion is generally done with BIM, as BIM is accurate and collaborative. Once the BIM phase starts, the need for the GRID lessens, as it can be assumed the design will not change dramatically at that point. Using construction documents, the architect and contractor collaborate to produce a built space, too late to make many changes. In the current design process, the real space and the real time are only reachable in snapshots or animations generated beforehand (animations could be understood as simply a series of snapshots). The problem with this is that, due to the way one experiences a still image versus a physical space, there will always
63
be experiential differences between what the design was before it was built and what the design becomes after the construction.
Benefit #1 - Reducing the Gap: The first benefit of the tool is that the experiential gap is narrowed or even removed. With the ability to see a building through the medium of a computer screen with realistic shadows, movement, light, and materiality, both the client and the architect are brought to the same level. The client probably has little experience working with CAD models and renders, or animations, and lacks the preparation that the CAD model, and working with that model, gives the architect. What the client lacks is an understanding of the space. This can be done by teaching the client how to understand
Fig. 3.16 The experiential gap between a still image and the physical space.
architectural orthographics, which is arduous and exacerbates the problem (by reducing experience rather than expanding it) or the client can have what they already understand, a visual substitute for the real thing. Sketches work, but ultimately the building will be something real, and this reality has to somehow manifest early on. Benefit #2 - Personalization: This allows the client to make the project their own. By using computer interfaces they can inhabit and explore the project. The power of the simulation is that it uses the user’s own brain to their advantage by letting it translate the motion within the virtual world to motion of their physical self. The project becomes familiar and understandable. This assumes that the client was not already swayed with beautiful sketches, or other abstract representations of the
Fig. 3.17 THE GRID allows a viewer to experientially inhabit the project.
64
project. Where these other representations used imagination to create the space, the GRID will use it to explore the space.
Benefit #3 - Prototyping: Even before the client uses this tool, the architect her- or himself can use it to rapidly prototype the experience rather than the assembly or the totality alone. Architecture is far too big to be prototyped in full, and prototyping little chunks only goes so far. One can approach this by breaking down what architecture is - the memory of time and space. Memory is the passage of experience, time is a series of moments, and space is moment given shape.
Fig. 3.18 Architecture is composed of the memory of time and space. Space can be prototyped with models, time can be prototyped by simply being around and examining the models, but memory is harder because it involves tricking the brain.
Space is the easiest to prototype because an architect can build a scale model - this will provide a sense of the space. Time is also easy to prototype, because the architect need only hold the model for a while. Memory is a little harder because the brain is smart - it knows the model is just a small object. The architect needs to trick her or his brain, to get down near the model and pretend it was big, and thus come close to a memory. THE GRID does that and goes further. It also has the architect make a model, and spend time with it, and get down close to it. But it goes beyond - the architect can walk inside the model, the architect can change the lights and the time of day, the architect
65
can flood a room with water or place other people in the space. True creativity can flower then, when memory is achieved.
++++++++++
The thesis would also be malicious. As a tool it would contend with orthographics, renders, and physical models. Coming after things that can be called sketches and before construction documents, these tools have come to be standard in the design pipeline.
Death #1 - Orthographics: With orthographics, should the client be enamored with the GRID, they may not be interested in plans or sections, even though those tools still provide valuable insight into the spatial organization of a project and the interaction of the systems within or between the spaces. Likewise, if an architect is using an axonometric diagram to explain the order in a project but the client does not see that order in the GRID, the client may put less faith in the work the architect put
Fig. 3.19 A client may not care about orthographics if the GRID is compelling.
into the diagrams, demanding instead, perhaps unrealistically, that the diagrams match the experience found on the GRID. In an in-firm review the orthographics may be quickly cast aside as experiential conversations, only visible on the GRID, are brought up, raising questions as to why the architect spent time on the orthographics over working on the GRID.
Death #2 - Renders: With renders, the overlap is sharper. Given a regular pretty render and the GRID, the client may wonder why the architect bothered to take one picture when the GRID allows them to move around and take any and all the pictures
Fig. 3.20 Two architects, one waiting on a traditional render, the other already being group reviewed.
66
that they want, from any attainable angle. Back at the firm, one architect is spending many hours working on a few renders while another architect, working on the same project, in the same time finalized the GRID, rapidly creating countless renders and videos of the same project, all at an even higher quality.
Death #3 - Physical Models: Even physical models may feel the heat - much as with renders, one architect spends the whole night crafting a model while another has created the GRID, with full materiality, realistic sun shading, water bodies, and more. The only difference is, the physical model is twirled in the hands while the GRID is controlled by a keyboard and mouse. Even assuming advanced hardware exists, the physical model is 3D printed with full geometric detail...and the GRID architect uses an Oculus Rift to create a virtual 3D display that delivers a near-real experience, complete with depth information.
Fig. 3.21 Prototyping small pieces of architecture with physical models takes time and does not give an accurate rendition of the built end result. With a digital substitute, the entire project can be prototyped and reviewed.
Sketching too may be impacted, though not killed - imagine a precedent study that is not just looking at photos and drawings but exploring the GRID version of that building, perhaps modeled with LiDAR, and documenting the experience. Perhaps one step of design is quickly molding spaces on the GRID and experiencing them for inspiration. Construction documents can be reinforced by the GRID. An
67
architect can show a polished GRID on the construction site to the team, showing how the project would look - materials, shading, landscape elements and all - as one moved from one end of the building to the other. Since the GRID is intuitively understood, the contractor would not need to learn a new means of communication with the architect. Also during construction, the architect, who perhaps now sends a floor plan to the tenants of a future apartment building so that they can mock up their furniture arrangements, can send the tenants the GRID, which they can use to explore
Fig. 3.22 A client customizing a house using a real time visualization to get the exact appearance they want.
and make their choices in a medium they can understand. No more need for the billboard proclaiming a future building - just go on the website of the firm and download that building’s GRID.
68
Moving Forward - Imagination and Experience
The thesis needs to find its audience, for its audience does not know the show is on. There are certain assumptions inherent in the GRID that are far away from the common set of assumptions of architects and their connected fields. I attempted to break these down: I isolated a few and tried to address them, but many remain. One aspect that I overshadowed was the reality that the software implementation that my thesis is exploring is already present in some arenas - some firms have used this as a design tool and delivered it to clients as such. These unsung firms, however, do not themselves see the benefits of spreading this knowledge to the rest of the field. Perhaps this is because they feel entitled to uniqueness; perhaps they do not see results, or believe this delivery is more work than it is worth. Perhaps every person they use it with has desired different things from it. Different audiences will indeed react differently to the GRID. Well-entrenched firms will not allow yet another piece of software into their pipeline, while more open firms will see it as a design tool, perhaps devaluing the photorealism for the interaction and
69
layered data sets.
The data sets firms may choose could differ from the general one I focused on - that of photorealistic interaction by walking in real time. Some firms, or even the client and maybe the contractor, may desire to explore the project while only focusing on the hierarchy of spaces, or perhaps while emphasizing the structure behind the walls. Their imagination could then be guided depending on the type of communication. The imagination of the recipient, be it client, contractor, or fellow architect, would nevertheless still be engaged. While abstract sketches or diagrams can communicate, nothing yet gives the user the element of choice, the choice of experience, over memory. The choice of what can be done, over what has been done. Would this lead to a death of the outdated cultural belief that architectural products are drawings, and instead herald an age where people see architects embracing the digital? What if an architect wanted to do something other than what their profession had intended for them? What if an architect dreamed of something more, some means of taking their understanding and making it the understanding of others? THE GRID will give architects an ideal to strive towards. They will still render, still make animations, still rely on CAD. But in time, they will learn to use it, to make it shine as the sun. In time, it will help them accomplish wonders.
70
73
Fig. 4.1 Formation of THE GRID logo.
Interaction and Possibility, Part Two - Table of Contents
Table of Contents 74
Introduction 75
Unresolved Issues 75
Initial Course of Action 77
Underlying Assumptions and Argument 81
The Tutorials 85
Overview of the Visualization Tutorials 85
Tutorials 1 - 8: Visualization 86
Overview of the Interaction Tutorials 114
Tutorials 9 - 12: Interaction 115
The Templates 125
Being on THE GRID 127
Difficulties of The Tool 127
Spring 2013 Project 129
The Design Challenges 131
User Interactions 133
74
Introduction
Unresolved Issues
THE GRID as it stood at the end of the first semester had three significant unresolved issues. These issues were as follows:
1. Indecision Between Octane or CryEngine
This is a major issue because one approach, Octane, is strongly photorealistic but lacks significant interaction. The other approach, CryEngine, is strongly interactive but is worse at rendering, as it uses shaders and simplified lighting. Thus I could not use one without sacrificing what the other may provide. The option of using both existed, but then THE GRID could have become cumbersome, with too much new software to learn. Resolving this streamlined my thesis.
2. How to Demo With a Lower-End Hardware Laptop
The laptop I had at the time was fairly old and, while it could run both approaches fairly well, did not represent a modern piece of hardware or the quality of hardware that an architecture office would have. Buying a brand new
75
laptop was not an option due to cost, but if there was a way to either rent a laptop or simulate better hardware (perhaps
using my stronger desktop as proxy), then I would have been able to demo more effectively. Resolving this was not fully necessary yet would have been a great benefit. 3. Determining the Feasibility of Redistribution Since both approaches involve a whole separate piece of software my training materials would have required the user to install one or the other piece of software. Octane has a hefty price while CryEngine is free but is nearly 2 GB
Fig. 4.2 CryEngine intro splash.
in size. One or the other would have needed administrative privileges on the target machine, but if the assumption was made that the software were acquired separately then the training materials could have remained valid. Since everything already depended on Rhinoceros and/or Vray being present on the target machine, the point may have been moot on a technicality. However, on top of that there was the issue of burning to a DVD or preparing an online archive, both options that I figured out later. Resolving this enhanced the practicality of my thesis.
76
Initial Course of Action
The thesis, going into the second semester, can once again be summarized in a one-sentence, three-sentence, nine-sentence format. Note that this was the initial course of the thesis for the second semester, but as the work went on the course was expanded to include student testing and interaction.
Fig. 4.3 The second semester focused on the space between THE GRID lines of the first semester: GRID cells instead of GRID lines.
1: To Educate and Advance the Field of Architecture Using Existing Photorealistic Real Time Interactive Design and Video Game Software.
3: The education would come in the form of complex and user-oriented tutorials and instructions. Even though the veracity of THE GRID was proved last semester, a more concise summary of its importance would accompany the training materials. Two approaches, one using a piece of design software and the other using a piece of video game software, would be the focus of the training materials and serve to advance the field.
9: THE GRID is about maximizing tools already available or on the cutting edge, as very few instruction
77
manuals, or any, exist for the tools in question. These tools
focus on photorealistic real time interactive visualization of digital models, assuming that there will always be a digital model to work with in a modern project, as otherwise the project will get nowhere. Of two approaches chosen last semester, one uses a path tracing plugin for Rhinoceros that is orders of magnitude faster than Vray, but as it is a recent piece of software it has not gained popular traction The second approach is a video game editor, CryEngine, which provides significant interactivity but at reduced rendering quality. For both or either, training materials will have to be made so that a designer familiar with Rhinoceros and the Vray workflow will be able to, with the help of the training materials, use THE GRID to create photorealistic interactive real time visualizations. The training materials will go further: once the basics are set up, more advanced techniques will be detailed. Once the user is making the visualizations their benefits will become self-evident presence, interaction, and a sensory fulfillment combine to create a prototype of the experience, an ultimatype. The training materials will be packaged in a way that can be redistributed. That is THE GRID. [][][][][][][][][][]
The second semester therefore began with a focus on testing Octane and CryEngine as valid options for THE GRID. I planned to look for the pros and cons of each system in regards to user experience: How many steps does it take to go from a raw Rhinoceros model to an interactive, real time model with materials and lighting? How possible is it to add dynamic elements, like moving people, cars, or trees? How many individual pieces of software or plugins are required? Following that, I planned to quantitatively examine the output of each system to figure out what a user of one loses from the other, and vice versa.

The next step would have been to pick one, or both, and establish the nature of the training materials. This meant I had to develop tutorial paths: make basic, intermediate, and advanced tutorial sets. Each path could expand in detail but not complexity; the tutorials had to be easy to use, with quality videos and annotated diagrams and screenshots. I would have had to do a hardware spec analysis - not everyone knows what a GPU is, or CUDA. Lastly, I would have had to determine methods of distribution: digital (PDFs, YouTube, etc.) or physical (booklet, CDs or DVDs, etc.).

However, as the work went on, the course expanded beyond that. The initial development of the tutorials as an online resource was complete about a third of the way into the semester, so it was decided that I would continue with THE GRID by opening it to fellow students. This meant creating an environment on THE GRID and design challenges for the students to perform, so that the true possibilities of THE GRID could be determined. A computer then had to be acquired, since my laptop had continued to degrade in hardware quality and attempting to have the students use THE GRID on their own machines was a failure. With a computer set up and an environment made for the students, they were able to test THE GRID.
Fig. 4.4 A brief analysis of the possible avenues for testers of THE GRID. The most likely source of testers was colleagues, and the least likely were online people.
Underlying Assumptions and Argument
It was important, before work after the first semester could proceed, to determine any underlying assumptions I had about the context of the thesis and to reconsider my core argument. The underlying assumptions of THE GRID are as follows:
• Architects are in the business of making digital models before converting them into construction documents.
• Architects' use of digital models is far behind their use in other fields, such as medical imaging, geological surveying, simulations, and certain types of graphics-heavy video games.
• The technology and software used in the School of Architecture at Carnegie Mellon University is indicative of technology and software in use in architecture offices worldwide.
• Architects' presentation of digital models is limited to static renders and pre-configured animations.
• Architects have enough cash to consider upgrading hardware (if needed) and software.
• If rapid or real time photorealistic renderings are effective in other fields, they can be effective in architecture.
• Presence exists.
• A digital model can be explored with a navigational interface, effecting a better understanding of the project; this is apart from looking at plans and diagrams.
• Static images cannot represent a project that is meant to be physical.
• Architecture is digitally stagnant, and both the public and construction sectors are often misinformed as to the nature of architectural design and education. [][][][][][][][][][]
The core argument could be broken down into the following:
• Architects need a way to experientially prototype the final design in the digital realm.
• Architects - professionals and students interested in orchestrating space and time with programmatic and social implications. They deal primarily with visual and tactile experiences, but sound also comes into play. They delve into invisible properties, like the structure or wiring in a building, only diagrammatically or by reference, i.e. leaving reasonable room for those features so that they can be developed when the project is converted into a format that can be physically constructed at full scale.
• Need - architectural stagnation is brought on by reliance on old tools, practices, and paradigms. Therefore, a need arises to adopt more modern ideas.
• Way - tools, methods, repeatable and reliable practices.
• Experientially - the experience is the overarching theme of architecture: the sensory interface with space and time leads to memory, which is understood as experiences. There is no way to experience a building before it is built, but photorealistic digital tools combined with interactive controls can be an effective surrogate.
• Prototype - to find out what it is like before it exists. Think of cars, books, phones, and toys - all can have a rough full scale prototype developed rapidly during design. Architecture is too big to be fully prototyped, and prototyping a piece is like printing only a page of a whole book or plastic-injecting only a wheel of a toy car - it is accurate, but limited.
• Final design - while design continues until the last brick is in place, in a best case scenario the major motions of the project are figured out before construction documents are made. The final design is a 3D, fully digital model. This is the case per the first underlying assumption.
• Digital realm - a fully computer simulated environment with physical assets (like textures) imported. This is important because, as assumed, the digital realm has been successfully used in other fields using advanced software that is currently available. The digital is here to stay and must be embraced.
The Tutorials

Overview of the Visualization Tutorials
The CryEngine tutorials at whatisthegrid.tumblr.com are the first step in analyzing how a user experiences THE GRID. They are a sequence of twelve tutorials - instructions for use - that guide someone who is familiar only with Rhinoceros and Vray, and who has access to 3dsMax, through using THE GRID's chosen software, CryEngine, to arrive at a fully interactive visualization of a chosen model of one of their projects. Each tutorial outlines, with text and image, the basic steps of what needs to happen and what can happen on THE GRID.

The website and the presentation of the tutorials are designed to convey a sense of fluidity, of breaking the rigid grid lines and uncovering the cells within. Website elements fluidly appear and shift, the mouse cursor produces a trail of tiny grid cells, the tutorials themselves are laid out in a flowing manner, and a sidebar available on each tutorial's page allows for direct travel from tutorial to tutorial. The same website was supposed to eventually also contain design challenges, which would have been part of testing THE GRID with potential users.
Fig. 5.1 QR code for the tutorials.
1. Installation

In this tutorial we will learn how to install CryEngine and some tools for 3dsMax. THE GRID requires CryEngine and 3dsMax on top of the core modeling software, which is assumed to be Rhinoceros. CryEngine is currently only available for PC, so switch to one if available. For this tutorial, it is assumed that both 3dsMax and Rhinoceros are already installed.

To install CryEngine, you must first set aside some time and have a good internet connection, as the download is roughly 1.9 GB. Visit cryengine.com. The home page should have a top bar much like the one below:

Click on 'GET CRYENGINE.' A new page appears as below. Click on the 'FREE SDK' monitor.

In the next screen, click on 'DOWNLOAD NOW.' Above this button you can review the 'System Requirements' and see if your machine can run CryEngine. CryEngine is a fairly flexible system, however, and can handle many different hardware setups.

At the last screen click 'Download now' to initiate the download. Note the size below - even at high internet speeds this may take a while.

Once downloaded, extract the archive to C:\CryEngine\Bin64\ if your operating system is 64-bit or C:\CryEngine\Bin32\ if your operating system is 32-bit, using your favorite file browser. Locate Editor.exe. Right-click this executable and select 'Copy'. Then, go to your desktop, right-click anywhere, and select 'Paste shortcut.'

Now, navigate to C:\CryEngine\Tools\ (or the corresponding folder for an installation elsewhere). Right-click CryToolsInstaller.exe and select 'Run as Administrator' with the shield next to it. Accept any notifications. A window like the one below should appear.

Hit 'Next.' You can uncheck all but 'Autodesk 3ds Max.' Hit 'Next' again. The installation should be a success. Hit 'Close.' If Windows pops up a warning message, hit 'Cancel' and ignore it. 3ds Max has now gained useful tools for exporting to CryEngine.

In the next tutorial we will learn what needs to be done within Rhinoceros to prepare it for export.

2. Setting Up Rhinoceros

In this tutorial we will learn how to set up Rhinoceros to properly handle exporting to 3dsMax. There is actually very little setup that has to be done in Rhinoceros. Most of the CryEngine-specific settings are established in 3dsMax itself.

To start off, begin by opening Rhinoceros. Wait until it loads and presents you with an empty file. The only thing to make sure of with your Rhinoceros model is that everything that you want to have a material has a material assigned to it, either directly or by layer.

So, begin by opening a typical Rhinoceros model. On the Properties tab on the right, select an object and locate the 'Material' button. If the tab is not there, you can type 'Properties' to bring it up. In the 'Assign material by' dropdown, note the various options. From here you would either select 'Object' from that menu ('Layer' is the default) or set the material by layer, which is done in the layer panel by clicking the colored circle under the 'Material' column for each layer. Both choices would end in the same window or tab.

Above, third, is the appearance if you had chosen to set the material by object. You can either start modifying the default material, which will automatically create a copy preserving your changes and naming it with a default name, or you can press the 'New...' button to create a new material. Pressing that button will open the preceding window. Hit 'OK.'

Either way, you will end up with a material assigned that is named 'New material 001' or similar. That is fairly non-descriptive, so it is a good idea to rename it. It will be helpful to name the material after its real-world equivalent, as later on it will be easier to assign properties to it in CryEngine, when all you will have is the name of the material (as object-material assignments can only be done in Rhinoceros or 3dsMax). Right-click the material name and select 'Rename...' to give that material a name.

In the settings below, you can change the basic color by clicking on the colored rectangle to the right of 'Color' in the 'Basic Settings' section, or change the texture by clicking on '(empty - click to assign)' in the 'Textures' section. These settings and UV mapping are for more advanced modeling. If you want to explore, the texture mapping can be accessed by clicking the 'Texture Mapping' button at the top.

In the next tutorial we will learn how to use CryEngine and how to load a template.
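As a quick check before moving on, the material requirement above can be audited with a short script. The following is a minimal RhinoPython sketch (run from Rhino's EditPythonScript editor), assuming the standard rhinoscriptsyntax material calls behave as documented; it selects and reports any object that would leave Rhinoceros without a material, whether assigned by object or by layer.

    import rhinoscriptsyntax as rs

    # Flag every object that would export without a material,
    # whether its material comes from the object or its layer.
    for obj in rs.AllObjects():
        layer = rs.ObjectLayer(obj)
        if rs.ObjectMaterialSource(obj) == 1:    # 1 = material by object
            index = rs.ObjectMaterialIndex(obj)
        else:                                    # 0 = material by layer
            index = rs.LayerMaterialIndex(layer)
        if index == -1:                          # -1 = no material assigned
            rs.SelectObject(obj)                 # highlight the offender
            print("Missing material on layer: " + layer)

Anything the script selects should be given a named material before the model heads to 3dsMax.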
3. Loading a Template

In this tutorial we will learn how to work in CryEngine and how to load a template world prepared with a basic scene into which models can be dropped. Begin by launching CryEngine.

Here you will probably need to sign in to the CryDev network. Go to crydev.net and click on 'Register' in the upper right; this process is free and Crytek does not bother you with emails or whatnot. Once you have credentials, sign in and it will remember that computer forever.

You will be presented with the default layout and a welcome box. Note that I reorganized the toolbars a bit to optimize space. For now, let's see what a blank level looks like and why it is useful to work from a template. Click on 'New Level' on the welcome box to start with a blank level.

You will be presented with an options box. Give the level any name, ignore the terrain settings, and click 'OK.' You will be presented with another options box, this time for the terrain. Keep it as it is and click 'OK.'

Now you will be presented with a view into the virtual world containing a large ocean, sky, and ground some distance under the water. The first thing to do is rearrange some stuff (you may have noticed the previous views looked different than the default). The default layout is below:

To save some space, merely click and drag the two lowest menus among all of the buttons in the top bar to the side so it appears like it does in these views. Also, feel free to drag the bar on the far right out and close it. Next, note that you may have the info overlay turned on, as below. Click the 'i' in the upper right corner above the viewport to cycle through the displays until it is off or showing information that you find useful. This overlay outputs a number of data values that are significant to the rendering engine of CryEngine but are largely not useful for the purposes of THE GRID.

The default navigational controls are: 'W' = forward motion, 'A' = leftward motion, 'S' = backward motion, and 'D' = rightward motion, while holding the right mouse button pivots your view in two angular dimensions (i.e. you can turn, but not tilt, your head). Near the bottom center of the viewport is a value that controls speed, next to 'Speed.' Control this to move faster or slower. Take some time to get used to this navigation mode.

Now, while hovering somewhere in the air, press 'Ctrl + G'. You will appear as a guy holding a gun with visible overlays and fall down into the water. This is the case because CryEngine is by default an engine for a first person shooter series of video games, and as such has the main character, controlled by you, hold a gun when he begins his life. This is one reason to use a template - setting up CryEngine so that it has you start as a peaceful, gun-less, overlay-less civilian takes some steps, and it is easier to start off with them already accomplished. To navigate, the same WASD + Mouse controls work here too.

To exit the simulation, press 'Esc'. Note how the camera returns to the previous mode at the same spot where you were as the character. This is called WYSIWYG (What You See Is What You Get) editing - it preserves the real time and real location of you, the controller, when shifting from editing to testing.

Now, on the main bar up top, under 'Terrain' select 'Edit Terrain.' In the new window, under 'Modify', select 'Set Ocean Height.' In the pop-up, enter '0' and click 'OK.' You can close the terrain editing window. Now you will see that the ocean has effectively disappeared. If you press 'Ctrl + G' now, you will either fall to the ground, survive, and be able to walk around, or fall and die. If you die, simply press the left mouse button to reappear where you died. This is close to a template that would simply have an endless plane of dirt, grass, concrete, or other simple material.

What is important to note here is that while the ground you fell on appears flat, it is actually a series of tessellated triangles. You can see this by pressing 'F3' and flying out until you see some distance across the ground. The area nearest you will have denser triangles; beyond that, in large squares, the triangles will merge into larger triangles; and farthest of all, the triangles will be huge and take up large chunks of space. Pressing 'F3' again returns the view to normal.

This plane can be molded and shaped into slopes, hills, mountains, and other geologic features. Integrating this technology into the design process could be useful with training and planning (i.e. making room for the terrain in the original Rhinoceros model). Right now, it will only come into play as part of various templates. Below is an example of rough slopes quickly painted in CryEngine.

To control the time of day, click on 'Lighting' under 'Terrain' up top. This will bring up an options window where you can set the time of day, north offset, latitude, and tweaks on dawn and dusk times.

So far we have gone over loading a level, navigation, testing, ocean control, the nature of the terrain, and controlling sunlight. These are all basic skills and are enough to effectively use THE GRID to interact in real time with a design model. Now the only things needed are loading a template and importing the actual model.

Download the template packages from here. Unzip the contents of the package file using your favorite zip program (Explorer can also do it natively) into the \CryEngine\GameSDK\Levels folder on your machine. Now, either click on 'Open' under 'File' or hit 'Ctrl + O.' In the window that pops up, open up the 'Templates' folder and choose any template that you want.

Now enter the simulation (press 'Ctrl + G') anywhere near the ground. You will now see that you are gun-less and without the overlays.
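For reference, the editor controls introduced in this tutorial are collected below; all of them come from the steps above.

    W / A / S / D           move forward / left / back / right
    Right mouse (hold)      pivot the view (turn, but not tilt)
    'Speed' value           movement speed, bottom center of the viewport
    Ctrl + G                enter the simulation as a character
    Esc                     exit the simulation back to editing
    Left mouse (when dead)  reappear where you died
    F3                      toggle the terrain tessellation view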
The last thing to note is that, when loading a template, unless you want to tweak the template itself, you should always save as (in the File menu) when you drop your models in. If you attempt to save one of the templates, you will get a warning to remind you.

In the next tutorial we will learn how to import into CryEngine from 3dsMax.

4. Importing

The entire importing process is as follows: a model in Rhinoceros is exported as an .obj file; that file is imported into 3dsMax; the CryEngine tools in 3dsMax are used to export that model into a format that CryEngine can understand; then the materials are assigned in CryEngine.

The first step is to prepare a project folder that CryEngine can access. CryEngine uses a separate folder for each asset, be it an object (a model), a level, or any other element, many of which are not necessary for THE GRID. Take a look at the folder contents below:

There are two Rhinoceros files, .3dm and .3dmbak, one CryEngine object file, .cgf, a 3dsMax file, .max, a material file, .mtl, and an interchange file, .obj. The first four files (the .3dmbak file is an automatic Rhinoceros backup file) and the last file contain largely the same information, namely the 3D information of the model. The remaining file contains material definitions that 3dsMax exports so that CryEngine can apply them to the model. Note that the path is \CryEngine\GameSDK\Objects\[name]. The folder name must match the name of the .cgf file so that CryEngine, when browsing for the file, understands what is inside the folder. Whenever moving to CryEngine it is useful to either copy the most updated Rhinoceros file to this folder and work from it, or to export the .obj file from Rhinoceros directly to the folder so that 3dsMax already has it set as the project location.

Begin by opening 3dsMax (with Rhinoceros and its model already open):
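To make the folder contents concrete, a typical project folder might look like the sketch below; the name 'myproject' is a placeholder for your own project, and remember that the folder name must match the .cgf name.

    \CryEngine\GameSDK\Objects\myproject\
        myproject.3dm       working Rhinoceros model (copied here)
        myproject.3dmbak    automatic Rhinoceros backup
        myproject.obj       interchange file exported from Rhinoceros
        myproject.max       3dsMax session saved in the project folder
        myproject.cgf       CryEngine object file exported from 3dsMax
        myproject.mtl       material definitions exported by 3dsMax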
While it is opening, switch to Rhinoceros. One thing that is important to note is that CryEngine works in centimeters as the default unit. Whenever you want to export, switch to centimeters as the model unit using the command 'Units' (accepting the automatic scale). Now, select what you want to export. Use the command 'Export' or find it in the File menu.

Find your project folder and export the file with the same name as the folder. Use the default settings, as seen below, making sure that 'Map Rhino Z to OBJ Y' is off. Do not worry about the 'Export material definitions' checkbox just below that; material export and handling will be covered in the next tutorial. In the next window, again stay with the default settings; however, make sure to later explore the effect of changing the mesh detail on curved surfaces.
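The unit switch and export can also be scripted so neither step is forgotten. The following is a minimal RhinoPython sketch under a few assumptions: that rs.UnitSystem applies the same automatic scaling the 'Units' command offers, that the default .obj settings are acceptable (so '_Enter' simply confirms each prompt), and that the path and the name 'myproject' are placeholders for your own project folder. The exact number of '_Enter' presses may differ by Rhino version.

    import rhinoscriptsyntax as rs

    # Switch the model to centimeters, CryEngine's default unit,
    # scaling the geometry to match (3 = centimeters in Rhino).
    rs.UnitSystem(3, True)

    # Export the current selection as .obj straight into the
    # CryEngine project folder, accepting the default settings.
    if rs.SelectedObjects():
        rs.Command('_-Export "C:\\CryEngine\\GameSDK\\Objects\\myproject\\myproject.obj" _Enter _Enter')
    else:
        print("Select the objects to export first.")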
Now that you have an .obj file, return to 3dsMax. Next, make sure the units are translated properly. One unit in 3dsMax equals one centimeter in CryEngine, so it is important that the units in 3dsMax are what they will be in CryEngine. In the 'Customize' menu at top, select 'Units Setup...'. In the window that appears, set the units to 'Metric' and 'Centimeters' and click on the 'System Unit Setup' button. In the new window, set it so that 1 unit equals 1.0 Centimeters, as below:

From the default screen, open the menu at the upper left corner and select 'Import'. Find your .obj model and import it with the default settings. Again make sure 'Flip ZY-axis' is off. You will be presented with your model in 3dsMax. Note that any NURBS surfaces are now triangulated meshes.
Now, press ‘M’. In the new window, first double-click the only entry in the list on the left, which should be named something
like
‘default
(standard)
[object_1,object_2,
{etc..}]’. This is a basic material applied to everything you exported, even if the original objects in Rhinoceros were from separate layers.
96
Next, in the center pane, double-click the object that
this tab, click on ‘More...’ and in the new window double-
has appeared in the blue header. You should now see click ‘CryENGINE3 Exporter’. This will open the CryEngine something like this:
export settings. To make sure it is properly synced with CryEngine, click and drag the black scrollbar on the right down until you reach a closed ‘Options’ section. Click on it to open it and then click on ‘CryENGINE3 Settings’. Wait a bit, then in the new window make sure it looks as below:
Next, where it says ‘Blinn’ click and select ‘Crytek Shader’. In the new options just below, click on ‘Physicalize’ and in the blank drop down next to that select ‘Default’.
It may be necessary to hit ‘Scan for builds...’ if it does
Now you can close the material editor. In the next tutorial, not have the CryEngine location set. Close this window and we will explore this window in more detail. For now, what you just did was tell CryEngine that everything that you exported is a physical object and must have collision generated for it. Without this step for everything that is solid, CryEngine will not create collision and when you test the model in CryEngine you will simply walk through the model.
97
Now, click on the little hammer on the upper right. In
scroll back up in the pane. Hit ‘Ctrl + A’ to select everything and click on ‘Add Selected’ under the ‘Geometry Export’ section. Now scroll a bit down until you see a blue ‘Export Nodes’. At this point you must save the 3dsMax model in the project folder, as otherwise the exporter will not work, if you have not already saved it. Once saved, click on ‘Export Nodes’. A window appears and the process should be
fairly rapid. Once it is finished you can close the window. Be aware that, once imported to CryEngine, it will update in real time the model file, so this import process needs to be setup only once per model, though you still have to manually export from Rhinoceros and from 3dsMax. Now, open CryEngine to a template level. In the RollupBar on the right, click on ‘Brush’. This will show the browser below. Navigate to your project folder and model.
At this point you can test (Ctrl + G) and walk around your model. If you want to move the model around, press the little cross of arrows in the top bar next to the mouse icon. In the next tutorial we will see how to replace that ugly ‘Replace Me’ with something else more fitting, as below:
If nothing went wrong, it should appear. Select it and drag it in anywhere in the template. It will appear at the proper scale with a default material that yells ‘Replace Me’.
98
5. Material Setup

Now it is time to delve into materials and textures. In Rhinoceros, prepare your materials as usual, except make sure to name them separate and descriptive things so that they will be easier to track later on. For now only set a basic texture for each material; it will be easier to set other maps in CryEngine anyway. For this tutorial we will assume you have at least two materials, as the procedure for more than one material is different than the procedure for a single material.

Then, export as before, making sure to check the 'Export material definitions' option near the bottom of the export window. This will produce a separate .mtl file named the same way as your .obj file. Now, open 3dsMax as before (preparing the units) and import your model. Make sure 'Import materials', 'Import into Mat-Editor', and 'Show maps in viewport' are checked in the lower right section, as below.

The model should import with the materials visible, 3dsMax importing the material file paths as defined in Rhinoceros. Press 'M' as before, when you set up a basic model. Double-click (or drag into the field) each of your materials, which should be named as you named them in Rhinoceros. Physicalize them as before.

The next step is to combine these materials into one material that CryEngine can understand. What we will actually do is assign a different material ID to each set of objects - all of the objects which had the same material assigned - so that the material within CryEngine can apply the proper sub-materials to all of the objects. Each sub-material will be one of the materials in the model. The model will then have a multi-material applied. Arrange the materials in a visual way so that you can easily tell which will be ID 1, ID 2, etc., or simply remember how they should be, or name them with numbers.

By default all objects have a material ID of 1. So for a model with more than one material we will have to set all objects of a material beyond the first with IDs greater than 1, corresponding to which material they have. For the second material, double-click on the material and click on 'Select by Material' on the end of the top bar, shown following:

In the new window, each object of that material will be selected. Hit 'Select'. The objects with that material will now be selected. In the right panel, click on the little blue rainbow and in the 'Modifier List' dropdown select 'Material' as below:

In the 'Parameters' box which appears below, change the 'Material ID' to 2. Repeat this for each set of objects by material. Whenever you modify the model (by deleting the imported geometry in 3dsMax and importing again), you will need to reassign the material ID but not the material settings in 3dsMax (unless there is something you want to change about the material). This may get tedious, but there does not seem to be any way to export the material IDs straight from Rhinoceros without some kind of script (a sketch of such a script follows this tutorial's recap).

Now, right-click in the middle field and select, under 'Materials', 'Multi/Sub-Object'. Once it appears, double-click it and in the options on the right, click on 'Set Number'. Set that to how many materials you have. Then, click and drag in the field from the right little circle of each material to each little circle on the left of the multi-material in the order that you determined. Note how the multi-material has ID numbers set for each material in order and updates when you connect the materials. An example using two materials is below:

When you are done, have the multi-material selected and click on the main viewport. There, select everything (Ctrl + A). Now, back in the Material Editor, click on the fourth button from the left in the top bar, 'Assign Material to Selection'. If the order of the materials with IDs assigned and the ones connected was the same, the viewport should not change. If it does, find the error in ID assignment or connection and rectify it (you can either reassign IDs as before or change them in the multi-material options). Now, proceed to export using the CryEngine3 Exporter utility. Remember to save the 3dsMax session in your project folder in CryEngine's folder (\CryEngine\GameSDK\Objects\).

In CryEngine, open a template level and open the 'Brush' menu. Drag your model to the level. By default it will display 'Replace Me' all over it. In the RollupBar where you just saw the browser, it should have changed to show various properties of the Brush. Next to where it says 'Mtl:' it should say <No Custom Material>. Click on this. In the Material Editor that opens up, navigate to your project folder and note how it has the original exported .mtl file there. Ignore that and right-click on your project folder and select 'Add New Multi Material'. Right-click the new material and select 'Set Number of Sub-Materials'. Set that to the number of materials you have. For each sub-material that appears under that material, right-click and select 'Rename' and rename them as needed, in order. Your material editor should appear as below:

Here we can get acquainted with the material editor. Note how there are several sections: Material Settings, Opacity Settings, Lighting Settings, Advanced, Texture Maps, Shader Params, Shader Generation Params, Vertex Deformation, and Layer Presets. For the purposes of THE GRID we will only use the first five, except for 'Advanced'.

The first section is only useful as far as setting material surface properties (how it reacts when you walk on or hit it). Under 'Surface Type' you will find various surface types. You will set these as needed when you customize each material. The second section may be useful for glass and water with the 'Opacity' setting, which is self-explanatory. Under 'Lighting Settings', nearly all settings will be useful. 'Diffuse Color' sets the overall brightness of the base material texture. 'Specular Color' sets the shininess color. 'Glossiness' sets the spread of shine. 'Specular Level' sets the amount of shine. 'Emissive Color' does not seem to do much. 'Glow Amount' is the amount bright parts of the material will visibly glow. The last group is where you set the actual textures directly, by pressing the '...' button.

Now, select the multi-material, have your model selected, and click on the button in the top left, 'Assign Item to Selected Objects'. Without texture assignments, nothing should happen, but now you can play with the glossiness or opacity settings.

The reason we have not set any textures yet is that they have to be a certain file type. CryEngine does have a plugin for Photoshop which allows it to export texture files specifically set for CryEngine's use. However, it seems that Photoshop's default file saving creates files that work. The file type that the textures have to be is .tif, which is available by default in Photoshop's options for file type when you save a file. Open your texture files in Photoshop and save them as .tif (TIFF, at the bottom) in your project folder. I have found that 'LZW' compression works; ignore the other options.

Now, for each material, scroll to where it says 'Diffuse' under 'Texture Maps' and click on '...' to the right of it. Locate your texture and set it. It may take a few seconds for CryEngine to import it, but when it does, the texture should appear on your model. If you want, you can set the other maps; the treatment is similar to how it is in Vray. Repeat for all of your materials.

If your materials appear too bright, remember that you can darken the 'Diffuse Color' in the third section. If you want to adjust tiling for each map, you can click on the black triangle to the left of each map, then click on the triangle next to 'Tiling', and there set both 'TileU' and 'TileV' to whatever you want. If you have many materials or many models and you cannot tell which material is being used where, you can use the eyedropper button in the Material Editor on the top bar, fourth from the left, and click on the material in question in the main view. See below:

One thing to note is that, for some reason, CryEngine does not update the physical proxy (collision) when you make changes to the model before exporting from 3dsMax, even though it updates material assignments. So, when you change your model based on input from THE GRID, re-export it from 3dsMax, and open it in CryEngine, you will need to place it again and reassign its material, otherwise the collision from the previous version will remain. I have found no other way around this problem.

To recap, the following is needed the first time materials are set up:
• Set materials in Rhinoceros. Export to 3dsMax and drag materials into the Material Editor there.
• Set material IDs for each set of objects based on its number in order of all the materials.
• Create a multi-material that connects all the materials in order and assign it to all the objects of the model.
• Export and open in CryEngine and create a multi-material from scratch and assign it, setting all textures.
When changing the model in Rhinoceros, only the following has to be done:
• Set material IDs for each set of objects based on its number in order of all the materials.
• Create a multi-material that connects all the materials in order and assign it to all the objects of the model.
• Export to CryEngine, delete the previous model there (even though it updated in real time), place the model again, and reassign the material.

In the next tutorial we will explore how to set up lights.
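Since, as noted above, there seems to be no way to export material IDs straight from Rhinoceros without some kind of script, here is a minimal RhinoPython sketch of such a script, assuming the standard rhinoscriptsyntax calls. It does not set anything in 3dsMax; it only prints the model's materials in a stable order so the same numbering can be followed by hand when assigning material IDs and wiring the Multi/Sub-Object material.

    import rhinoscriptsyntax as rs

    # Build an ordered list of the materials actually in use,
    # honoring both by-object and by-layer assignments.
    order = []
    for obj in rs.AllObjects():
        if rs.ObjectMaterialSource(obj) == 1:    # by object
            index = rs.ObjectMaterialIndex(obj)
        else:                                    # by layer
            index = rs.LayerMaterialIndex(rs.ObjectLayer(obj))
        if index != -1:
            name = rs.MaterialName(index)
            if name not in order:
                order.append(name)

    # Follow this list when setting material IDs in 3dsMax.
    for i, name in enumerate(order):
        print("Material ID %d -> %s" % (i + 1, name))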
6. Lighting Setup

In this tutorial we will learn how to work with CryEngine's entities and create lights for the imported model. There is no way to import lights from Rhinoceros to CryEngine, so the best way is to set up lights directly in CryEngine.

Open up your model and on the RollupBar click on 'Entity', and in the browser open 'Lights' and drag a 'Light' into your model. Note that it may appear that your light is invisible. All lights in CryEngine are by default point lights (except the sun) and are merely light emitters, with no geometry information.

Leave it alone for now, as now is a good time to learn about helpers. Helpers are visual guides that aid in setting up CryEngine scenes. Try pressing and holding 'Shift' to show world grids and the selected object's relation to those grids. If your light is invisible, hit 'Shift + Space bar' to make it appear as a little bulb. Lastly, press and hold the space bar to show object names.

Now that we have our light and can see it, it is time to familiarize ourselves with the settings. With the light selected, examine the 'Entity Properties' section in the RollupBar. Many of the properties are self-explanatory: Active, Radius, Diffuse (color), DiffuseMultiplier, AreaLight, PlaneHeight and PlaneWidth.

To set up a realistic light, first change 'Castshadows' to 'LowSpec'. This will ensure that its dynamic shadows are always available. By adjusting the radius, color, and multiplier, or switching it to an area light, you can already create a varied set of lighting situations. To slightly improve the quality of the dynamic shadows, set 'ShadowResolution' to 2.

'HDRDynamic' makes the light brighter in the HDR, so the engine calculates it relative to other lights. The sun is at about 3 for this value. 'SpecularMultiplier' makes highlights generated by the light brighter. If you are making an area light, you can rotate it using the 'Select and Rotate' tool in the top bar.

To create a region of shadow, or otherwise globally affect a region without applying shadows from a specific point, activate 'Ambient'. The color of the light will now affect the geometry around it nonspecifically. If you set the color very near 0,0,0, you can darken areas. This could be useful to tweak interior spaces that are not directly shaded by the sun.

To add a flare to a light, such as bright lights might create, click on the field next to 'Flare' and press the 'D' button. This will bring up the Lens Flare Editor. In the upper left, click on the folder icon and double-click the only entry, sample_flares.xml. In the left pane, open up the various categories and select a flare. To apply it to the light you had selected, click on the fourth button from the left in the middle portion of the top bar, 'Assign Item to Selected Objects'. Close the Lens Flare Editor and click on 'Flare Enable' under 'Flare'.

Lastly, to copy a light, or any object, press 'Ctrl + C' and move your mouse to where you want the new object to be. If you press 'Ctrl + C' again, CryEngine will drop the object at the mouse and make a new floating copy that follows your mouse.

In the next tutorial we will learn about vegetation, clouds, water, rain, and fog.
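Since lights cannot be imported from Rhinoceros, the next best thing is a manifest to recreate them from. The following is a hedged RhinoPython sketch, assuming the standard rhinoscriptsyntax light calls; it prints each Rhino light's position and color so equivalents can be placed by hand in CryEngine. Positions print in model units, so if the model was exported in centimeters the coordinates read directly against the CryEngine scene.

    import rhinoscriptsyntax as rs

    # 256 is Rhino's object type filter for light objects.
    lights = rs.ObjectsByType(256)
    if not lights:
        print("No lights in this model.")
    else:
        for light in lights:
            loc = rs.LightLocation(light)
            color = rs.LightColor(light)
            print("Light at (%.1f, %.1f, %.1f), color %s"
                  % (loc[0], loc[1], loc[2], color))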
7. Environment Setup

CryEngine's environmental effects cover grass, trees, clouds, rain and snow, bodies of water outside of the ocean, global haze, and local fog. Grass and trees are part of the same system, called Vegetation, and exist in some form in each of the template levels. For the purposes of THE GRID we will not go into detail on how to set up grass and trees; however, we will learn how to add more to a level from the ones already existing in the level.

Begin by opening any template level. In the RollupBar, select the second tab at the top, which looks like a half pipe. In this tab, click on 'Vegetation'. Note how in the lower field there are already entries, probably called Grass and Trees, or just Grass. Before we add more, click and drag in the main viewport anywhere there is grass or trees. This will select those instances.

With this selection you can operate as you would with an object: you can move it, delete it, rotate it, etc. Now, to add more, we use a process called painting. (Incidentally, this is similar to how you make terrain.) Click on any of the object entries, click on 'Paint Objects' above, and set the slider under 'Brush Radius' to somewhere in the middle of its bar. Fly up so you can see some ground and click and drag anywhere in the viewport. Objects of that type should appear where you paint.

Note that some objects will not appear under a certain brush radius. This is because their density is greater than the set radius, so the probability of them appearing in the brush area is 0. To delete, hold Ctrl and paint over what you want gone.

Clouds are also something that is present by default in the templates and likewise do not need to be explored in detail. To add more clouds, switch back to the default tab in the RollupBar and, under 'Entity', navigate to 'Render' and drag one of 'Cloud' and 'VolumeObject' into your scene. The Cloud entity is what all of the templates have, except way up in the sky and scaled to cloud size. The VolumeObject is supposed to have superior shadowing and volumetric effects, but I have found that it does not really add much and is generally too bright. To scale either, use the 'Select and Scale' tool on the top bar, where the other operation tools are. There are different cloud presets, but the default ones are fine.

Another way to add clouds is to change the HDR sky to a skybox, which sacrifices time of day lighting in favor of a realistic sky scenario. To change to a skybox, click on the halfpipe tab and then on 'Environment'. Scroll in the field below to where it says 'SkyBox' and under that 'Material', click on the field on the right, and there press the '..' button. This will open the Material Editor. Navigate to materials>sky and pick anything there. Next, go back to the RollupBar and in the same field click on '<'. This will apply the selected sky to the scene. Note that the skyboxes interact only partially with the sun and will not work well at night. Also, the custom clouds might not appear to fit. To go back to an HDR sky, set the material to 'sky' in the same material folder.

Rain and snow are handled the same way. Both are special particle effects; however, they are not advanced enough to interact with geometry (i.e. they still appear indoors), so it is best to use them only outside. To add either, we need to open the DataBase View. Under 'View' on the top bar, in Open View Pane, find 'DataBase View' and click it. In this window, click on the 'Particles' tab on the top and then click on the folder icon on the top bar on the left. Double-click on 'Libs\particles\weather.xml'. This will load a folder structure. Open up 'Rain' or 'snow' and drag either entry into the level. There are some settings that could be changed, but the defaults are fine.

Rain, above, first appears fairly thick and faint, so it may be a good idea to scale the droplets down. In the RollupBar, under 'ParticleEntity Properties', find 'Scale' and adjust it as needed. Snow, above, renders better and may be a good size visually from the start. You can also try adjusting 'CountScale' in the RollupBar, though it may take a few seconds for changes to appear.

Bodies of water are easy to set up. To create one, in the default RollupBar pane click on 'Area' and then 'WaterVolume'. Click multiple times in the viewport to draw a shape for the volume in plan. On the last point, double-click to finish the shape. Right now the volume exists, but it does not have a material set. Just like with a model, set the material to anything in materials>water>watervolumes. The volume will now become visible. By default it has a thick fog inside it and a depth of 10. In the 'WaterVolume Params' section in the RollupBar, play with the fog settings and Depth as you see fit.

To adjust global haze, we need to use the Time of Day dialog. Find it on the left. It may be necessary to activate the advanced properties. It might be useful to rearrange the tabs to appear as below, to make it cleaner. On the 'Tasks' pane, click on 'Toggle Advanced Properties'. Then switch back to 'Parameters'.

Here, scroll down to the Fog settings. The most important thing to understand here is that any and all of these values are animated over the length of a day. This is shown in the graph field on the left. Click around on a few of the values. Note how many have different curves plotted over time (top of the graph) from 0 to 24 hours.

Basic operation of the graph window is as follows: middle-click and hold to move, double-click on the line to add a point, double-click on a point to delete it, click on a point to select it, press the red 'X' immediately above the graph with a point selected to clear all points but that one, and click on the two sine graph icons with brackets in the top right of the graph to zoom to all the points if you get lost. You can also click and drag the pink line to adjust time. It is not as precise as it is in the 'Lighting' menu under 'Terrain' up top, but can be useful to see the effects over time.

The most basic settings of the fog that you would want to change are 'Global density' and 'Height (top)'. They are self-explanatory, but have a significant impact on the presence of the haze in the level. Also, 'Color (bottom)' and its multiplier are useful to adjust. If you want to simulate fuzzy shadows, as if the sun was shining through a fog, scroll all the way to the bottom and set 'Shadow jittering' to some high value.

To create local fog volumes, find the 'FogVolume' entity under Entity, Render, the same place where the clouds were. Drop it somewhere and give it a size. By default the volume is an ellipsoid; to make it a rectangular volume, change 'VolumeType' to 1. Adjust 'FallOffScale' and 'GlobalDensity' to change the thickness and softness at the edges to produce the desired look. I have found that setting 'HDRDynamic' to 1 makes it appear more natural in response to the sun.

In the next tutorial we will learn terrain editing basics.
8. Editing the Terrain

In this tutorial we will explore the basics of terrain painting and layer painting. There are three main areas of terrain editing: 'Modify', 'Holes', and 'Layer Painter' (and the separate option of raising or removing the ocean). These options are under the half pipe tab in the RollupBar.

The first, Modify, is the basic set of brushes that can raise, lower, smooth, and flatten terrain. Most of the options are self-explanatory. Play with the settings to understand the values relative to the unit scale. To switch from raising to lowering, hold Ctrl. To set a height for 'Flatten', use 'Pick Height' at a spot where you want the height to be. Be careful with the 'Enable Noise' setting: it may overpower the Rise/Lower switch, making it appear that you can only raise terrain.

To add an ocean or change the height of the water, open the Terrain Editor under 'Terrain' in the top bar, and then 'Edit Terrain'. In the new window, under 'Modify', take note of two options near the top: 'Remove Ocean' and 'Set Ocean Height'. These can come in handy if you want to add an ocean or raise it, or remove the ocean altogether.

The next option is Holes. This tool allows you to cut out holes in the terrain. This is useful when your project cuts into a hill or cliff, or the ground itself. Adjust the brush to cut out larger holes, and switch to 'Remove Hole' when you want to restore the terrain.

The last option is Layer Painter. This option allows you to paint different material layers on the terrain using any custom color and brightness. To set the material layers themselves, click on 'Texture' on the left, between Terrain and Time of Day. This will bring up the Terrain Texture Layers window.

In this window, all of the template levels already have layers set up, but we will go through the steps of loading a material and texture. Both are necessary because at long distances the terrain layer uses the layer texture and at close distances it uses the material, to maximize efficiency and lessen load on the renderer.

Click on 'Add layer' on the left. A new layer will appear, appearing as a grey and white checkerboard pattern. You can double-click on 'NewLayer' to rename it. To set a layer texture, click on 'Change Layer Texture' in the lower left. In the new window, navigate to GameSDK/textures/terrain (it should also be a default favorite at the top of the window). Here pick a default texture. Ideally the layer texture should be a detailed, almost satellite view, approximation of the terrain as a whole for that material, but in general these rough default textures will work.

Now, click on the blue 'Materials/material_terrain_default'. In the Material Editor, navigate to materials>terrain and pick something appropriate. With that selected, go back to Terrain Texture Layers and click on 'Assign Material'. Now the layer is ready to be painted.

In the 'Layer Painter' mode, try painting. You will note that by default the color does not really match what you want, so play with the color picker and the 'Brightness' value until the layer looks like what you want. Examine it at different times of the day. Also, adjust the 'Altitude' and 'Slope(deg.)' settings to limit painting only to terrain patches that meet those conditions. When you settle on a color for a layer (you can change layers in the lowest field), click on 'Save Layer' to assign that Color and Brightness combination to that layer by default.

There is actually another way to set color, a way to bypass having to paint the layer (though it only changes color, so the material still has to be painted somehow, i.e. painted first as some color and then this other method is applied), but it is troublesome and is not necessary for now.

In the next tutorial we will explore interactive elements.
Overview of the Interaction Tutorials

While the first eight tutorials mostly focused on issues of visualization, the last four focused on interaction and coaxing presence out of THE GRID. CryEngine offers a wide range of interactive capabilities, though designing from an embodied viewpoint was not quite possible. The rudimentary design tools could only be used in the editor mode, though an AI simulation could be run that would interact with added volumes, or with volumes that were moved or stretched. Within the game mode, agents could be made to interact with the user if the user moved close to them. Each time the user enters the game mode, the simulation is different and unique. This presented an interesting scenario, because it meant that the model was not experienced the same way from one use to the next. Even the same person on THE GRID on two different occasions would get a different experience the second time. Lastly, the interaction extended to environmental effects, like wind and the sun. The sun could be made to move at an accelerated pace; this was the setup during the final review.
9. Interactive Elements

While CryEngine can have a myriad of interactive actions and events, due to the limitations of THE GRID they will not be explored here. Instead, we will only look at some very basic interactions: boids, parting grass, and water ripples.

Boids are simple agents that look like birds, insects, and small animals that move around naturally but run away when you get close to them. To place some boids, in the RollupBar under Entity open up the 'Boids' folder and place something from there anywhere in the viewport. That is all that is needed; the boid will become active by itself. In its options, there are only two things of great value, 'Count' and 'EnableFlocking'. They are more or less self-explanatory. Flocking lets groups of boids generated by one boid entity group together as a flock.

Another interactive element is parting grass. This is not very spatially interactive but is nevertheless a feature that adds to immersion. The grass models used by most of the templates do not do this (except for Island; you can see this effect there), but if you were to paint vegetation with one of the default CryEngine grass blade models, they would move away from you when you walked over them, as if you were pushing them aside.

The last interactive element that we will cover is water ripples. This effect happens automatically when you walk into either a river (like in the River template) or a body of water, except for the ocean.

Again, there are many more diverse interactive elements that can be made, like doors, elevators, wheeled objects, and so on, but learning those represents an extensive investment in the system and for the purposes of THE GRID will not be attempted here.

10. Publishing

In this tutorial we will learn about publishing - packaging the level so that it can be opened in the Launcher program (the gateway onto THE GRID), so anyone can explore your model.

The Launcher is located in Bin32 and Bin64 in the main CryEngine folder, depending on which fits your operating system. It is called 'GameSDK.exe' and exists separate from the main editor program.

The first and only thing that needs to happen in the level that contains your model is that you have to add a spawn point. This is the location where a user will appear when loading your level to explore your model. Locate the 'SpawnPoint' under 'Others' in the Entity browser in the RollupBar and drag it in to where you want a user to begin. Note that it looks like a little figure in a box. The box is very close to six feet in height and a meter square at the base. The direction the figure is facing is the direction the user will start in.

The only other step is an option in the file menu. Find 'Export to Engine' or hit 'Ctrl+E'. This will optimize the files for better loading in the Launcher. When you load the Launcher, select 'Levels' and find your level. Ignore the templates, as they will spawn you under the ground.

That is it! It is technically possible to strip the entire CryEngine installation into a portable format, saving only the files needed for a particular model and level, and then to set up the Launcher so that it immediately loads the model, but there is, at the time of writing, no simple way of doing this. A tool did exist, but it expired.
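Pulling together the paths from this tutorial and the first one, the pieces involved in publishing live in the following places, assuming the default C:\CryEngine install location used earlier (substitute Bin32 on a 32-bit system):

    C:\CryEngine\Bin64\Editor.exe     the editor, where levels are built
    C:\CryEngine\Bin64\GameSDK.exe    the Launcher, the gateway onto THE GRID
    C:\CryEngine\GameSDK\Levels\      levels, including the unzipped templates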
11. Advanced: The Designer Tool

The Designer tool is a way within CryEngine to create simple volumes that are fully colliding and can have a material applied to them. It may be useful for quickly blocking out an area. The Designer tool can be accessed from the RollupBar.

Before we start placing the entity, it is important to make sure the start point will be where we think it will be. CryEngine needs to be told to recognize when your mouse is over terrain or a brush, otherwise it will default to the world grid, which may be far below your terrain. That is done by a button on the top bar. There are two buttons: one with a little arrow running across a ditch and another with the arrow going over a ball in the ditch. The first is if you want to start on terrain and the second is if you want to start also on a brush. The second is usually the better option.

When you click on 'Designer' and mouse over the viewport, you will note that your mouse cursor changes. From here you can immediately click and drag to create the base of a shape, ending when you stop pressing your mouse, and then move your mouse to set the height. Under 'Create Brush Parameters', the tool can be set to create a small variety of shapes: Box, Cone, Sphere, Cylinder, Plane, and a custom extruded Shape. For the Cone, Sphere, and Cylinder there is a setting for the number of sides. For example, if you set it to Cylinder, set the Num Sides to '6', and drag a shape in the viewport, you will get something like this:

In the next section, under 'Enter Designer', is a button labeled 'Edit Mode'. After clicking this, you will be taken to a geometry editing mode that acts on any Designer shape you have. Here are various controls; really the only useful one you would need is Extrude, which functions similarly to ExtrudeSrf in Rhinoceros - you select a surface and move your mouse to extrude. Note that the direction of the extrusion matters, as extrusions into a shape will make backward-facing sides, which will be invisible from one side and will not be very realistic. You can experiment with the other tools to see if one does something useful for you.

The other, more useful, function of the Edit Mode is that you can select vertices and surfaces in the viewport and, using the movement and rotation controls on the top bar, move and rotate them to fine tune a shape. Pressing 'Esc' will take you out of Edit Mode. There are some tools under the 'Extra Tools' section, but they seem to have problems. In general, however, you will do your design modeling in a separate program.
12. aA d v aA n Cc e d : bB aA s iI cC Ff l o w gG r aA Pp h
If you are in one of the templates, you can expand Entities > Defaults > Cloud3 in the file structure on the left. Clicking on ‘Cloud3’ will load that Flow Graph into the
The Flow Graph is a means within CryEngine of center pane. controlling actions, actors, and events. It is very similar in style to Grasshopper, if you are familiar with that, or any other visual coding environment. The Flow Graph is actually already being used in each of the templates to remove the HUD and the player character’s weapon. To open the Flow Graph, click on the ‘FG’ button on the left. This will open the default Flow Graph window:
Here you will see four components which are connected: Game:Start on the left, and Inventory:ItemRemoveAll and two of Debug:ExecuteString. The first component is the event of the game starting and will execute anything connected to its output at the start of a simulation. It leads to all three of the other components. The second component empties the character's inventory, removing his gun, as he always has a gun by default. The last two components run two console commands: hud_hide 1 and godmode 1. The first command hides the HUD and the second command makes the character impervious to damage caused by large falls (this is to help testing, should you fall out of a window or something).

You can click and drag each component to reposition them. To connect one component to another, click and drag from an arrow on one side of a component to an arrow on the other side of another component. To break a connection, click and drag the end of a connection (with the arrowhead) to an empty spot and let go. To add a component, right-click in the field and select 'Add Node'. This will produce a huge list of actions, but the basic operation is simple. While the Flow Graph is very powerful within CryEngine, learning its ins and outs would take a very long time, and for the purposes of THE GRID there are only two uses that are useful to know.
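Since the Flow Graph is essentially a visual program, the behavior of the default template graph just described can be sketched in ordinary code. The following is a minimal, hypothetical Python model; none of these class or function names are CryEngine's API.

```python
# A toy model of the default template Flow Graph: one event node
# (Game:Start) whose output triggers three action nodes. Names are
# illustrative only -- this is not CryEngine code.

class Node:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action          # callable run when the node fires
        self.outputs = []             # downstream nodes

    def connect(self, other):
        self.outputs.append(other)

    def fire(self):
        if self.action:
            self.action()
        for node in self.outputs:
            node.fire()

def execute_string(command):
    # Stand-in for Debug:ExecuteString running a console command.
    return lambda: print("console>", command)

start = Node("Game:Start")
start.connect(Node("Inventory:ItemRemoveAll", lambda: print("inventory emptied")))
start.connect(Node("Debug:ExecuteString", execute_string("hud_hide 1")))
start.connect(Node("Debug:ExecuteString", execute_string("godmode 1")))

start.fire()   # simulates the game starting
```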
The first use is controlling the rate of the sun and time-based changes as set in the Time of Day window. Navigate to 'Game' and then click on 'Start'. That will add a Game:Start node where you clicked. This is always needed if you want something to happen at the start of the game, or to be always true. Now, add a Time:TimeofDay component. Connect the output of Game:Start to SetSpeed of Time:TimeofDay. This means that once the game starts, the speed control of that second component will be activated. That second component is used to set the time of day and the speed of the day. To change the speed, either double-click on 'Speed=1' or click on the component and on the right change the number next to Speed. That is it for this Flow Graph; once you start a simulation, the sun will move faster or slower depending on how you set it. You can also use this component to set the time of day to a specific value (just also set the output of Game:Start to SetTime in this component).
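To illustrate what the Speed value does conceptually, here is a small sketch that advances a simulated clock by a multiplier each frame. The exact units and base rate CryEngine uses are not documented here, so treat the numbers as assumptions, not the engine's real behavior.

```python
# Hypothetical illustration of a Time of Day 'Speed' multiplier:
# each real second advances the simulated clock Speed times faster.
def advance_time_of_day(hour, real_seconds, speed=1.0,
                        base_hours_per_second=1.0 / 150.0):
    # base rate is an assumed placeholder, not CryEngine's actual value
    return (hour + real_seconds * speed * base_hours_per_second) % 24.0

hour = 6.0                                   # start at 06:00
hour = advance_time_of_day(hour, real_seconds=60, speed=10.0)
print(f"{hour:.2f}h")                        # the sun has visibly moved
```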
The second use is a basic AI setup. The CFA template has more advanced AI using the Flow Graph, but explaining that would be too complex for these tutorials. You are welcome, however, to examine the Flow Graphs in that template. A basic AI setup involves adding a navigation area where AI will operate, adding a character and a location for him to move to within that area, adding both as components in the Flow Graph, emptying his inventory so he is less violent (though he will still pretend that he is carrying a gun) and pacifying him, and telling him to move to that location.

The template Flow Graphs are attached to a cloud in each template, but for this setup we will be more general and use a FlowgraphEntity, which is just a container for the AI Flow Graph that we will make. In general, Flow Graphs need to be attached to something in a level for them to save with the level. So, from the Entity section of the RollupBar, drag a 'Human' entity under AI > Characters and a 'FlowgraphEntity' under Default into the viewport. Also, drag a 'TagPoint' from the 'AI' section. Still in that section, click on 'NavigationArea' and click in the viewport to draw a shape. Make sure helpers are turned on to see it (Shift+Space). Select the NavigationArea and in its RollupBar options check 'MediumSizedCharacters'. This is important, as it will tell CryEngine to build a navigation mesh for people-sized AI (as opposed to vehicles).

Now, click on the FlowgraphEntity and scroll in the RollupBar until you reach the Flow Graph section. There, click on 'Create'. In the new window, hit 'New...' and type in a name for a group. This name does not matter as long as you can recognize it. Hit 'OK'. This will bring up the Flow Graph window. You will see on the left that there is a new entry under Entities > [your group name]. Now go back to the viewport. Click on your TagPoint, and in the Flow Graph field right-click and select 'Add Selected Entity'. Also right-click and add the AI:GoTo and Game:Start nodes. In the viewport, select the Human, and in the Flow Graph field right-click on 'Choose Entity' in the first node and select 'Assign selected entity'. The only connections you have to make are the output of Game:Start to Sync of AI:GoTo and the pos of entity:TagPoint to the pos of AI:GoTo.

That's it for this setup. To test it within the viewport you can click on 'AI/Physics' to run a simulation of the AI within the viewport, or test it as usual. The AI will operate as expected if you test in the viewport, but if you test in game the character will react in a hostile manner towards you and try to shoot you. This is not very useful for THE GRID, so we have to placate him. The way to do that is to change his faction (factions are groups of AI that react to each other in set ways) and empty his inventory (take away his weapon). Click on the Human and in the RollupBar scroll down until you reach the 'Entity Properties' section. There are only two things to change here: EquipmentPack and Faction. Click twice on 'Grunts' and set the Faction to 'Civilians', as below. Now, click on 'Player_Default' right above it and then on the '...' on its right. In the new window, the only thing you need to change is the top left dropdown, where you need to set it to 'Empty'. Then click 'OK'. You will need to do this for every new Human you add, or just copy them as needed.

That is it for some basic AI. You can try experimenting with moving the TagPoint and having the characters try to follow it, or create spatial challenges for the characters to attempt to maneuver.
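The same toy-model idea from the earlier sketch extends to this AI graph: Game:Start's output goes to the Sync input of AI:GoTo, and the TagPoint's position feeds its pos input. Again, a hypothetical Python sketch with invented names, not engine code.

```python
# Toy model of the basic AI graph: Game:Start -> AI:GoTo (Sync),
# entity:TagPoint.pos -> AI:GoTo.pos. Illustrative names only.

tag_point = {"pos": (12.0, 4.0, 0.0)}         # the TagPoint in the level

class AIGoTo:
    def __init__(self, human, pos_source):
        self.human = human
        self.pos_source = pos_source          # where the pos input reads from

    def sync(self):                           # fired by Game:Start's output
        target = self.pos_source["pos"]
        print(f"{self.human} walks to {target}")

goto = AIGoTo("Human", tag_point)
goto.sync()   # on game start, the character heads for the TagPoint
```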
The Templates

Several basic templates were also developed for work on THE GRID and were referenced in the tutorials. The idea was that a user learning THE GRID would try their models in the templates first before figuring out how to create an environment from scratch. The templates, of course, could also themselves serve as usable environments; however, a full project would have a real-world context that would need to be accounted for.

The five templates are: Clean Slate, Island, River, Road, and Trees and Hills. Each one focuses on one basic condition that a project may fall into. They use a variety of CryEngine features and are bare enough to inspire modification or expansion. These templates start with hints of experience that can grow with the user's interaction, and then direct the user's imagination.

Clean Slate covers any project that does not, or does not yet, have a context. Its basic features are a lack of features and clouds in the sky. The ground is perfectly flat for several kilometers, covered only in sparse grass. Island places the project on an island in the ocean. It is covered in trees that react to wind. The island itself is a mixture of flat and hilly areas, with the trees changing density throughout. River is a modification of Clean Slate that adds a moving river, trees, and haze. The wind affects the trees and the water, and objects in the water move with the current. The haze adds an atmospheric effect and reacts with the sun. Road is a simpler modification of Clean Slate. It adds a long and winding road with two rusty cars by the side. The road is a tool in CryEngine that can be drawn in any shape the user wants with any texture, and it sticks to the terrain. Lastly, Trees and Hills is a hilly plain with trees. Much like Island, it features a combination of hills and flat areas. It heavily features trees and their interaction with the sun and wind.
Being on THE GRID

Difficulties of the Tool
The fundamental and irrevocable problem with designing a tool is finding the user for the tool. It is impossible for the creator of a tool to design it to his or her own metrics and still have it register with other users. It is not the same as designing a building: in a building, subjective decisions are balanced against a wealth of experience in what works and what does not. With a tool, one must remain objective to reach the greatest possible audience and user base, while allowing the tool a certain amount of flexibility so that it can be diversified and built upon. With THE GRID, the visualization software will only be used in the way the users see fit for whatever projects they apply it to. The only way to understand the potential use is to have users actually use THE GRID and reflect on their usage. That way the full meta-pipeline of creator-tool-user-creator-tool becomes visible, and the even greater thesis framework becomes apparent. For the determination of the argument depends on the identification of the words and the structure binding them together. THE GRID is the
Fig. 6.1 Invitation to get on THE GRID.
words, and the structure is the way it is used and the way its use is analyzed. [][][][][][][][][][]
As the users interacted with THE GRID, I compiled and documented their interactions. The aim was to determine whether a set of users with varied experience in digital visualization could gain any benefit from using the tools of THE GRID, and whether an impact on the greater architectural practice could be predicted. If the users find that THE GRID is only another tool and do not understand how it can supplant static renders as an experiential prototype, that will tell me that THE GRID is not yet ready to be used as a design or as a presentation tool. If they do find that THE GRID adds to their design process, that will tell me that THE GRID is an element of the design process that is missing and that can positively impact the communication that architects and spatial designers can have with their clients. Of course, both paths are subjective assessments that I will make as judgments on my thesis work.

Fig. 6.2 THE GRID is developed only through the addition and accumulation of many separate experiences.
Spring 2013 Project

While the users were to go through the design challenges, I planned to perform a parallel series of developments using THE GRID: revisiting my last complete studio project, from the Systems Integration studio, to see if a different design would emerge because of the altered viewpoint. That project was a large office building in the Strip District. This exploration would have involved resurrecting the project files from my archives. I would have used renders and analysis done during the studio as a control condition. Comparing my use of THE GRID on this project by these means would have allowed me to see the impact of THE GRID. One caveat is that THE GRID is inherently real time, so it would have been difficult to compare a moving experience to a static one. However, as the design challenges were developed, this separate exploration was dropped in favor of acquiring external feedback on THE GRID. Only some Octane renders were done.
Fig. 6.3 Octane renders of the project. At the time of the studio, Vray and Rhinoceros 4 were not able to render any single image because the geometric detail was too high. Each of these renders took under two minutes.
The Design Challenges

Each of the users performed a series of short design challenges, using the developed tutorials as a guide to THE GRID. Each design challenge used a part of THE GRID and related it to a style of rendering that was previously only done via a Rhinoceros-Vray-Photoshop/Illustrator pipeline. In actuality, the users did not follow the challenges to the letter and used THE GRID in a more flexible manner. After performing each design challenge, each user reflected on their experience in an online questionnaire. Each of the challenges is reproduced on the facing page.

The challenges were broken down into three key experiential aspects: the vantage point, temporality, and activity, respectively titled MOVE, SHIFT, and IMPEL, with the titles increasingly focusing on a kind of movement that is implied in THE GRID's experience. Each challenge was further broken down into 'what', 'why' and 'how' sections to facilitate understanding and learning.
Fig. 6.4 Diagrams showing simplified potential results of the challenges. The top shows how a user can place models into a context and navigate around them. The middle shows the same model with seasons and time changes. The bottom shows users interacting with the model.
The challenges were available as printouts during testing for the students to peruse.
User Interactions

The testers were provided with a relatively detailed model of the CFA studio space to perform their challenges and explorations in. The model was complete with tables, chairs, computer banks, trash cans, and wood trim. It was bounded on all sides: the doors were flat, blocked regions and the windows were impenetrable. Since the users accessed the model directly from CryEngine, they could perform some of the actions the tutorials detailed, such as controlling the sun and lighting and placing objects in the space.

Fig. 6.5 Analyzing the recordings involved retracing the steps of the users, taking into account time and the duration of events, and generally an interplay of factors that are not easily extracted just by looking.

Another aspect was the hardware: the School of Architecture provided a computer station for use of THE GRID. This made it much easier to get students to participate because they no longer had to provide a computer to use for THE GRID. It also leveled out the hardware component because everyone used the same machine. The machine itself, however, was not very good, but THE GRID was still able to run fairly well thanks to the flexibility of CryEngine.
Following are analyses of the user interactions.
Fig. 6.6 A comparison of the hardware the school provided, left, and my own desktop setup, right, both running THE GRID in the studio model. My desktop used the new graphics card that I purchased for this thesis, and overall it was a better set of computer hardware. The major advantage the school computer had was that it was in studio and thus accessible to the students, while my desktop would have had to be lugged over every time it was needed.
The first category of interaction centered around the user's cognition. The most important part about seeing on THE GRID is that it allows the user to gather data and gives the user visual cues by which they can respond to their interaction on THE GRID. This data gathering, cognition, extends beyond that normally provided by a static image. In a static image, the general position of the user is determined through hints: the size of a door, or the closeness of a wall. On THE GRID, the data gathering extends towards a displaced proximity, objects that are nearby but outside the view, solar and environmental adjustment, and collision in general.

The visual cues that the user sees allow the user to move and interact. They are the most basic clues that THE GRID provides: they connect one viewpoint to another and are needed to make the experience coherent. The closeness of an object or a distant feature becomes important when the user is able to move, but reacting to that cue informs the motion itself. Likewise, objects disappearing and reappearing give weight to one particular view over another. As things change, the totality of the experience is compared to any one position. The user learns to favor viewpoints.

Fig. 6.7 Examples of user interaction related to responding to visual cues.
Fig. 6.8 Frame strip of a continued experience on THE GRID. Here the user navigates through the space, but is sometimes impeded by objects and spatial features.
A deeper level of interaction comes when the user realizes they are in the space. With this realization comes a sense of responsibility for the user's motion and actions. A sense of the height, the thickness, and the mass of the user sets in. Also, while in motion, the user realizes their momentum and recognizes that they will take up the space in front of them when they move into it. This becomes extended into a knowledge of the space where the user falls into learned paths of motion. Motion becomes emotion. In that way, experience becomes stored in the model. Since experience is always personal, this storage takes a piece of the user, and the user leaves that bit in the model. They become invested; their interaction becomes precious.

Fig. 6.9 Examples of user interaction related to attachment to the model.

This type of caring is both beneficial and detrimental. Beneficially, it lets the user get used to the model and find favorite areas, as well as determine what needs revision based more on the experiences they have had than the experience they have at that moment. Detrimentally, they lose sight of the design process and want to avoid drastic changes to the model.
Fig. 6.10 Frame strip of a continued experience on THE GRID. Here the user makes changes to the model which influence the position of the user, producing desirable and undesirable conditions.
To allow the experience and the design process to take full advantage of THE GRID, the creativity of the user has to be limited. With a CAD viewport, there is too much to be developed and thought of: the mind does not know what to focus on, or how to tell what needs creativity or imagination and what does not. With a static render, the experience is stifling and constrained: too much has been taken away from the control of the user, too much is already defined.
Fig. 6.11 Examples of user interaction related to imagination in the model.
THE GRID strikes a balance between the unresolved creativity of a CAD viewport and the constraint of a static render. The focus of the experience is neither on geometric and analytical qualities nor on visual photorealism. The experience is partly generated by the user and partly the result of choices made in the design, but it is from those choices that the experience develops. By selecting a narrow aspect of the project's experience, and then letting the user control the exposure, creativity is allowed to grow. [][][][][][][][][][]
The user experiences, even though limited in hardware, allowed me to truly see how users responded to THE GRID. From there, I was able to make my final argument.
Fig. 6.12 Frame strip of a continued experience on THE GRID. Here the user experiences limitations within the model, but those limitations allow for imagination through the amplification of the rest of the model.
Fig. 7.1 The immersion of THE GRID.
The Future of THE GRID

Part Three - Table of Contents
Table of Contents 144
Conclusion 145
The Final Review 145
Looking Beyond THE GRID 151
Appendix 155
Sources 155
Terms 164
Conclusion

The Final Review

The final review took place in the Miller Gallery in the Purnell Center for the Arts on the campus of Carnegie Mellon University. This was the first time, as far as I knew, that the School of Architecture had hosted the thesis presentations in a gallery at all, let alone a gallery on campus. The scale and the responsibility represented in the choice of venue meant that the presentation for THE GRID could be no simple plot-and-model setup. It was agreed between me and my advisors to frame the presentation as a series of layers.

Fig. 7.2 The initial mock-up of the presentation in the smaller 3rd floor space.

The layers were both the position of THE GRID in architectural practice and its position within the study of sight and representation. They were as follows, from out to in:

VISION – What it looks like. This is the outermost layer of perception, the initial acquisition of visual data by the user and their eyes. This is when the user answers "what it is" without yet understanding any meaning or purpose.

DISCERNMENT – What it means. Once the user recognizes what it is they are looking at, they can delve
Fig. 7.3 The updated design reflecting the new location.
deeper and understand the meaning of what they are looking at. This requires a certain amount of involvement on the part of the user; the designer invests in the design.

AGENCY – Influence and the imagination of the designer. At this stage the user gains control of the underlying forces which shape the meaning of the visible. This is the unpredictable control imparted upon the design by the designer.

PRESENCE – The designer in the design, his or her projection of will. This is the thinking and the internal motion of the designer as it exists and persists in the design. A user can only reach this level if the design is fully realized and fully draws the user in.

Those were the external layers; each one also corresponded to an internal layer, in respective sequence: COGNITION, EMOTION, IMAGINATION, and DASEIN. The three outermost layers were on plots, while the PRESENCE/DASEIN layer was THE GRID itself: an interactive station that anyone could use.

The plots were mounted on boards that I constructed prior to the presentation. The internal layers were all on one large board, while the external counterparts were on separate overlapping boards, forcing the user to change their position to experience them all. They also had bumpouts for each diagram: cardboard extrusions that enhanced their three-dimensionality.

Fig. 7.4 Photos of the development process. The top two on the left show the initial assembly in the CFA building and the Miller Gallery, the top right photo shows two boards assembled in the Miller Gallery, and the bottom photo shows all of the boards assembled but without the plots or bumpouts, or the projection and computer.

The presentation days were Friday, April 25th, when there was an opening reception during which the public and other students could view all of the projects, and Saturday and Sunday the 26th and 27th, during which each thesis student got an hour to present. I went on Sunday.

Fig. 7.5 The presentation setup on Friday, during the opening reception.
Fig. 7.7 QR code for a short video showing people interacting with THE GRID on Friday.
The presentation changed between Friday and the weekend. On Friday, there was a monitor set up playing videos that demonstrated the power of THE GRID. On the wall next to the external plots was a plot summarizing the point of THE GRID. For the weekend, the monitor went away and the summary plot replaced it. Due to the layout of the space, during the opening reception the monitor drew people in as they entered the gallery, while over the weekend the core argument of the thesis was more important, so the summary plot became the first thing to draw people in. Also, a camera was set up in the corner to record various interactions of users with THE GRID. It was also used to record THE GRID breaking down on Sunday afternoon. [][][][][][][][][][]

Fig. 7.6 Details of the presentation setup. The left shows the interactive station, with mouse and keyboard and instructions for use. Behind it is my computer, using the new graphics card I purchased for use with the thesis, with speakers. On the right is a poster summarizing the point of the thesis, which moved places between Friday and the weekend.
Fig. 7.8 QR code for a video comparing Vray to THE GRID that was shown on Friday.
The main points from the review were:

• EMBODIMENT - the role of the body in forming ideas. This approach was brought up as an interesting angle on my thesis; due to the technical aspects of the work I never had the chance to explore it, though the ideas of presence and dasein, and immersion in general, approach the concept of embodiment in a roundabout manner. It was always important during work on my thesis that light be shone on the body in the render: the designer is not just a disembodied eyeball floating around, rotating the model or design or building indiscriminately and omnipotently. It was poignant when, discussing my thesis with a colleague, the lack of Rhinoceros's orbit rotation was brought up. On THE GRID, the user moves, not the model, and this kind of motion, where your position matters, is natural and real. An orbit mode is unnatural and even impossible.

• GAMIST - the idea of being gamist was brought up. This was an odd point that I thought had already been cleared up, but a new audience at the review brought new viewpoints. Gamist in this sense means being biased against video games or their technology without truly comprehending the possibilities emerging within the video game engines available today. Efforts like THE GRID, which place video game engines in the context and practical use of professional practice, should influence gamist attitudes, teaching by showing that this technology has power and potential.

Fig. 7.9 Photos of the presentation boards. On top is the left board, which was mental/internal and thus flat. On bottom were the three physical/external boards, which were 3D and changed in space. The two left ones could only be seen if the observer moved.

• REPRESENTATION AND THE THING - this point pierces to the core of THE GRID. There is a fine line between the way something is seen and the thing itself. What THE GRID does is give meaning to digital models, but at every step it knows that the digital model is itself a representation: a representation of a real building not yet built. Therefore there is a responsibility on the user to realize that they are designing not with a thing but with a representation, the digital model. Yet with user interactions on THE GRID having an impact, like the ability to knock over a trash can or the inability to walk through walls or chairs, users could always remain present while being distant, and design the thing directly.

Fig. 7.10 Photos of the presentation setup over the weekend. Note how the TV was removed and the summary plot was moved in its place. THE GRID was still fully interactive.

Fig. 7.11 Photo of a sunrise on THE GRID. The lights in the space are turned down; THE GRID takes center stage.

• BEHAVIOR SETTING - the last major point was that, by using a model of the studio space, I was creating a model of behavior, as there were people in the audience who recognized the space and people who did not. There was a behavioral connection with the first group that made them feel responsible, and this responsibility could thus be measured, and algorithmic behavior modeled in response.
Fig. 7.12 QR code for a time-lapse of the presentation being broken down.
Fig. 7.13 Photos of the rest of the presentation. There were thirteen other students, with projects and ideas ranging across the spectrum of architectural practice and theory.
Looking Beyond THE GRID

The final review was generally very positive, and I was surprised by how readily people understood THE GRID once they took the step to inquire. However, the review only presented a reduced version of THE GRID, because it was only what I could, as a student of architecture at Carnegie Mellon University, achieve with the means and funding available to me. The true GRID, the GRID when it is no longer a GRID, would need to go beyond the realm of possibility in three categories: the frame, the interface, and the process.

MAXIMIZATION OF THE DIGITAL FRAME - One of the fundamental problems and inefficiencies with THE GRID as I worked with and presented it was that it was ultimately a digital display composed of discrete pixels that could be counted, amounting to roughly a 1.3 megapixel image. An average photo has between 4 and 38 megapixels, and naturally a physical painting or drawing has practically infinite resolution, though one can argue the resolution depends on the thickness of the brush fibers or pencil tip.

Fig. 7.14 Pixels of THE GRID.

The magnitude of the discrepancy - the average photo is more than twice as dense resolution-wise - shows with THE GRID when the user is even a few feet away from the screen, which, from the interactive station, took up about half of the user's view: the image becomes highly pixelated and breaks its illusion. A larger digital monitor would have the same problem, since the user would constantly try to get close to the screen, or otherwise the screen would attempt to get close to the user; the more of the view THE GRID takes up, the more immersive it becomes. Pushing the resolution to a level where the pixels never become discernible would be a challenge: the eye has about 576 megapixels of resolution, with a small focus point where it sees about 7 megapixels. Very high resolutions are possible with CryEngine, as it is possible to tell the engine to render at over 26 megapixels. However, with more pixels comes more load on processing power, and the more that needs to be rendered, the stronger a graphics card is required for a real time view.
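The pixel counts above are simple arithmetic; the sketch below works them out. The 1280 x 1024 station resolution is an assumption consistent with the ~1.3 megapixel figure, and the 8K frame matches the resolution named in Fig. 7.15.

```python
# Megapixel arithmetic for the resolutions discussed above.
def megapixels(width, height):
    return width * height / 1_000_000

station = megapixels(1280, 1024)   # assumed station display: ~1.31 MP
eight_k = megapixels(7680, 4320)   # the 8K CryEngine shot: ~33.2 MP
print(f"station: {station:.2f} MP, 8K render: {eight_k:.1f} MP")
print(f"8K is {eight_k / station:.0f}x the station's pixel count")
```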
Fig. 7.15 Section of a 7680 x 4320 output resolution shot from CryEngine.
EMERGING INTERFACES - A key direction THE GRID did not take was advanced interfaces: either motion tracking or tactile joysticks, both with a virtual reality headset. THE GRID as it was presented relied on a keyboard and mouse, both because they were the only interfaces available and because the typical design firm would be sure to have them. The true GRID, however, would involve the user being unbound from a computer station and would connect the motion of their body with the motion of their avatar on THE GRID. A motion of an arm in real life would move their arm on THE GRID; a pulling motion would create form on THE GRID, a pointing motion would draw curves, and so on.

Fig. 7.16 Snapshot of a video Untold Games released of their in-development game Loading Human, modeled on Unreal Engine 4. This game is designed from the ground up to be used with a VR headset and motion joysticks. While the technology still limits the user to a seated position, it is not a great leap of thought to imagine it tracking full-body motion in a large open space, such as an aircraft hangar or an open field.

While the body is being directly connected to the design control, the eyes are presented with a fully immersive 3D display within a virtual reality headset. This headset would have a high-resolution image over each eye, simulating binocular vision. If the system is haptic enough, the user would eliminate the technological middleman and experience the design with true embodiment: the body and the eyes would see the design directly, as if it were real life. Such technology, like the very high resolution, is currently possible but not yet reachable. Headsets are in development by Oculus VR and Sony that can seamlessly integrate with other interfaces, as long as the renderer can output two images a few inches apart. It is awareness of technology just like this that THE GRID as a thesis attempted to generate among rising architects.
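The phrase "two images a few inches apart" is the whole trick of stereo rendering. A minimal sketch follows, assuming a simple camera model and a typical interpupillary distance of about 64 mm; both are assumptions, not specifics from any headset SDK.

```python
# Stereo rendering in miniature: render the scene twice from two
# camera positions offset horizontally by the interpupillary distance.
IPD = 0.064  # meters; a common average, assumed here

def eye_positions(camera_pos, right_axis):
    cx, cy, cz = camera_pos
    rx, ry, rz = right_axis                    # unit vector pointing right
    half = IPD / 2.0
    left = (cx - rx * half, cy - ry * half, cz - rz * half)
    right = (cx + rx * half, cy + ry * half, cz + rz * half)
    return left, right

left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
# A headset-ready renderer would draw the frame once from each position
# and present the pair side by side, one image per eye.
print(left_eye, right_eye)
```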
RENDERING PRECEDES DESIGN - The last and farthest-reaching aspect of THE GRID was truly applying it to design. What happens when an architect and a client use THE GRID from beginning to end, becoming used to a photorealistic representation (is it a representation at that point, if it is all they see?) and then experiencing the thing in real life? Does it become disappointing, because it is no longer perfect, or because it ages, or because habitation and cognitive offloading give meaning to it that could not be transferred on THE GRID?

This would represent a fundamental shift in the design process: the rendering would now precede design, as knowing what it looks like would be more important than knowing what it is. That means the design would always have to be seen before it is thought of, a process that is even more creative than that of a dream, where the sight and the thought are simultaneous.

Fig. 7.17 THE GRID would completely upset architect-client-contractor dynamics. Each sector would have their own idea, their own developed experience, of the reality represented by THE GRID.

With the need for an image, or imagery in general, to exist first, the creative force would accelerate unimaginably. Combined with the previous categories, design would almost literally spring from the fingertips of the designer, fully and imprecisely resolved before it actually exists. The real would cease to exist. Time will only tell what will remain...or will time itself be gone? The future is bright, and the dawn comes. See you beyond THE GRID.
Appendix

Sources

BOOKS AND RESEARCH REPORTS

Darley, Andrew. Visual Digital Culture: Surface Play and Spectacle in New Media Genres. London; New York: Routledge, 2000.

Dieter Hildebrandt, Jan Klimke, Benjamin Hagedorn, and Jürgen Döllner. 2011. Service-oriented interactive 3D visualization of massive 3D city models on thin clients. In Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications (COM.Geo '11). ACM, New York, NY, USA, Article 6, 1 page. DOI=10.1145/1999320.1999326 http://doi.acm.org/10.1145/1999320.1999326

Emiliyan Petkov. 2010. One approach for creation of images and video for a multiview autostereoscopic 3D display. In Proceedings of the 11th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing on International Conference on Computer Systems and Technologies (CompSysTech '10), Boris Rachev and Angel Smrikarov (Eds.). ACM, New York, NY, USA, 317-322. DOI=10.1145/1839379.1839435 http://doi.acm.org/10.1145/1839379.1839435

Heiko Friedrich, Johannes Günther, Andreas Dietrich, Michael Scherbaum, Hans-Peter Seidel, and Philipp Slusallek. 2006. Exploring the use of ray tracing for future games. In Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames (Sandbox '06). ACM, New York, NY, USA, 41-50. DOI=10.1145/1183316.1183323 http://doi.acm.org/10.1145/1183316.1183323

Jongeun Cha, Mohamad Eid, and Abdulmotaleb El Saddik. 2009. Touchable 3D video system. ACM Trans. Multimedia Comput. Commun. Appl. 5, 4, Article 29 (November 2009), 25 pages. DOI=10.1145/1596990.1596993 http://doi.acm.org/10.1145/1596990.1596993

Lewis, Rick. Generating Three-dimensional Building Models From Two-dimensional Architectural Plans. Berkeley, Calif.: University of California, Berkeley, Computer Science Division, 1996.
Luc Leblanc, Jocelyn Houle, and Pierre Poulin. 2011. Component-based modeling of complete buildings. In Proceedings of Graphics Interface 2011 (GI '11). Canadian Human-Computer Communications Society, School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, 87-94.

Mitrovic, Branko. Visuality for Architects: Architectural Creativity and Modern Theories of Perception and Imagination. University of Virginia Press, 2013.

Rhyne, Theresa-Marie. "Computer Games and Scientific Visualization." Association for Computing Machinery. Communications of the ACM 45.7 (2002): 40-4. ProQuest. Web. 24 Sep. 2013.

Robina E. Hetherington and John P. Scott. 2004. Adding a fourth dimension to three dimensional virtual spaces. In Proceedings of the ninth international conference on 3D Web technology (Web3D '04). ACM, New York, NY, USA, 163-172. DOI=10.1145/985040.985064 http://doi.acm.org/10.1145/985040.985064

YOUTUBE AND VIMEO

alvaroignc. (2010, March 17). Zumthor's Thermae of Stone in Source SDK part 5: Props. [Video File]. Retrieved from http://www.youtube.com/watch?v=hh4nGEAKm4s - Zumthor's Therme Vals rendered in Source.

Archimmersion. (2010, June 25). UDK - Family House in Realtime 3D [Video File]. Retrieved from http://www.youtube.com/watch?v=AV802r_Pr0k&feature=youtu.be - More UDK; again, note the cheap quality.

Autodesk. (2011, April 12). Autodesk Showcase 2012 for Architectural, Construction, and Engineering Users - YouTube [Video File]. Retrieved from http://www.youtube.com/watch?v=ioP0CVRJvUI#t=17 - This is for reference: a very bad implementation of the subject of my thesis, as it provides no presence, no true interactivity, and is not at all designed for the user.
bigkif. (2007, November 17). Ivan Sutherland: Sketchpad Demo (1/2) [Video file]. Retrieved from http://www.youtube.com/watch?v=USyoT_Ha_bA

bigkif. (2007, November 17). Ivan Sutherland: Sketchpad Demo (2/2) [Video file]. Retrieved from http://www.youtube.com/watch?v=BKM3CmRqK2o - Ivan Sutherland's 1963 Sketchpad thesis, archival footage.

EliteGamer. (2012, November 28). Luminous Engine - Live Edit Tech Demo "Agni's Philosophy" [Video file]. Retrieved from http://www.youtube.com/watch?v=eHSGBh1z474 - Luminous Engine tech demo.

GameNewsOfficial. (2013, March 29). Metal Gear Solid 5 Fox Engine Tech Demo [Video file]. Retrieved from http://www.youtube.com/watch?v=_18nXt_WMF4 - Fox Engine tech demo.

gametrailers. (2012, June 7). Unreal Engine 4 - GT.TV Exclusive Development Walkthrough [Video file]. Retrieved from http://www.youtube.com/watch?v=MOvfn1p92_8 - Unreal 4 tech demo.

Hammack, David. [hammack710]. (2013, January 3). Unity 3D Simulation Project [Video File]. Retrieved from https://www.youtube.com/watch?v=EEA5_he3pRk - A demo of Unity3D; looks very cheap and old.

HD, RajmanGaming. (2013, August 21). CryEngine Next Gen (PS4/Xbox One) Tech Demo [1080p] TRUE-HD QUALITY [Video file]. Retrieved from http://www.youtube.com/watch?v=4qGK5lUyCwI - CryEngine demo reel.

Inc, Marketing Department Ideate. (2013, February 26). Autodesk Showcase 3D Visualization Software [Video file]. Retrieved from http://www.youtube.com/watch?v=IvBL2kX6CME - Autodesk Showcase video.

lxiguis. (2012, August 28). Real time Architectural Visualization - After Image Studios [Video File]. Retrieved from http://www.youtube.com/watch?v=HPtQyBDpatg&feature=youtu.be - UDK demonstration. It is not that great and a little old, but UDK is a capable engine.

Lapere, Samuel. [SuperGastrocnemius]. (2012, April 6). Real-time photorealistic GPU path tracing: Streets of Asia [Video File]. Retrieved from http://www.youtube.com/watch?v=gZlCWLbwC-0

Lapere, Samuel. [SuperGastrocnemius]. (2013, August 13). Real-time path tracing: 4968 dancing dudes on Stanford bunny [Video File]. Retrieved from http://www.youtube.com/watch?v=huvbQuQnlq8

Lapere, Samuel. [SuperGastrocnemius]. (2012, May 29). Real-time photorealistic GPU path tracing at 720p: street scene [Video File]. Retrieved from http://www.youtube.com/watch?v=evfXAUm8D6k - GPU path trace method demonstrations. This is a highly realistic rendering method, save for the grainy appearance.
Lumion3D. (2010, November 1). Architectural visualization: Lumion 3D software is easy to use [Video file]. Retrieved from http://www.youtube.com/watch?v=uoLV8QIm02M - Demonstration of Lumion 3D.

Naing, Yan. [MegaMedia9]. (2013, May 31). Realtime 3D Architectural Visualization With Game Engines [Video file]. Retrieved from http://www.youtube.com/watch?v=uXzy3V3N2uw - CryEngine3 demonstration in a sandbox environment.

Skiz076. (2012, January 3). FallingWater in Realtime 3d (UDK) [Video File]. Retrieved from http://www.youtube.com/watch?v=QdF4rvw64rg - A model of Fallingwater in UDK.

spacexchannel. (2013, September 5). The Future of Design [Video File]. Retrieved from http://www.youtube.com/watch?v=xNqs_SzEBY#t=134 - Video showcasing tactile hardware interaction. This is the future, but we are not there yet.

Storus, Matt. (2011, February 9). Video Game Engine Architectural Visualization Test [Video File]. Retrieved from http://vimeo.com/19774547 - Another CryEngine3 demonstration.

T.V., Arocena. [arocenaTM]. (2011, February 17). Presenting Architecture through Video Game Engine [Video File]. Retrieved from http://www.youtube.com/watch?v=S8HUj85Cq1s - Demo by Max Arocena with CryEngine showing interactive lighting.

Timeshroom. (2013, July 30). Architectural Visualisation - Oculus Rift Demo [Video file]. Retrieved from http://www.youtube.com/watch?v=gaFZH8Z70vk - Oculus Rift demo showing the views provided by the headset. Note how they are slightly offset; this produces the illusion of 3D.

Visual, Real. [RealVisual3D]. (2012, October 23). iPad 4th Generation: Unity 3d Realtime Architectural Visualisation [Video file]. Retrieved from http://www.youtube.com/watch?v=n6eb4KB2k2U - iPad demonstration of Unity3D and how it is cross-platform.
ARTICLES

(2014, May 8). Answering the Unanswerable: What is the Resolution of the Human Eye?. Peta Pixel. Retrieved from http://petapixel.com/2014/03/12/answering-unanswerable-whats-resolution-human-eye/ - Discussion on the resolution of a human eye.

(2013, August 20). Arch Virtual releases architectural visualization application built with Unity3D game engine, including Oculus Rift compatibility. Arch Virtual. Retrieved from http://archvirtual.com/2013/08/20/arch-virtual-releases-architectural-visualization-application-built-with-unity3d-game-engine-including-oculus-rift-compatibility/ - Arch Virtual's interactive app.

(2013, August 20). Arch Virtual. Retrieved from http://www.archvirtual.com/Panoptic/2013-08-19-arch-virtual-panoptic.html - Premade realtime visualization demo by Arch Virtual. It is interactive within a web browser. This is a very good example of the subject of my thesis.

(2013, June 3). Arch Virtual. Retrieved from http://archvirtual.com/2013/06/03/tutorial-ebook-now-available-unity3d-and-architectural-visualization-1-week-preview-edition-discount/ - Arch Virtual's ebooklet on architectural visualization in Unity3D.

(2014, May 8). Crysis 8K resolution hack offers a peek at the next decade of gaming. Engadget. Retrieved from http://www.engadget.com/2014/05/06/crysis-8k-resolution-hack/ - Article about the CryEngine modification that allows it to render large resolution images.

Elkins, James. (2010, November 6). How Long Does it Take To Look at a Painting? Huffpost Arts & Culture. Retrieved from http://www.huffingtonpost.com/james-elkins/how-long-does-it-take-to-_b_779946.html - Article showing how Mona Lisa visitors spend 15 seconds looking at it.

(2014, May 8). How many megapixels do you need?. Connect. Retrieved from http://connect.dpreview.com/post/1313669123/how-many-megapixels - Discussion of the photo resolutions of various phones.

Hudson-Smith, Andrew. digital urban. Retrieved September 2, 2013, from http://www.digitalurban.org/ (deprecated page: http://www.digitalurban.blogspot.com/) - Blogging platform that publishes research about connecting digital modeling and the real world, with an emphasis on the profession of architecture.
(2014, May 8). Hyperrealistic virtual reality adventure Loading Human headed to Oculus Rift and Project Morpheus. Engadget. Retrieved from http://www.engadget.com/2014/05/07/loading-human-rift-morpheus/?utm_campaign=socialflow&utm_source=fb&utm_medium=fb - Article that mentions the game developed specifically for a VR headset, with footage.

Jobson, Christopher. (2013, September 22). Full Turn: 3D Light Sculptures Created from Rotating Flat Screen Monitors at High Speed. Colossal. Retrieved from http://www.thisiscolossal.com/2013/09/full-turn-light-sculpture/?src=footer - A project using alternate projection; this is useful because hardware exploration is part of my thesis, though here the technology is very artsy.

Kasperg. "Kaufmann House." The Whole Half-Life. 1/23/2006. Retrieved September 2, 2013, from http://twhl.info/vault.php?map=3657 - Website of the Fallingwater digital recreation. This establishes a kind of benchmark for the possibilities of the area.

Putt, K. Crysis 3 - Alternative. 2014. https://secure.flickr.com/photos/k_putt/13974809675/in/set-72157644191959442/. - Flickr image of CryEngine outputting to a very high resolution.

Russo, Luigi. Architectural Visualization. Unreal Engine. Retrieved September 3, 2013, from http://www.unrealengine.com/showcase/visualization/architectural_visualization_1/ - Website of a project done in UDK. This is in place to be licensed (educational use included).

simulation. (n.d.) Random House Kernerman Webster's College Dictionary. (2010). Retrieved October 20, 2013, from http://www.thefreedictionary.com/simulation - Definition of simulation.

Varney, Allen. "London in Oblivion." The Escapist. 7/8/2007. Retrieved September 2, 2013, from http://www.escapistmagazine.com/articles/view/issues/issue_109/1331-London-in-Oblivion - Article that mentions several attempts to visualize architectural work in video game engines. This could be a good springboard for collating past efforts in this area.

Vella, Matt. (2007, December 21). Unreal Architecture. Bloomberg Businessweek. Retrieved from http://www.businessweek.com/stories/2007-12-21/unreal-architecturebusinessweek-business-news-stock-market-and-financial-advice - Article detailing the use of UDK for architectural purposes.

Wikipedia contributors, "Architectural Animation," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/wiki/Architectural_animation (accessed November 29, 2013). - Wikipedia article on architectural animation.

(2014, May 8). What is the resolution of the human eye?. Science Blogs. Retrieved from http://scienceblogs.com/cognitivedaily/2006/10/22/what-is-the-resolution-of-the/ - Another discussion on the resolution of a human eye.
IMAGES

Act-3D. 19 April 2012. Lumion logo. [logo]. Retrieved from http://lumion3d.com/forum/general-discussion/lumion-logo/?action=dlattach;attach=8515 - Lumion logo.

alexglass. 11 October 2013. Ray Tracing vs Rasterized. [chart]. Retrieved from http://www.ign.com/boards/threads/generation-8-starts-with-brigade-not-x1-ps4.453427233/ - Chart of raster vs. ray tracing technologies.

Blender Foundation, The. n. d. Blender logo. [logo]. Retrieved from http://download.blender.org/institute/logos/blender-plain.png - Blender logo.

Chaos Group. n. d. Vray logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/fa/a/a1/Vray_logo.png - Vray logo.

CryEngine. n. d. CryEngine logo. [logo]. Retrieved from http://www.n3rdabl3.co.uk/wp-content/uploads/2013/08/logo_vertical_black.jpg - CryEngine logo.

Epic Games. n. d. UDK logo. [logo]. Retrieved from http://epicgames.com/files/technologies/udk-logo.png - UDK logo.

Euclideon. 22 November 2011. Euclideon Unlimited Detail. [screenshot]. Retrieved from http://media1.gameinformer.com/imagefeed/featured/gameinformer/infdetail/infpower610.jpg - Euclideon screenshot.

Fatahalian, Kayvon. n. d. Kayvon Fatahalian. [photo]. Retrieved from http://www.cs.cmu.edu/~kayvonf/ - Photo of Kayvon Fatahalian.

Fatahalian, Kayvon, et al. July 2013. Visualization graph. [graph]. Retrieved from http://graphics.cs.cmu.edu/projects/exhaustivecloth/ - Kayvon's exhaustive graph.

History Blog, The. n. d. Dome design. [drawing]. Retrieved from http://www.thehistoryblog.com/wp-content/uploads/2013/01/Dome-design.jpg - Brunelleschi's dome image.
IGX Pro.com. n.d. Mario 64. [screenshot]. Retrieved from http://www.igxpro.com/wp-content/uploads/2012/09/mario64.jpg - Mario 64, an old 3D video game.

Jean-Philippe Grimaldi, et al. n. d. LuxRender logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/commons/f/f5/Luxrender_logo_128px.png - LuxRender logo.

Konami. 27 March 2013. Title. [logo]. Retrieved from http://babysoftmurderhands.com/wp-content/uploads/2013/04/FOX-Engine-Kojima-Productions-GDC-2.jpg - Comparison of the Fox Engine to real life.

Mh. 10 March 2010. The Gates-Hillman Complex. [photo]. Retrieved from http://upload.wikimedia.org/wikipedia/commons/a/a6/CMU_Gates_Hillman_Complex.jpg - Photo of the Gates-Hillman Center.

n. d. Tom Cortina. [photo]. Retrieved from http://sigcse2014.sigcse.org/authors/ - Photo of Thomas Cortina.

Otoy, Inc. 22 November 2012. Octane Render logo. [logo]. Retrieved from http://en.wikipedia.org/wiki/File:Octane_Render_logo.png - Octane logo.

PcGamesHardware. n.d. Crysis 2 screenshot 5. [screenshot]. Retrieved from http://www.pcgameshardware.com/screenshots/original/2010/03/crysis-2-screenshots-gdc-2010__5_.jpg - Crysis 2 image.

Persage. 5 April 2007. Carnegie Mellon University College of Fine Arts building. [photo]. Retrieved from http://upload.wikimedia.org/wikipedia/commons/3/3a/CFA.JPG - Photo of the College of Fine Arts.

Unity Technologies. n. d. Unity logo. [logo]. Retrieved from http://upload.wikimedia.org/wikipedia/ru/a/a3/Unity_Logo.png - Unity logo.

MISCELLANEOUS

Adobe & Touch. n. d. Projects Mighty & Napoleon. Retrieved from http://xd.adobe.com/mighty/notify.html - Website of Adobe Mighty and Napoleon.
Autodesk. n. d. 3D visualization software brings design to life. Retrieved from http://www.autodesk.com/products/showcase/overview - Website of Autodesk Showcase.
Crydev. (2013, October 18). CRYENGINE® Free SDK (3.5.4) [Computer software]. Retrieved from http://www.crydev.net/dm_eds/download_detail.php?id=4 - CryEngine3 SDK.

Lumion. (2013). Lumion 3D [Computer software]. Retrieved from http://lumion3d.com/ - Lumion website; note how a new version is available, but the evaluation version of it is not yet.

NHTV University of Applied Sciences. (2013, November 11). ARAUNA2 demo. [Computer software]. Retrieved from http://ompf2.com/viewtopic.php?f=5&t=1887#p4233 - Arauna2 demo.

Schroeder, Scott A. (2011, January 1). Adopting Game Technology for Architectural Visualization. Purdue e-Pubs. Retrieved from http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1005&context=cgttheses - Possible precedent thesis.
Terms

3D - A digital representation of three-point perspective that approaches how the eyes interpret light. 3D is often mapped onto a planar screen, but newer technologies are using curved screens, or even a screen for each eye, to even more closely replicate vision.

ACM - Association for Computing Machinery.

Animation - In the context of this thesis, refers to a disembodied flythrough that architects are so fond of: an unnatural movement that lacks artistic merit and generally does not approach human experience. Animations can and do exist that give the viewer an experience they can understand, but technology can go beyond that.

AO - Ambient occlusion. A technique that replicates GI shading by determining where deep corners are and shading them accordingly. Combined with other effects, this is an efficient method to fake radiosity shading.

Architecture - The study of the memory of time and space. Encompasses the thought, theory, tools, design, construction, evaluation, and history of buildings.

Baking - Taking pre-computed data and turning it into a texture that can be applied in a material.

BIM - Building Information Modeling. A type of modeling, not necessarily visual, that digitally covers architectural systems.

Bump map - Either another name for a normal map, or a greyscale image that appears like the grain or small-scale detail of a material and is applied to a material in the scene to very efficiently fake said detail. A bump map is the simplest way to add complexity to a mesh on a small scale by only using a material.

CAD - Computer Aided Design. Digital precision tools used in product, aviation, automotive, and architectural design.

Compute capability - A ranking of CUDA technology, roughly the version number, that relates to how well the CUDA cores can process their tasks.

CUDA - A technology Nvidia developed for their graphics processors that uses parallel processing that developers can directly access for graphics purposes.

CMU - Carnegie Mellon University. This is my university and home to the School of Architecture, where I am doing my thesis.

Delay load - A term I came up with that describes the relative time it would take to use one program or pipeline compared to another. For the purposes of the software evaluations, I compared a regular pipeline of modeling in Rhinoceros and rendering in Vray to each set of alternative software.

DIY - Do It Yourself. A field of development not necessarily informed by professional practice, where users attempt to find their own ways to achieve a task. These attempts are not always successful, but the culture is one of sharing: the attempts that work are often documented and refined.

Drivers - Software middlemen between hardware on a computer and other software that aims to use that hardware.

Engine - A graphics software (that can also be embedded in other software) that is used to render virtual worlds. In video games, this is what makes the graphics work, though it is often also responsible for physics calculation, the menus and UI, and AI.

Environment map - A single snapshot of the six cardinal directions around an object, each with a FOV of 90°, composited to get a 360° view completely around the object. This is used to fake reflections. Doing this in real time is very taxing on performance.

FOV - Field of view. The geometric angle that is subsumed by the view cone of a viewer.

Fps - Frames per second, also frame rate. A measure of the number of frames a graphics processor can generate on a monitor every second to simulate fluid motion. Values between 30 and 60 are good goals for graphics-heavy software, as at lower values choppiness and stuttering become apparent, and higher values may produce incompatibility with the monitor hardware (usually not an issue with modern software). This can be measured as an average over the last few seconds or as a value every few seconds.
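As a small illustration of the "average over the last few seconds" measurement, here is a hedged Python sketch of a rolling frame-rate counter; the window length is an arbitrary choice.

```python
from collections import deque

class FpsCounter:
    """Rolling-average frame rate over the last `window` seconds."""
    def __init__(self, window=3.0):
        self.window = window
        self.timestamps = deque()

    def tick(self, now):
        # Call once per rendered frame with the current time in seconds.
        self.timestamps.append(now)
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window

fps = FpsCounter()
for frame in range(180):            # pretend we render 60 frames per second
    rate = fps.tick(frame / 60.0)
print(f"{rate:.0f} fps")            # approaches 60 once the window fills
```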
Gameplay - The actions a user performs in relation to the environment or other players within a game. People often fail to make the distinction between graphics and gameplay, as one or the other may define a video game more than the other. For this thesis, I am ignoring almost all aspects of gameplay except those involving interaction, walking, and other movement controls.

GI - Global Illumination. This refers to an even distribution of light in a scene such that more exposed surfaces get more light and less exposed surfaces get less light. This ends up making corners darker and smoothly shading other geometry. This is useful as a step in generating realistic shadows.

GPU - Graphics Processing Unit. The piece of hardware in a computer largely responsible for computing what is seen on a monitor. Over the years the GPU has grown in importance, not only for video games but for design number crunching as well.

GWAP - Games With A Purpose. Video games designed or heavily repurposed for the aim of training real jobs. These video games are high fidelity and take into account nearly all aspects of a real world scenario. They often focus less on graphics, however.

Mesh - A set of connected or related triangles in 3D space that combine to make a virtual shape or surface. The triangles are solid; however, their appearance can change when an image, or a texture, is applied to the mesh via predefined operations, a material, using coordinates assigned to each point of the triangles. Meshes can have billions of triangles.

Normals and normal map - A normal is the perpendicular direction from a plane; in meshes the planes are the triangles. A normal map is a purple and green image that replicates height data, which is projected along the normals of the mesh. This fake height data appears as ridges or other shapes, depending on the map, that receive lighting and shading but are only a visual effect on the geometry - it is clipped by the visible edges. A technique called parallax mapping or displacement mapping works around the clipping, appearing to make physical geometry on top of the original mesh.

NURBS - Non-Uniform Rational Basis Spline. A mathematical method for defining a curve that can also be used to define complex surfaces. Since the definition is mathematical, the surfaces are exact, though a given graphics program approximates the surface with a mesh for preview purposes. The mesh simply takes a small number of points on the surface and connects them, but the mesh is no longer the NURBS surface, it is just a very near approximation. Many methods exist to sample those points.
Path tracing - A method of ray tracing that determines where the photons that comprise a pixel most likely came from, taking into account all the light in a scene. Over enough samples, path tracing should generate an image indistinguishable from reality.

Photo-realistic - A digital simulation that is visually very near or indistinguishable from a photo taken by a real camera.

Pre-computed - Computed beforehand, usually by a process that takes many minutes or hours, but the results are reusable.

Raster - Very general term for taking a mathematically perfect form and simplifying it for viewing. Raster can refer to precomputing shadows in a scene and baking them into the materials in the scene instead of having the shadows be dynamic.

Ray tracing - A graphical technique where photons from lights are traced around a scene, taking into account all possible material properties, to determine how that scene is lit.

Real time - A digital refresh, or frame, rate at which the screen looks fluid, like a movie. Reality is in real time.

Render - A technique where a graphical algorithm is applied to a scene that generates how that scene would look, usually, in real life. It is also a general term for creating a high quality image, so many realistic paintings could be understood as renders.

Rhinoceros - NURBS modeling software developed by McNeel and Associates that is primarily used for nautical, product, and architectural design. It is fairly streamlined and includes hundreds of functions. Supports scripting and plug-ins.

Scene - A set of geometry, lights, materials, effects, and other features that combine to be used for rendering or interaction. Design software either imports files to combine into a scene, or saves the scene as a file which references other files.

Shader - A rapid computational process where visual effects like refraction, bumpy surfaces, and reflection are processed as materials that can be applied to geometry. Shaders are much cheaper than brute-force methods but rely on environment maps and fairly complex material definitions to replicate how these effects appear in real life. Depending on the software, they allow behavior that would otherwise be difficult to replicate; for example, a material can fade depending on how close the viewer is to it.
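The "over enough samples" part of the path tracing entry is just Monte Carlo averaging. A toy sketch of the idea follows, with a stand-in random function instead of real light transport; all names are hypothetical.

```python
import random

def sample_radiance():
    # Stand-in for tracing one random light path through a scene;
    # real path tracing would follow bounces and material properties.
    return 0.5 + random.uniform(-0.4, 0.4)

def pixel_value(num_samples):
    # The pixel converges to the true average as samples accumulate,
    # which is why path-traced images start grainy and clean up over time.
    return sum(sample_radiance() for _ in range(num_samples)) / num_samples

for n in (4, 64, 4096):
    print(n, "samples ->", round(pixel_value(n), 3))
```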
Shadow map - Precomputed shadows that are applied to all geometry. Shadow maps are stored as color image files, depending on the lights in the scene, that are then used (usually automatically) in the material shaders of the scene geometry; this is called baking. Just like other textures, they use object UV coordinates.

SIGGRAPH - Special Interest Group on Graphics. An annual conference held by ACM that reviews and publishes research on computer graphics.

Simplified lighting - The use of simple models of how light propagates in space. This ranges from a linear hotspot/falloff model, with 100% light in a small sphere of an arbitrary radius and 0% light in a larger sphere, and a linear gradient in between, to more complex models where certain shapes are achieved on surfaces that mimic how real lenses distribute light.

Ultimatype - Direct opposite of prototype: what the object or space will eventually be.

User interaction - The concept of a person using controls on a device to change how that device operates; often this feedback is displayed on a monitor or screen.

Vector - A mathematically defined curve. Vector graphics have infinite resolution, but cannot exist in real life, so they have to be turned into a raster image. Likewise, digital photons are also vectors, but they have to be turned into bright spots and dark spots on surfaces for a user to understand them.

Vertex lighting - An alternate method of generating shadows in a scene. Vertex lighting applies a color value to each vertex of a geometry that corresponds to the color of the shadow or the light at that spot. Geometry is sometimes subdivided for this purpose to have a more even distribution of points. The advantage this has over regular shadow maps is that it is not pixel-based and will always have smooth shadows, but at the potential cost of detail.

Vray - Rendering suite developed by ASGVIS. Features a fast dual-renderer pipeline that incorporates material definitions, lights, a sun and sky, caustics, and has support for crude animation. Exists as a plugin for Rhinoceros and other modeling programs.

WYSIWYG - What You See Is What You Get. A design concept where the visual development of something is exactly what that thing would look like once it is finished. Microsoft Word is a good example of a WYSIWYG program.