Games as Design Environments


Games as Design Environments
May 2011
M.Des.S Technology Thesis
Ben Regnier


Games as Design Environments By Benjamin L. Regnier Bachelor of Arts in Architecture, Rice University, 2004 Bachelor of Architecture, Rice University, 2006 Submitted in partial fulfillment of the requirements for the degree of Master in Design Studies Technology Concentration At the Harvard University Graduate School of Design May 2011 Copyright © 2011 by Benjamin L. Regnier The author hereby grants Harvard University permission to reproduce and distribute copies of this thesis document, in whole or in part for educational purposes.

Signature of the Author…………………………………………………………………………………………..................... Benjamin L. Regnier Harvard University Graduate School of Design

Certified by……………………………………………………………………………………………………………...................... Panagiotis Michalatos Lecturer in Architecture Thesis Advisor

Accepted by…………………………………………...……………………………………………………….

Martin Bechthold
Professor of Architectural Technology
Master in Design Studies, Co-Chair

Sanford Kwinter
Professor of Architectural Theory and Criticism
Master in Design Studies, Co-Chair


Table of Contents

1. Introduction
   1. Future Context
   2. Visualization: Offloading Cognition
   3. Gaming Our Way to a Solution
   4. Methodology
2. Background
   1. Data Visualization
   2. Game Interfaces: Implicit Rules, Heuristics and “Loose Optimization”
3. Case Studies
   1. History Flow – Wikipedia
   2. FoldIt
   3. Utile
   4. Autodesk R&D: Galileo, Nucleus, Vasari
   5. Sawapan / Adams Kara Taylor



4. Research Projects
   1. Treemapping Program
   2. Springlinkage
   3. Tensor Patterns
5. Image Credits
6. Bibliography
7. Appendices
   A. Streamline Algorithm
   B. Region Generation



1. Introduction

1.1 Future Context

In the last decade there has been explosive growth in the use of parametric and computational methods in the AEC industry. These methods range from simple 3D modeling for visualization and documentation to complex, multi-user parametric models that involve multiple parties and multiple phases of the design and construction process. It is likely that in the next decade parametric and analytical strategies like building information modeling (BIM), computer energy modeling, finite element frame analysis, and automated quantity takeoffs will become nearly ubiquitous in any medium- to large-scale project. Furthermore, current BIM goals such as post-occupancy analysis and digital permit and construction packages will likely be in widespread use.

Parallel with the increasingly sophisticated nature of architectural digital models is a growing number of designers using digital modeling techniques early in the design process. Research conducted by Panagiotis Parthenios at the GSD in 2005 shows that the use of sophisticated digital techniques in schematic design is strongly correlated with age. Younger



architects are using a wider array of digital tools for schematic design, and are using them more frequently. (Parthenios 2005)

Fig. 1.1: Survey by Panagiotis Parthenios on preferred schematic design tools, 2005 GSD Doctoral Thesis

It is surprising, therefore, to realize that the human-digital interface, the very environment that will define the experience of future design, has changed little in the last thirty years. Just as computer-aided drafting borrowed many of its conventions and visual cues from the world of hand drafting, interfaces for complex parametric and BIM software packages are inheriting interface and visualization methods directly from CAD programs. As a result of this inheritance, digital tools take a literal, geometric approach to design that serves exploratory schematic design poorly. Current parametric design environments, used as schematic design tools, are overburdened, unintuitive, isolating, and ultimately counterproductive to the open-ended, collaborative, exploratory nature of design. While user interfaces in web, device, and game environments have seen continuous innovation over the last few decades, designers have been left without tools to query and respond to the enormous amount of native and contextual data inherent in parametric models, and without easy methods to work collaboratively and simultaneously; as a result, the vast majority of this information is ignored or oversimplified. (Sutherland 1963)

Fig. 1.2: These two views of the same project, while very similar, indicate completely different environmental data (wind and solar incidence).

Parametric models differ from drafted drawings in that they contain an immense amount of hidden information. A CAD interface could effectively ape hand drafting because, in both systems, the primary information in the drawing is the geometry itself. Additional data, such as layers, could be communicated through simple color-coding. In a parametric model, however, much of the data (and the work itself) involves relationships between the objects themselves, or parameters that have no geometric context (cost, strength, ownership, phase). In addition, the amount of information available to designers and engineers is greater than ever before, due to mapped contextual data (GIS) and the results of computational analysis. Without an easy way to visualize, query, and respond to this information, the vast majority of it is ignored or oversimplified. As a result it is very difficult for designers to successfully examine the immense quantitative data that is available during schematic design, relegating it to the optimization or modification of an existing idea. Digital design processes can also be anti-collaborative, leading to poor communication and duplication of work.
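This distinction can be illustrated with a minimal sketch of a parametric model as a data structure rather than a drawing: a graph of elements carrying non-geometric parameters (cost, phase) and explicit relationships. All names and values here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    geometry: tuple                              # stand-in for real geometry
    params: dict = field(default_factory=dict)   # cost, phase, ownership...

@dataclass
class Model:
    elements: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (parent, child, rule)

    def add(self, e):
        self.elements[e.name] = e

    def relate(self, parent, child, rule):
        self.relations.append((parent, child, rule))

    def query(self, key):
        # a non-geometric query: aggregate one parameter across the model
        return sum(e.params.get(key, 0) for e in self.elements.values())

m = Model()
m.add(Element("wall_A", ((0, 0), (10, 0)), {"cost": 1200, "phase": 1}))
m.add(Element("wall_B", ((10, 0), (10, 8)), {"cost": 950, "phase": 2}))
m.relate("wall_A", "wall_B", "endpoint_coincident")
print(m.query("cost"))  # a project total with no geometric representation
```

Neither the relationship nor the cost total appears anywhere in the geometry, which is precisely the information a drafting-derived interface leaves invisible.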


When designers do use computational methods early in a design, the process is so all-consuming that the computational process quickly becomes the meaning of the design itself (commonly known as “building the diagram”). If computational methods are to become ubiquitous, their contribution to the meaning of design will necessarily become part of the background process. Computation in design has a future as a context, not as a style.

1.2 Visualization: Offloading Cognition

Re-imagining architectural interfaces cannot be done without examining the base purpose of the visualization itself. A significant amount of time is being spent on new input devices and GUIs, without parallel (or ideally, prior) research going into the content that is seen on the screen. As parametric data is multidimensional, and at least partially non-geometric, the idea of the project as a drawing or model must be replaced with a more amorphous idea of data organization (database, graph, etc.), and thus what is on the screen becomes a data visualization. The following definition of data visualization comes from an article by visualization researcher Tamara Munzner (emphasis mine): "Visualization is used when the goal is to augment human capabilities in situations where the problem is not sufficiently well defined for a computer to handle algorithmically... [by allowing] people to offload cognition to the perceptual system, using carefully designed images as a form of external memory." (Shirley 2009) This definition recasts the role of the computer as a form of augmentation, not as a



repository or tool. Visualization allows users to accomplish tasks that would be impossible through purely manual or purely computational methods, using a process of synthesis and feedback. User interface designers have referred to this relationship as a "cognitive coprocessor." (Shirley 2009) If design is examined as a process of problem solving, then the problem domain is almost always going to be ill-defined, unpredictable, and discontinuous. So, even from a computational mindset, much of the work in solving this "problem" is best done non-algorithmically, in other words, intuitively. The computer-designer relationship implicit in the visualization process is ideal for combining intuition with algorithmic optimization, for reconciling the quantitative and the qualitative. A well-designed visualization and interaction process can recast computation in design from a rigid, deterministic role to one that is freeform and open-ended, providing solutions that would be impossible to reach manually or automatically.

1.3 Gaming Our Way to a Solution

A visualization environment by definition provides guidance toward a goal, however unfocused or poorly defined that goal may be. The exploration of possible solutions or paths to that goal, with accurate and well-defined feedback to aid the exploration, is more similar to a puzzle or game environment than to drafting. There are many competing definitions of games, but most can be distilled to a "form of play with goals and structure." The difference between a design environment and a game environment is that the goals (and perhaps the structure) in design are subjective, open, and loosely defined. This is a difference of degree, not of type; as will be shown later, games can be as subjective and open as the design


process. The third component of the definition, play, is vital, for it provides not only the impetus for further exploration but also the implied looseness of organization that is critical to a design process. Re-casting the design interface as a game also helps to leverage the skills and mindset that already exist within our culture. In a presentation at the 2010 TED conference, Jane McGonigal pointed out that, according to recent research at Carnegie Mellon, "the average young person today in a country with a strong gamer culture will have spent 10,000 hours playing online games," incidentally the same amount of time they will have spent in school between fifth grade and high school graduation. (McGonigal 2010) This "parallel track of education" is attuning the next generation to pattern-finding within a certain interface language, a language that over time has become increasingly sophisticated. As the requirements of game and design interfaces are superficially similar, it makes sense to leverage this learned ability to make design interfaces more seamless and natural.

1.4 Methodology

This paper will explore the possibilities inherent in data visualization and game-like interfaces through a process of examination and replication. After a brief introduction to the background and relevant concepts in visualization and interface design, a series of case studies will examine how these concepts are already being applied to design problems in fields such as genetics, structural engineering, and planning and development. These case studies will also be used to enumerate relevant visualization and



interaction techniques that are rarely seen in architectural software. After an analysis of the case studies, this paper will present three novel software projects that investigate visualization, interface, and the synthesis of the two as possibilities for computational design environments.



2. Background

2.1 Data Visualization

Data visualization is an inherently multidisciplinary field that combines design with a solid basis in the science of perceptual cognition. The goal of visualization research is to improve the quality and bandwidth of the connections between a user and a computational system by leveraging our understanding of human cognition. As the cognitive capacity devoted to the visual system dwarfs that of the other sensory systems, the majority of perceptual research is based upon sight. Visualization research commonly touches on cognitive topics such as attention, memory, and consciousness, as well as the biomechanics of the visual system such as edge, color, motion, and depth perception. (Ware 2008) This research has developed a detailed understanding of the advantages and limitations of different methods of showing information. An understanding of the relative precision of different channels of information within the visual system (position, orientation, color, etc.) has allowed for the objective analysis of the effectiveness of a visualization method. Other research on visual continuity and memory has given insight into interface and presentation designs that allow a user or audience member to understand the relationship between a


detailed subset of data and the whole. In addition to understanding perception, visualization research and practice has developed a methodology for the production of new visualization environments that are attuned to the specific needs and abilities of a user group. In practice, the majority of time in the data visualization process is spent investigating the data and the requirements of the user, to determine the methods best suited for the tool being created. Of primary consideration is the sophistication of the user, and whether the tool will be used to explore or to present data. More powerful and refined methods for visualization often come at the cost of immediate comprehension; thus an entirely different visualization method may be used when presenting to a group of people unacquainted with the data, versus a team of users exploring the data for patterns and structures. (Shneiderman 2009, Buxton 2007)

Data visualization is a particularly useful paradigm for schematic design due to the definition given above: the process of offloading cognition. Visualization researchers consider sketching, particularly in a design process, to be an innate act of fluid visualization in itself. In a study of the creative design process, Masaki Suwa of the Hitachi Research Laboratory in Japan and Barbara Tversky of Stanford University asked architecture students to sketch out designs for an art museum. The students were videotaped while sketching and later were asked to watch the tapes and comment on what they had been thinking. This process revealed that designers often start with a very loose sketch, and then interpret what they have already drawn. The process of sketching was in itself constructive, not merely a recording of a complete thought process. Lines in a sketch could change meaning over time as the design developed, as the feedback loop between the constructive visualization and the designer completed itself


repeatedly. Suwa and Tversky called this process “constructive perception” and suggested that it is fundamental to the design process. (Ware 2008)

2.2 Game Interfaces: Implicit Rules, Heuristics and “Loose Optimization”

There are many different, sometimes conflicting, definitions of what constitutes a game. The term is so fluid that Wittgenstein used it in his Philosophical Investigations as a central example of the limitations of communication: “What does it mean to know what a game is... Isn’t my knowledge, my concept of a game, completely expressed in the explanations that I could give?” This choice of an ambiguous descriptor is intentional, as the multivalent, often conflicting demands of the design process require a flexible and amorphous model as a basis for describing it. To describe a design environment as game-like is also suggestive of the ways it differs from the current conception of a CAD/CAM environment: the existence of rules, of conflict, and of playfulness. This last quality brings games closer to the process of sketching; indeed, there are many games based upon the simple act of drawing. Well-designed interactive visualizations universally exhibit a few common traits. They show detail in data while maintaining knowledge of the relationship to the whole; they convey multidimensional information clearly and flexibly; and they have a relatively fluid method of interaction that allows the user to remember the steps taken to reach a certain visual point. The end result of these traits is an environment where one is conscious of the form of the data being explored, and of one's relationship to this form. When the interaction is sophisticated



enough, the process of exploring the data and finding patterns and relationships becomes game-like, a playful and enjoyable process. (Steele et al. 2010) The human perceptual system is so good at finding patterns and relationships in visual information that at times the reverse can happen as well: games can be analyzed and optimized as data environments in themselves. One particularly famous example is the twenty-year process by which a group of competitive Pac-Man players reverse-engineered the game's artificial intelligence algorithms and design simply through repeated play, leading to the first “perfect score” in 1999. Their well-documented deconstruction of the logic within the game, proven accurate by a later release of the source code, is a testament to how a compelling interaction environment with rewards and goals, and a community of users, can lead to a remarkable exploration of the limitations and possibilities inherent in a provided system. (Pittman)

Fig. 2.1: Diagrams produced by game players to explain the AI algorithms in Pac-Man, primarily determined through repeated game play.



3. Case Studies

3.1 History Flow – Wikipedia

History Flow is a tool written by Fernanda Viégas and Martin Wattenberg at IBM to explore the way that groups of collaborative authors edit documents, in this case the individual “wiki” pages of Wikipedia. The impetus behind the project was to gain insight into the nature of this collaboration: how long content stayed in a document, the number of contributors, and how the community dealt with vandalism on an open forum. There was also a desire to reveal the “agitated reality of constant communal editing” that underlies a seemingly static Wikipedia web page. The tool represents a document as a bar, with bands of different colors representing the contributions of different editors, in the same order and scale as the actual text. A series of these bars with connecting volumes shows how content is added, removed, or relocated between revisions. The user interface further allows the highlighting of a specific user's contributions, or a mode that codes contributions by their age, showing which parts of the document have existed the longest. A detail view also allows for in-depth browsing of content change over time. The history flow method not only revealed immediately the points at which there had


been vandalism or “edit wars,” but also showed that this vandalism was often corrected so quickly that it had little effect upon the entire timeline of a page. It also revealed the basic instability of the wiki format, given that pages rarely converged on a stable state or saw the time between edits increase. Initial contributions were also less likely to be removed, effectively setting the general tone for an article at the outset.

Fig. 3.1: History Flow view of the Wikipedia page on Evolution

In addition to these new hypotheses about Wikipedia (later confirmed by statistical analysis), the process of browsing through edit histories gives an important implicit understanding of how a wiki works to the user. The authors wrote that “without the aid of history flow, it would have been a daunting task to piece together the collaboration patterns



[we found].” History Flow is thus an excellent example of the advantages of using visualization to give spatial encoding to data that is inherently non-spatial (in this case, time and social interaction). Given the collaborative, cyclic re-editing of architectural designs, some of the methods described above might be useful in a design environment to better understand the patterns of activity in a team and how designs are being produced, as well as to understand how the branching options of a design project differ. Architects are not as accustomed to visualizing the topology of the collaborative creation process as are programmers and other collaborative authors, and would be well served to explore this area of visualization. (Viégas et al. 2004)
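The bar encoding at the heart of the tool is simple to reproduce. As an illustrative sketch (not the History Flow implementation), each revision can be reduced to an ordered list of (editor, character count) runs, and each run mapped to a segment whose width is proportional to its length:

```python
def revision_segments(runs, total_width=100.0):
    """Map ordered (editor, char_count) runs to colored-bar segments.

    Returns (editor, x_start, x_end) tuples scaled to total_width,
    preserving the order of the text itself.
    """
    total = sum(n for _, n in runs)
    segments, x = [], 0.0
    for editor, n in runs:
        w = total_width * n / total
        segments.append((editor, x, x + w))
        x += w
    return segments

# one revision: alice wrote 300 characters, then bob appended 100
segs = revision_segments([("alice", 300), ("bob", 100)])
# alice's segment spans the first 75 units of the bar, bob's the last 25
```

Stacking one such bar per revision, and connecting segments attributed to the same editor across adjacent bars, yields the flow diagram itself.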

3.2 FoldIt

FoldIt was developed in 2008 at the University of Washington in a collaboration between researchers at the School of Computer Science & Engineering and the School of Biochemistry. It is a game created to help solve one of the most complex problems facing the life sciences: predicting the folded structure of proteins. The biomechanics of protein folding are still not completely understood, and the many degrees of freedom involved make algorithmic solutions difficult to devise and optimize. Even the best algorithms routinely got “stuck” in local optima that could be easily discerned and corrected by even a naïve viewer. Given these issues, a team at UW attempted to harness the native visual and spatial processing abilities of humans by designing a game in which users competed to fold increasingly complex proteins as efficiently as possible. The goal of the project was twofold: first, to determine how effective this approach could be versus automated solutions, and also


to use any novel solutions to further develop protein models and folding algorithms. The experiment was immediately popular, with hundreds of thousands of downloads, and within a year had produced several winning entries at the Critical Assessment of Techniques for Protein Structure Prediction (CASP), the annual “World Series” of protein folding conferences. Though this approach was hailed as a triumph of “crowdsourcing,” one of the researchers, David Baker, had a slightly different reading: “When I said early on that I hoped Foldit would help me find protein-folding prodigies, it was hopeful speculation. It's fantastic to see it come true.” This project differed from previous distributed strategies that members of the development team had used in the past, such as Rosetta@home, a screen saver that used computer downtime to work on protein folding problems. FoldIt instead used a combination of rewarding interaction design and a competitive social environment to find individuals talented in precisely the visual-spatial tasks necessary for the problem – essentially a talent search. In addition, the game design itself made the vast majority of the scientific background data unnecessary; correct moves “just [look] right,” as one thirteen-year-old “prodigy” explains. FoldIt uses a number of methods not only to give a clear visual explanation of the problem, but also to guide the user to good solutions, enable fluid interaction, and allow immediate reference to important data and communication from teammates and competitors. Some of these are broken down below:

Problem Visualization: the protein structure is shown clearly, with no unnecessary geometry to confuse the problem. Some color coding is used to show the state of the model, but the most useful feedback comes in the form of animated glyphs that show areas that are


folded incorrectly. Ghosting is used to suggest moves that might improve the score. There is also auditory feedback that supplements the visual cues. Performance is indicated clearly by a simple score, and there is also a visual history showing changes in the user's score over time.

Fig. 3.2: FoldIt

Interaction: FoldIt provides sophisticated but intuitive tools for modifying the protein structure. First, an undo function eliminates the user's fear of “breaking” the model by making a change. There is also a tool to selectively freeze certain chains to limit the degrees of freedom in the model. Most importantly, there are limited stochastic search tools that can “wiggle” certain parts of the model to automate a search for better solutions.

Communication: the game uses multiple methods to foster a sense of community within its user base. In addition to wiki-based documentation and a user forum, the game itself has a leader board that is updated in real time, and a chat function that enables team communication while staying within the UI of the game itself.
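The “wiggle” tools can be understood as stochastic local search. The sketch below is conceptual (not FoldIt's actual algorithm): it perturbs the unfrozen coordinates of a toy “conformation” and keeps only changes that lower a stand-in energy function.

```python
import random

def energy(x):
    return sum(xi ** 2 for xi in x)   # stand-in for a real protein score

def wiggle(x, frozen, steps=2000, scale=0.1, seed=0):
    """Greedy stochastic search: nudge one free coordinate at a time."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        i = rng.randrange(len(x))
        if i in frozen:               # frozen chains are never moved
            continue
        trial = x[:]
        trial[i] += rng.uniform(-scale, scale)
        if energy(trial) < energy(x): # keep only improving moves
            x = trial
    return x

best = wiggle([1.0, -2.0, 0.5], frozen={0})
# coordinate 0 stays fixed; the free coordinates drift toward the minimum
```

Like the real tool, such a search refines a configuration locally; escaping the larger local optima is precisely what the game leaves to the human player.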



The combination of the methods enumerated above produces a game environment that fosters a supportive, communal approach to problem solving while maintaining a competitive environment. The tool itself is complicit in creating the innovative atmosphere that has produced novel solutions to complex problems. (Dartnell 2008, Bohannon 2009, UW Center for Game Science)

3.3 Utile

The work of the Boston-based firm Utile shows a sensitivity to interaction design and the incorporation of heuristics into design strategies, in the service of seemingly everyday issues of profit, space planning, zoning, and sustainability, that warrants an extended look. “Heuristics” in this case means both architectural rules of thumb (such as 30-foot column grids) and more sophisticated rules drawn from related fields, such as development pro-formas or zoning codes. Working in concert with agencies or developers, Utile develops feedback mechanisms that “score” designs based upon a specific metric of importance to the client: resale value, LEED score, profit, flexibility, etc. These heuristic mechanisms, “baked in” to BIM models to provide immediate feedback on design changes, allow for a quick understanding of the limits of certain strategies at an early stage, narrowing the design search space at the outset to forms that will ultimately meet the base requirements of the client. This initial limiting ultimately frees the architect to make decisions downstream, as the client is less likely to demand changes on a developed design due to some perceived lack of performance. Enabling a project to communicate its performance in the language of the client is also helpful in getting the client engaged with the project, and enables quick collective decision


making.
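A pro-forma “score” of this kind reduces, in its simplest form, to a few lines of arithmetic attached to the massing model. The rates and costs below are invented placeholders, not Utile's actual figures:

```python
def proforma_profit(floors, plate_area, efficiency=0.82,
                    rent_per_sf=45.0, cost_per_sf=300.0, cap_rate=0.06):
    """Toy development pro-forma: capitalized rental value minus cost."""
    gross = floors * plate_area
    leasable = gross * efficiency
    value = leasable * rent_per_sf / cap_rate   # income capitalized to value
    cost = gross * cost_per_sf
    return value - cost

# immediate feedback as the massing changes:
for floors in (5, 10, 15):
    print(floors, round(proforma_profit(floors, plate_area=12000)))
```

Wiring even this crude a metric into a live model gives the designer the kind of continuous, client-legible feedback described above.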

Fig. 3.3: A quick schematic massing model and corresponding “information dashboard” showing property values derived through an integrated pro-forma.

The office also uses graphic design as a direct communication tool with current and future clients, producing information dashboards, infographics, and richly detailed maps to supplement standard architectural presentation. These documents act partially as persuasive illustrations, but also as visual documentation of the algorithmic work hidden in the office's digital models, explaining the meaning behind the more abstract data their methods provide. In recognition of this work, Utile has recently been hired directly by city agencies such as the Boston Redevelopment Authority and Massport to redesign or reinterpret the zoning code itself. Utile's response to these projects is to define the code in such a way that market pressures reinforce the desired interpretation of the code, for example by determining the intended use of a parcel from its geometry and scale, and then using “backwards” development pro-formas to calculate the maximum and minimum building heights necessary to make a profit on that building type. This effectively stimulates development at no cost by removing barriers such as


required variances, while still providing enough guidance to growth (through the use of form-based codes and use restrictions) to ensure a healthy urban fabric.
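Run “backwards,” the same pro-forma arithmetic yields height thresholds instead of scores: fix the parcel and solve for the smallest floor count at which the building turns a profit. Every number here is an illustrative assumption:

```python
def profit(floors, plate_area, land_cost,
           rent=45.0, cost_per_sf=300.0, efficiency=0.82, cap_rate=0.06):
    gross = floors * plate_area
    value = gross * efficiency * rent / cap_rate
    return value - gross * cost_per_sf - land_cost

def min_profitable_floors(plate_area, land_cost, max_floors=60):
    """Invert the pro-forma: the fixed land cost sets a height floor."""
    for n in range(1, max_floors + 1):
        if profit(n, plate_area, land_cost) > 0:
            return n
    return None   # the parcel cannot support this building type

print(min_profitable_floors(plate_area=10000, land_cost=8_000_000))
```

The minimum height a zoning envelope should permit is then whatever the market itself requires to build profitably on that parcel.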

Fig. 3.4: FAR and floor plate sizes in an urban plan derived through knowledge of development profit limitations.



Tim Love, a principal at Utile, has also experimented with using heuristics and feedback as didactic tools in a Yale design studio. He provided students with an “Urbanism Starter Kit” at the first meeting: a series of rigidly standardized building typologies with a (somewhat tongue-in-cheek) stripped, high-modernist appearance. The students' first task was to arrange these given structures like building blocks on a given site, with regulations on the allowed proportions of each type, as well as real-world code restrictions such as occupancy limits and parking requirements. The starter kit came in the form of BIM components, which allowed for the instantaneous querying of the relevant properties of the design. Only when these restricted designs had reached a sufficient level of performance were the students allowed to modify the structures themselves.

Fig. 3.5: Urbanism Starter Kit and Studio Limitations

Despite severely limiting the freedom of the designs at the most conceptual, schematic phase, the final output of the studio displayed a range and variety that belied its origin. In addition, this limited beginning ensured that the projects maintained a certain level of performance as an urban agglomeration, allowing for direct comparison of different solutions as well as freeing the discussion of the projects from purely practical concerns. Finally, the act of struggling against the given constraints was in itself a valuable experience for the students, as it introduced a range of strategies to overcome the limitations of the given structures that, while common in practice, are rarely explored in academia.
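The kind of instantaneous querying the BIM components enabled can be sketched as a constraint check over placed instances. The rules and numbers below are invented for illustration, not the actual starter kit:

```python
placed = [
    {"type": "housing_bar", "units": 40, "parking": 30},
    {"type": "housing_bar", "units": 40, "parking": 30},
    {"type": "office_slab", "units": 0,  "parking": 80},
]

def check(design, max_office_share=0.4, parking_per_unit=1.0):
    """Tally component instances against typology-mix and parking rules."""
    office = sum(1 for c in design if c["type"] == "office_slab")
    units = sum(c["units"] for c in design)
    parking = sum(c["parking"] for c in design)
    return {
        "office_share_ok": office / len(design) <= max_office_share,
        "parking_ok": parking >= units * parking_per_unit,
    }

print(check(placed))   # instant feedback as blocks are added or swapped
```

Because the components carry their own properties, every rearrangement of the blocks re-scores the scheme without any additional modeling effort.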

3.4 Autodesk R&D: Galileo, Nucleus, Vasari

Recently Autodesk, the dominant design software company in the US with nearly $2 billion in revenue in 2010, released demo software revealing that its R&D teams are investigating lightweight schematic design software with powerful analysis and quick feedback. Project Galileo is a conceptual design tool for infrastructure environments that incorporates an immersive three-dimensional design visualization while maintaining a shallow learning curve. Galileo rapidly creates a contextual city model from GIS, CAD, BIM, and aerial photography files, and presents the data using a visual language immediately comprehensible to anyone familiar with Google Earth. The software then allows the rapid creation and comparison of multiple siting and infrastructure scenarios. The geometry creation tools in the program involve multiple levels of heuristic support, for example berming earth at the proper slope around a ramp, or incorporating proper turning radii for streets and railways. Finally, the



view navigation tools are sophisticated and intuitive, with “walking” and “flying” options that limit the camera's degrees of freedom, making it easier to turn control over to a client, community member, or teammate unfamiliar with the software. Galileo favors collaboration and speed over precision or feature flexibility, making it an effective infrastructural sketching tool for collaborative design.

Fig. 3.6: Autodesk Project Vasari

Project Nucleus integrates Maya's Nucleus physics engine into Autodesk's flagship BIM software, Revit. This enables form-finding experimentation directly within the BIM software, using methods such as gravity, wind, constraints, and collisions. The plugin not only represents a direct method of visualizing unseen data (material characteristics


and physics), but also uses motion to describe the data, a technique that until now was missing from Autodesk's architectural design software. The incorporation of a physics engine also marks a fundamental shift in the modeling methodology within Revit, as it allows for the examination of emergent or unexpected conditions resulting from virtual “experiments” with physical forces. This represents a certain relinquishing of control within the software that begins to push the interface in game-like directions. Nucleus has been incorporated into the latest version of Vasari, a conceptual design environment on the Revit platform that represents Autodesk's farthest reach toward data visualization and game interfaces in architectural design software. Vasari is based upon a limited version of Revit that omits many of the tools less important in conceptual design. This smaller package is provided as a simple binary executable, without requiring a complex installation procedure, making the software inherently portable and sharable. In addition to the Nucleus engine, Vasari directly incorporates energy and carbon analysis tools, allowing for immediate feedback (with accompanying data visualization) on insolation, daylighting, wind roses, and thermal performance. The tool allows for immediate performance comparison of multiple options, and saves in formats directly compatible with full-featured versions of Revit.

With the above tools (which represent almost all of their released R&D tools for architectural software in the last year), Autodesk is showing a commitment to exploring software that is lightweight, responsive, and collaborative. Most of the new features involve incorporating or visualizing non-geometric data, or managing and exploring multiple options in a collaborative environment. There is also a clear attempt to make the software simpler and more playful, even at the cost of flexibility or additional features. The above


characteristics, which have more in common with web applications and game interfaces than CAD software, represent (at least for Autodesk) the future of digital design. (Autodesk Labs)

3.5: Sawapan / Adams Kara Taylor

Adams Kara Taylor (now known as AKT II) specializes in engineering consultancy on projects that feature complex, nonstandard geometries and structural requirements. Within the firm, the parametric applied research team (p.art) investigates advanced computational strategies that bridge between form and performance. Sawapan, consisting of Panagiotis Michalatos and Sawako Kaijima, works as part of this team, as well as operating independently to develop software and methods for form-finding and design exploration.

Fig. 3.7: Structure densification studies for the Land Securities Bridge

The work within both p.art and Sawapan explores a subset of computational strategies that are based upon implicit structural and geometric data within a project. For instance, one project involving the Land Securities Bridge designed by Future Systems looked at methods for optimizing a given geometry through an in-depth analysis of the forces and moments produced by the structural analysis software. Whereas the usual engineering approach would have been to determine the proper reinforcement or sizing for given members, p.art looked at iteratively modifying the density of the pattern itself. For other projects involving complex geometry in exterior panelized systems, p.art developed custom software to allow the browsing of geometric properties of panels within the surface, helping to make the consequences of the geometric modeling and discretization strategy more understandable.

Fig. 3.8: topostruct



Sawapan has taken the incorporation of implicit rules a step farther, developing software to allow the real-time exploration of topics such as topology optimization, complex frame structures, and spring-force based graph layouts. These tools incorporate rich visual (and occasionally auditory) feedback, using strategies such as glyphs and motion mentioned above, as well as an interactive immediacy that promotes intuitive exploration. Their “topostruct” topology optimization tool, for instance, uses clear visual indicators to describe the initial setup of fixed points and forces, and derived properties such as deflection and stress are indicated clearly, through motion and streamlines, in a way that makes intuitive sense even to designers without an engineering background. These projects suggest a further subset of computational design that focuses on quantitative analysis as the basis for certain design decisions. The methods always stop short of being holistic or totalizing, however; quantitative methods are used to derive solutions for focused strategies, in the service of a larger design intent. The danger of most computational strategies lies in the possibility of a naïve or simplistic application of the form-finding result. The output of even a sophisticated form-finding tool is unlikely to take into account the design of connections, or the limitations of construction at a particular scale with a particular material. The work described above instead divorces the quantitative method from a specific geometric suggestion: a process of de-spatialization that attempts to convey the invisible forces and potentials in a material organization without devolving into a method for automating formal exploration. (Sakamoto et al. 2007)



4. Research Projects

The above examples encompass a wide range of topics, from pure data visualization to client communication, structural optimization, and digital design workflows. Together they represent a subset of computational design focused on flexible methods for integrating quantitative information into a design process as early as possible. These methods combine the fundamentals of data visualization (clear visual feedback) with immediate, playful interaction to produce environments where designers can creatively search the boundaries of a quantitative design space without being relegated to pure optimization. Using games as a conceptual framework suggests new ways these methods might be recombined and used within a practice to guide a project from the outset, helping to focus decisions without committing a design to a purely quantitative agenda. The following projects attempt to explore and stretch the possibilities of game-like design environments, extrapolating from known methods to find new opportunities. The first, treemapping, explores the advantages and limitations of data visualization methods. The second, a spring algorithm game, looks at ways of making interaction and feedback as immediate as possible, while the third project, a tensor field design tool, researches the



possibilities of collaborative, interactive environments in a graphic design process.

4.1 Treemapping Architectural Program Documents

This project started as an attempt to use graph visualization to show the implicit room organizations of existing building plans. This idea has been explored recently by groups such as Aedas R&D and a team at the University of Sydney, but the history of graph representation in architecture stretches back at least 50 years, with the initial apotheosis occurring in the mid

Fig. 4.1: Adjacency graph diagram by Philip Steadman

1970s at Cambridge University, in research labs aimed primarily at optimizing architecture for walking distances. One early example is the work of Philip Steadman at Cambridge in 1973 on “Graph-Theoretic Representation of Architectural Arrangement.” This work attempted to make a “library” of all possible topological arrangements in a plan diagram for a certain number of rooms. Also


of note in the same program in 1972 was the work of Philip Tabor and Tom Willoughby on walk distance optimization, using a combination of graph theory and a traveling salesman algorithm. While the work done by the Cambridge researchers was rigorous and interesting, walking distance ultimately faded as an area of research, partially because its optimal organizations often flew in the face of other architectural requirements (such as constructability), but also because the issue of adjacency was more easily solved through telephone and intercom systems. Sean Keller's 2005 Harvard doctoral thesis “Systems Aesthetics: Architectural Theory at the University of Cambridge, 1960-75” explores this history in depth. Compounding this historical warning flag was the fact that producing graph layouts of existing structures is actually a lossy, reductionist way of showing floor plan information – one early reviewer called the idea "idempotent." There are also inherent flaws in the method, such as the question of how to represent circulation space, particularly looped hallways. The topology of a building's room layout is invariably more complicated than a simple graph layout can really indicate, and usually some important detail is lost in the conversion. (Keller 2005, 2006)

I looked instead more closely at visualizing nonspatial data – in particular, the labyrinthine, detailed program requirement documents that are often handed to architects as an a priori requirement at the start of a project. On review, it quickly became clear that adjacency doesn't really play a strong part in program requirements, and where it is mentioned, relationships are unclear (adjacency can be defined by nearness, visibility, connectedness, etc.). There is also a fundamental issue of how to indicate circulation spaces and atria, which are rarely indicated in program documents and make graph layout particularly complicated, as they have a tendency to “collapse” tree structures if shown as a node. I chose


instead to take advantage of the basic structure of the document – a series of nested groups – and to visualize this ownership hierarchy using a treemapping method instead of some imagined or invented connectivity. Doing this also avoided a major pitfall of program visualization, which is that bubble diagrams or adjacency graphs can be too suggestive of a final form, leading to the architect "building the diagram." This method takes maximum advantage of the spatial encoding of data while remaining as neutral as possible toward the idea of a building's form.

Fig. 4.2: Treemapping current events at newsmap.jp



Treemapping is a method for displaying hierarchical data using nested rectangles. As such, it is particularly well suited for data sets that have a cumulative scalar component, such as size or area. It is related to other area-based visualizations such as Marimekko diagrams and mosaic plots, with the added feature of a recursive construction that shows an ownership hierarchy. Invented by Ben Shneiderman at the University of Maryland in the early 1990s, it has been further developed over the last 20 years with increasingly sophisticated algorithms for dividing the recursive spaces, allowing for lower (more square) aspect ratios, which makes size comparison within the map easier. Treemaps have previously been used in computer science to explore file structures, and in journalism and finance to explore the relative size of companies or the relative importance of current events.

My project started with taking an existing program requirements document – in this case for a public middle school – and rationalizing the room and space numbering system to better represent a tree structure (for instance, the room address 1.23.3 was altered to read 1.2.3.3 to be clearer to the parsing software). The cleaned data was then converted

Fig 4.3: Opening view of treemap



into a tab-separated value (.tsv) file. The software to produce the treemap was written in the Processing language, a high-level visual language built on top of Java. The treemapping itself was handled by a library provided by Ben Fry, adapted from Martin Wattenberg and Ben Bederson's Treemap Java Algorithms collection, released under a Mozilla Public License. I chose a “squarified” subdivision algorithm, developed by Martin Wattenberg in 1999, to produce the treemap. (Fry 2008, Fry web, Shneiderman web)
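The data-preparation step described above – dotted room addresses in a .tsv file, rolled up into a tree whose groups carry cumulative areas – can be sketched as follows. This is an illustrative reconstruction rather than the thesis code (which was written in Processing): the column layout (address, name, area) and all function names are assumptions for the example.

```python
def parse_program(tsv_text):
    """Parse a program document exported as tab-separated values into
    a tree keyed by dotted addresses (e.g. '1.2.3'). Parent rows are
    assumed to appear before their children, as in the cleaned file."""
    tree = {}
    for line in tsv_text.strip().splitlines():
        address, name, area = line.split('\t')
        node = {'name': name, 'area': float(area), 'children': {}}
        parts = address.split('.')
        level = tree
        for key in parts[:-1]:
            level = level[key]['children']  # descend to the parent group
        level[parts[-1]] = node
    return tree

def total_area(node):
    """Cumulative scalar for the treemap: a group's area is its own
    plus the sum of its descendants' areas."""
    return node['area'] + sum(total_area(c) for c in node['children'].values())
```

The cumulative totals returned by `total_area` are what the treemap layout algorithm divides the screen rectangle by at each level of recursion.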

The file is initially displayed at the first level of detail, with additional information displayed when the mouse hovers over a region. Immediate child groups of the current level are shown with larger labels at the bottom of the space. Clicking within the spaces opens them to reveal the child groups and rooms within. Each level down, the group or room gets lighter, suggesting its depth in the tree. Area and capacity (for rooms) are also shown if they fit – capacity by drawing a darker box within the room itself. Finally, small white tags appear at the upper right-hand corner of a group or room if a note has been added. The outline view at the right-hand side shows a list of the groups and rooms that are visible. The mouse-over shows information about the lowest-level space or room that is visible.



Fig. 4.4: Drilling down

After the entire depth of a space has been revealed (down to room level), an additional click will zoom the treemap onto the next level down in that area. Repeated clicking will allow zooming to the lowest group level in the map. Right-clicking zooms back out to parent groups, and when at the top level will “close” the groups again, hiding child groups and rooms.

Fig. 4.5: Zooming in

Finally, hitting the enter key while hovering over a space opens a text box that allows a user to add a note to that space. Adding a note puts a tag on the space, and a mouse-over will show the note as well. The modified information can then be saved back to the original .tsv file.



Fig. 4.6: Annotation

This project shows the limitations of a pure data visualization approach in a design setting. While the tool does quickly reveal programmatic, size, and capacity relationships, the lack of an ability to interactively edit the tree structure or rearrange the spaces makes the tool less useful once the design moves beyond the initial research stage. In addition, the treemapping method is not inherently suggestive of the architectural layout, which is helpful in that it prevents the diagram from being too formally suggestive, but it also hides much of the important programmatic information (adjacency, room shape) if used to visualize an existing design. In the end, such a visualization has such focused utility that it is better developed as a tool within a larger software environment.



4.2 Springlinkage

As an initial exercise in exploring interaction methods, I chose to design a tool in which the implicit environmental rules are provided by a force-directed (spring) algorithm. This algorithm works upon networks of nodes, connected by edges, that interact with each other through a system of repulsive and attractive forces, as if the edges were springs and the nodes electrically charged particles. The graph organization of edge-node relationships, the counter-graph of the repulsive forces, and the external application of additional forces (such as gravity) allow for a wide range of behaviors. Spring algorithms use an iterative integration method to find the least-energy state of the graph layout. Put more simply, at each step the nodes are individually moved a tiny distance in accordance with the local forces acting upon each node. The entire network is updated repeatedly until the forces on the nodes are reduced to an equilibrium state.

Spring algorithms have many advantages for architects. They are inherently spatial, which makes finding proper applications a fairly direct process. As they are based upon physical rules (Hooke's law for springs and Coulomb's inverse-square law for charged particles), the behavior of the graph network is intuitive. Spring algorithms are also relatively robust and deal well with a wide range of inputs. In architecture they can be used for distributing points and lines over a surface or for pattern generation, but their most common use is in interactively modeling tensile, tensegrity, and shell structures. The latter is done by reversing the gravity of the system and finding the least-energy result – essentially a computational form of a hanging chain model. I was interested in creating an interface that was entirely additive, making the algorithm


an implicit part of the environment at all times. Computational design methods and software rarely use additive interfaces, as they require a great deal of additional programming, design, and support. This sort of approach is required, however, if the goal is a highly interactive environment that enables intuitive exploration of a system or algorithm. Currently, most design projects that involve computation rely on “set up and run” situations involving a myriad of constants, sliders, and initial geometric setup. This project explored the alternative: a step-by-step additive process in which the computational component is embedded within the design environment, providing implicit rules that guide a design at every point. Essentially, these interfaces provide methods of “closing the loop” of iterative computational processes, similar to stochastic methods such as genetic algorithms or simulated annealing. The difference is that here the search takes place within the user's visual cortex, and decisions are made intuitively, not automatically. I chose, once again, to write this program in Processing, as it had a force-directed algorithm library already available (the “Physics” particle system engine by Jeffrey Traer Bernstein). As Processing is based upon Java, there is also the opportunity for easy sharing via the internet, or even the possibility of producing multiuser versions of the program by porting it to JavaScript and utilizing the canvas and WebSocket technologies available in HTML5.
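The relaxation loop described above – inverse-square repulsion between nodes, Hooke's-law springs along edges, and small damped integration steps repeated until equilibrium – can be sketched in a few lines. This is a minimal illustrative sketch in Python rather than the Processing/Traer implementation the project actually used; the class name and force constants are invented for the example.

```python
import math

class SpringGraph:
    """Force-directed layout: Hooke's-law springs + inverse-square repulsion."""

    def __init__(self, k_spring=0.1, k_repel=100.0, damping=0.85):
        self.pos, self.vel, self.fixed = [], [], []
        self.springs = []  # (node_i, node_j, rest_length)
        self.k_spring, self.k_repel, self.damping = k_spring, k_repel, damping

    def add_node(self, x, y, fixed=False):
        self.pos.append([x, y]); self.vel.append([0.0, 0.0]); self.fixed.append(fixed)
        return len(self.pos) - 1

    def link(self, i, j, rest):
        self.springs.append((i, j, rest))

    def step(self, dt=0.1):
        forces = [[0.0, 0.0] for _ in self.pos]
        # Coulomb-style repulsion between every pair of nodes
        for i in range(len(self.pos)):
            for j in range(i + 1, len(self.pos)):
                dx = self.pos[j][0] - self.pos[i][0]
                dy = self.pos[j][1] - self.pos[i][1]
                d2 = dx * dx + dy * dy or 1e-9
                d = math.sqrt(d2)
                f = self.k_repel / d2
                forces[i][0] -= f * dx / d; forces[i][1] -= f * dy / d
                forces[j][0] += f * dx / d; forces[j][1] += f * dy / d
        # Hooke's-law spring force along each edge
        for i, j, rest in self.springs:
            dx = self.pos[j][0] - self.pos[i][0]
            dy = self.pos[j][1] - self.pos[i][1]
            d = math.hypot(dx, dy) or 1e-9
            f = self.k_spring * (d - rest)  # positive = attraction
            forces[i][0] += f * dx / d; forces[i][1] += f * dy / d
            forces[j][0] -= f * dx / d; forces[j][1] -= f * dy / d
        # Damped Euler integration: move each free node a small step
        for i in range(len(self.pos)):
            if self.fixed[i]:
                continue
            self.vel[i][0] = (self.vel[i][0] + forces[i][0] * dt) * self.damping
            self.vel[i][1] = (self.vel[i][1] + forces[i][1] * dt) * self.damping
            self.pos[i][0] += self.vel[i][0] * dt
            self.pos[i][1] += self.vel[i][1] * dt
```

Calling `step()` repeatedly settles the network toward a least-energy state; fixed nodes play the role of the anchored points used in the hanging chain experiments.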



Fig. 4.7: Two dimensional spring tool

I started with a two-dimensional version of the program to simplify the process while I was experimenting with interaction methods. The graphics chosen were very simple grayscale representations, to make the movement of nodes as clear as possible. Most uses of force-directed algorithms start with a given set of points or objects to be worked with. The initial strategy I used for additive graph generation was a circle surrounding the cursor that described a radius within which any added node would automatically be connected. To simplify the network and give the graph representation a certain flexibility, repulsive forces are added as a complement to the edge spring forces, essentially making the nodes act as semi-rigid frame elements. This allowed for the additive generation of triangulated structures that maintained a


limited rigidity, but could be driven to failure if the stresses were high enough. The added nodes could either be free to move or fixed in space, which provided some additional flexibility (particularly when playing with hanging chain models). I added options to draw links manually between nodes, and to delete links and nodes by right-clicking on them. I also provided a panning function that offered an unlimited canvas for the user to work upon, along with the ability to control several constants in the system, such as spring force, spring size, and gravity.

Fig. 4.8: Three dimensional spring tool

Once this additive interaction method had been developed, a three-dimensional version was written. In the interest of keeping the graphics as similar as possible, custom axonometric projection code was written for the 3D rendering instead of using available 3D rendering libraries such as OpenGL. The initial “additive radius” method for generating edges was dropped, as it was unwieldy in large three-dimensional structures, in favor of a point-to-point method in which links are manually drawn and the software interpolates additional nodes to make longer chains. This method worked well with the primary foreseen use of the three-dimensional version, which was the additive exploration of hanging chain structures.

When the program was shown to friends and colleagues, the difference in apparent utility between the 2D and 3D versions of the program was striking. Most people saw the 2D version as a game or toy, or perhaps as an educational tool, while the 3D version was more easily seen as a method for interactively exploring hanging chain structures. This may be because 3D axonometric views are easily associated with CAD or BIM software, while 2D side-scrolling views are found almost entirely within the domain of arcade games. Even given identical functionality, when viewed, one became a tool and the other a game. This suggests that our perception of a program's utility has less to do with functionality than with graphics, projection, and interaction methods. A program for use in the workplace is expected to have a “serious” looking user interface, which may be less intuitive or enjoyable than one provided for recreational software. This is a hurdle that invites further examination if game interface methodologies are to be used in architectural practice.
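For reference, the core of a custom axonometric projection like the one written for the 3D version amounts to a simple linear map from 3D coordinates to the 2D picture plane. The axis convention and angle below are illustrative assumptions; the thesis tool's exact projection constants are not documented.

```python
import math

def axonometric(x, y, z, angle=math.radians(30)):
    """Project a 3D point to 2D drawing coordinates.

    x and y recede along mirrored axes tilted `angle` above the
    horizontal; z maps straight up (negated because screen y
    conventionally increases downward)."""
    px = (x - y) * math.cos(angle)
    py = (x + y) * math.sin(angle) - z
    return px, py
```

Because the map is linear and involves no perspective divide, node motion under the spring solver reads consistently anywhere on screen, which is what makes the 3D version feel like a drafting view rather than a camera.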

4.3 TensorPatterns

This third project looked to combine some of the lessons learned in the previous two, researching the use of implicit rules and an additive interface to produce controllable patterns. This



project avoided the use of computation for direct optimization, exploring instead the possibility of a game-like interface for less constrained design tasks.

Fig. 4.9: Tracing tensor field streamlines

I chose tensor field streamlines as the basis for this research project. Tensor fields and streamlines are used extensively in medical visualization, flow dynamics research, and computer modeling due to their implicit clarity and their ability to be edited procedurally while maintaining integrity. Tensors, as they are applied in this project, can be understood as a “bundle” of vectors. This project works in two dimensions and uses two vectors at right angles to one another, making the field essentially similar to a pure vector field. However, the two vectors need not be at right angles, and can have different magnitudes, which allows for additional effects when generating streamlines. Tensor fields are a generalization of scalar



and vector fields, meaning that they apply a tensor to each point in space on a manifold (in this case, a simple 2D plane). Given that a tensor field exists at every point in a manifold simultaneously, there are multiple methods for showing the tensor direction and scale across the manifold. Streamlines are a common method that involves picking a seed point and then using integration to follow a path until an end condition is met. Streamlines are commonly used to show certain patterns in MRI scans, flow directions in fluid and wind dynamics, and primary stress directions in structural analysis. They are also commonly used in 3D modeling to remesh surfaces. Recently, a team of researchers from Oregon State University, Arizona State University, and ETH Zurich presented software at SIGGRAPH 2008 that used tensor field streamlines to procedurally generate street patterns. I chose to work in a similar field, generating “Nolli”-style maps, due to the immediate comprehension of goals, limitations, and scale that this format provides. The methods described below would work equally well on problems at other scales, however, such as the discretization of complex panelized facade surfaces, or simple pattern generation. (Jobard and Lefer 1997, Zhang et al. 2007, Chen et al. 2008)

The pipeline for this project involves a five-step process, which is explained in detail below. First, initial input in the form of lines and singularity points is provided by the user. The program then generates a tensor field using this input. The next step traces the streamlines, starting at seed points also given by the user. Once all of the streamlines have been generated, the program finds complete regions within the network and assigns each a bitmap based upon both user input and geometric information about the region itself. The regions are then combined based upon radius information provided with the bitmaps, and finally the


entire network is rendered to the screen, using a 2D or 3D projection method. At this point there is the option to export the completed network in DXF format to enable interoperability with other programs. An automatic peer-to-peer system also allows multiple users on the same local area network to work on the same project simultaneously in real time, without needing to refresh or update.

Fig. 4.10: Pipeline

Tensor Field Generation

Tensors in this project are represented by 2×2 matrices, each of which describes the major and minor eigenvectors at that point. In this project those eigenvectors are perpendicular; thus tracing streamlines within the tensor field will result in a network where all of the intersections are also perpendicular. The direction of these eigenvectors is controlled by user input in the form of lines, polylines, and singularity points. Lines and polylines affect the tensor field in a direct manner, by rotating the vectors to be parallel with the line segment. Singularities have different effects based upon their type. Radial and node singularities produce a network with


concentric lines in one vector direction, and radial lines in the other. The user also has the option to produce wedge, trisector, and saddle singularity forms. The generation of the field from these points and lines uses an exponential decay function to allow for a smooth interpolation between vector directions. Singularities are also given the option of having a sharper concentric boundary, at the user's discretion. For more information on the particular matrix calculations and functions used to generate the tensor field, please consult the appendix of this thesis, or Zhang et al., 2007.

Fig. 4.11: Tensor field singularity types
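The basis-field-plus-decay blending described above can be sketched as follows, using the symmetric, trace-free tensor form from Zhang et al. (2007), which the project cites. The decay constant and function names are assumptions for illustration, not the thesis's exact values.

```python
import math

def line_tensor(theta):
    """Basis tensor for a constraint line with direction theta.
    The major eigenvector of this symmetric, trace-free 2x2 matrix
    points along theta (after Zhang et al. 2007)."""
    return [[math.cos(2 * theta), math.sin(2 * theta)],
            [math.sin(2 * theta), -math.cos(2 * theta)]]

def field_at(p, constraints, decay=0.01):
    """Blend constraint tensors at point p with exponential falloff.
    `constraints` is a list of (anchor_point, theta) pairs."""
    t = [[0.0, 0.0], [0.0, 0.0]]
    for (cx, cy), theta in constraints:
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
        w = math.exp(-decay * d2)  # exponential decay with distance
        b = line_tensor(theta)
        for i in range(2):
            for j in range(2):
                t[i][j] += w * b[i][j]
    return t

def major_eigenvector(t):
    """Direction of the major eigenvector of a trace-free symmetric
    tensor [[a, b], [b, -a]]: theta = atan2(b, a) / 2."""
    theta = math.atan2(t[0][1], t[0][0]) / 2.0
    return math.cos(theta), math.sin(theta)
```

Because the tensor form is sign-symmetric (theta and theta + π give the same matrix), constraint lines have no inherent direction, which is what lets streamlines cross the field consistently in both directions.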

Streamline Integration

Once the tensor field has been generated, streamlines are placed by an algorithm that attempts to create an evenly-spaced grid across the entire field. This is done using a modified version of the Jobard and Lefer algorithm, developed in 1997 at the Laboratoire d'Informatique du Littoral. The algorithm starts with an initial seed point and uses a modified Euler integrator to trace a streamline along the major eigenvector in both directions until it reaches a boundary point. A new seed point is then generated at a given distance (dsep) from the initial seed, perpendicular to the streamline, and a minor eigenvector streamline is traced in both directions. This process is then repeated, alternating the tracing directions between major and minor eigenvectors. If, at any point, the streamline being traced comes within a certain distance (set



up as a percentage of dsep) of an existing streamline, the streamline is stopped. The search for nearby streamlines is sped up by grouping the points in each streamline into “buckets” arranged in a grid over the tensor field; the search algorithm looks only in the nine buckets surrounding the point itself. If a generated seed point falls within this minimum distance, new seed points are generated progressively farther down the streamline until a valid one is found. This process is repeated until no more valid seed points can be generated. For more information on the algorithm used, please consult the appendix, Jobard and Lefer 1997, or Chen et al., 2008.

Fig 4.12: Streamline network generation
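The inner Euler-integration loop of this process can be sketched as follows. This is a simplified illustration that omits the dsep spacing test and bucket lookup; the function names and step size are invented for the example.

```python
def trace_streamline(seed, field_dir, step=0.5, max_steps=200,
                     in_bounds=lambda p: True):
    """Trace a streamline by repeated Euler steps along the field.

    `field_dir(p, prev)` returns a unit eigenvector at point p.
    Eigenvectors have no inherent orientation, so each step is
    sign-matched against the previous direction to keep the trace
    marching the same way instead of flipping back on itself."""
    pts = [seed]
    p, prev = seed, None
    for _ in range(max_steps):
        d = field_dir(p, prev)
        if prev and d[0] * prev[0] + d[1] * prev[1] < 0:
            d = (-d[0], -d[1])  # flip to stay consistent with last step
        p = (p[0] + step * d[0], p[1] + step * d[1])
        if not in_bounds(p):
            break  # boundary condition reached
        pts.append(p)
        prev = d
    return pts
```

In the full algorithm this trace would also stop as soon as the new point lands within the minimum spacing of an existing streamline, which is what produces the evenly spaced grid.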

Region Generation

Once the streamline network is complete, a second set of algorithms finds the interior regions created by the network and produces a triangulated surface within each region. The first step in this process produces a bitmap of the streamline network. A recursive flood fill



algorithm then finds each separate interior area, fills it with a unique color that serves as a tag for that area, and stores bounding box information for these regions. A Moore neighbor tracing algorithm then traverses the outer edge of the fill, making a list of points that describe the boundary of the region in question. The bounding box and list of points are then placed in a dictionary data structure where the key is the tag generated earlier. This allows for rapid lookup of region information given a specific point in space. These algorithms can be found in the appendix. Fig. 4.13: Region generation
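The fill-and-tag step can be sketched as follows. An iterative queue is used here instead of recursion (to stay within stack limits on large bitmaps), and the grid encoding and names are assumptions for illustration; the thesis version, including the Moore-neighbor boundary trace, is in the appendix.

```python
from collections import deque

def tag_regions(grid):
    """Label connected empty cells of a rasterized streamline network.

    `grid` is a list of strings: '#' marks streamline pixels, '.' marks
    empty space. Returns (labels, boxes): each enclosed region gets a
    unique integer tag, and boxes maps each tag to its bounding box."""
    h, w = len(grid), len(grid[0])
    labels = [[None] * w for _ in range(h)]
    boxes = {}
    tag = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == '#' or labels[sy][sx] is not None:
                continue
            tag += 1
            box = [sx, sy, sx, sy]  # min x, min y, max x, max y
            q = deque([(sx, sy)])
            labels[sy][sx] = tag
            while q:  # breadth-first flood fill
                x, y = q.popleft()
                box[0] = min(box[0], x); box[1] = min(box[1], y)
                box[2] = max(box[2], x); box[3] = max(box[3], y)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and grid[ny][nx] != '#' and labels[ny][nx] is None:
                        labels[ny][nx] = tag
                        q.append((nx, ny))
            boxes[tag] = tuple(box)
    return labels, boxes
```

The integer tags play the role of the unique fill colors described above: given any point in space, its region's boundary and bounding box can then be looked up in constant time.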

Region Layering The first step in rendering the above information is generating the textures that will be mapped onto the generated regions. This is done very rapidly on the graphics card of the computer by using OpenGL pixel shaders. Graphics shaders are particularly efficient because



they use the parallel processing abilities of graphics cards. This process also uses user input, in the form of bitmaps attached to points that each have a given radius of effect. Each region falling within this radius will have the bitmap applied, and multiple bitmaps (up to four in total) can be combined on a single region. Each region first finds the closest four user-provided bitmaps whose area of influence contains the region in question. The pixel shader then combines these bitmaps into a single texture via an additive layering process and applies the result to the region. Regions that are outside the boundaries of all provided points, or that are particularly small (less than a quarter of the average size), are left black. Particularly large regions (more than four times the average area) are left open to represent parks.
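The additive layering pass can be approximated on the CPU as below, as a stand-in for the GLSL pixel shader. The shader's exact blend math is not documented, so plain clamped addition of up to four grayscale layers is assumed here for illustration.

```python
def blend_textures(layers):
    """Additively combine up to four grayscale layers for one region,
    clamping each pixel to the displayable 0-255 range. Each layer is
    a 2D list of equal dimensions."""
    assert 1 <= len(layers) <= 4
    h, w = len(layers[0]), len(layers[0][0])
    out = [[0] * w for _ in range(h)]
    for layer in layers:
        for y in range(h):
            for x in range(w):
                out[y][x] = min(255, out[y][x] + layer[y][x])
    return out
```

On the GPU the same per-pixel sum runs once per fragment in parallel, which is why the shader version is fast enough to re-texture every region interactively.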

Fig. 4.15: Perspective View



Rendering & Annotation

The regions, streamlines, and control lines and points are rendered using OpenGL. This allows not only the use of the pixel shaders, but also provides the opportunity for non-orthogonal projections of the surface. Extra methods have been provided to experiment with this, such as perspective views and a hyperbolic zoom. There is also a notation function that allows a user to attach a text note to a particular area of the streamline field. Finally, the polylines, quads, and notes can be exported in .DXF format.

4.4 Conclusions

The responsive and intuitive quality of interaction with this tool suggests that tensor field-based methods are a flexible and useful way of controlling networks of lines globally with a minimum of control points, while maintaining required alignments and relationships. As stated above, other uses for this tool, particularly in panelizing or discretizing surfaces, deserve further study. Currently, the tools available for discretizing complex surfaces rely on simple projections, geodesic lines, or the initial isoparametric information. This tool would allow for the incorporation of additional input to satisfy requirements not directly related to geometry, while maintaining a “basis” tensor field generated by some geometric requirement, such as geodesic lines, stress tensors, or lines of steepest descent. It can also be used as a novel way to convert images into a line network or mesh for fabrication purposes. Finally, the ability to easily incorporate a peer-to-peer networking system without having to worry about parent-child relationships or directed graphs is powerful – any designer working



within a parametric or CAD system knows that one of the biggest impediments to collaborative design is the necessity to “lock” whatever region of the project is being worked upon. The more flexible peer-to-peer method used in this project allows updates to happen without refreshing, as the communication between nodes in the network is fairly minimal (consisting of the tensor field and region basis points, and the notes), and the rest of the information about the pattern is generated on the local machine. While this method of collaborative design might not yet be possible for complex parametric design software, it is certainly an admirable goal.

The precedent research, case studies, and tools written during this process strongly suggest that, with the powerful capabilities of new computers and graphics cards, the incorporation of real-time feedback into design interfaces is almost a certainty. There is also a trend toward providing analysis and communication tools directly within design software packages, either as an inherent component or as a plug-in. It takes only a small leap of imagination to look at a tool such as Autodesk Vasari and imagine design software that is more responsive, collaborative, and quantitative. As these design environments become more immersive and data-rich, the interface and visualization methods must adapt to allow the querying and analysis of data. The incorporation of implicit rules and rapid feedback into such a system would allow the designer to intuitively incorporate quantitative information into a design without separating the algorithmic process from the interactive process. The tools described above represent one strategy for inserting human control deep within an algorithmic process, using the maximum capability of both designer and machine.



5. Image Credits

Chapter 1
Fig. 1.1: Parthenios, P. (2005) Conceptual Design Tools for Architects. M.DesS Thesis, Harvard GSD.
Fig. 1.2: http://www.iaacblog.com/digitatools/2010/12/ecotect-final-project-by-diego-lopez/

Chapter 2
Fig. 2.1: http://home.comcast.net/~jpittman2/pacman/pacmandossier.html

Chapter 3
Fig. 3.1: http://www.research.ibm.com/visual/projects/history_flow/
Fig. 3.2: Image provided by the author.
Fig. 3.3: Image courtesy of Utile.
Fig. 3.4: Image courtesy of Utile.
Fig. 3.5: Image courtesy of Utile.
Fig. 3.6: Image courtesy of Matt Jezyk and Autodesk Labs.
Fig. 3.7: Image courtesy of Panagiotis Michalatos / Sawapan.


Fig. 3.8: Image provided by the author.

Chapter 4
All images provided by the author, with the exception of:
Fig. 4.1: Keller, S. (2006) Fenland tech: architectural science in postwar Cambridge. Grey Room, 23.
Fig. 4.2: http://newsmap.jp
Fig. 4.9: Mebarki, A., Alliez, P. and Devillers, O. (2005) Farthest Point Seeding for Efficient Placement of Streamlines. IEEE Visualization Conference 2005.
Fig. 4.11: Zhang, E., Hays, J. and Turk, G. (2007) Interactive Tensor Field Design and Visualization on Surfaces. IEEE Transactions on Visualization and Computer Graphics, Vol. 13, No. 1.



6. Bibliography

Alex, J. Hybrid Sketching: A New Middle Ground Between 2- and 3-D. PhD Thesis, Massachusetts Institute of Technology.

Autodesk Labs Website, http://labs.autodesk.com/

Bohannon, J. (2009) "Gamers Unravel the Secret Life of Protein". Wired 17-05.

Buxton, B. (2007) Sketching User Experiences. Morgan Kaufmann.

Chen, G. et al. (2008) Interactive Procedural Street Modeling. SIGGRAPH 2008.

Dartnell, L. (2008) How online games are solving uncomputable problems. New Scientist 2681.

Fry, B. (2008) Visualizing Data: Exploring and Explaining Data with the Processing Environment. O'Reilly Media.

Fry, B. Treemapping Website. http://benfry.com/writing/treemap/



Gerber, D. J. (2007) Parametric Practices: Models for Design Exploration in Architecture: Volumes I & II. D.Des Thesis, Harvard GSD.

Jobard, B. and Lefer, W. (1997) Creating Evenly-Spaced Streamlines of Arbitrary Density. Laboratoire d'Informatique du Littoral.

Keller, S. (2006) Fenland tech: architectural science in postwar Cambridge. Grey Room, 23:40-65.

Keller, S. (2005) Architectural Theory at the University of Cambridge, 1960-75. PhD Thesis, Harvard University.

Mebarki, A., Alliez, P. and Devillers, O. (2005) Farthest Point Seeding for Efficient Placement of Streamlines. IEEE Visualization Conference 2005.

McGonigal, J. "Jane McGonigal: Gaming can make a better world | Video on TED.com" http://www.ted.com/talks/jane_mcgonigal_gaming_can_make_a_better_world.html

Nørretranders, T. (1998) The User Illusion. Viking.

Parthenios, P. (2005) Conceptual Design Tools for Architects. M.DesS Thesis, Harvard GSD.

Pittman, J. "The Pac-Man Dossier" http://home.comcast.net/~jpittman2/pacman/pacmandossier.html

Sakamoto, T. and Ferré, A., editors. (2007) From Control to Design: Parametric/Algorithmic Architecture. Actar-D.

Shirley, P. et al. (2009) Fundamentals of Computer Graphics, Third Edition. A K Peters.

Shneiderman, B. et al. (2009) Designing the User Interface. Addison Wesley.

Shneiderman, B. "Treemaps for space-constrained visualization of hierarchies", http://www.cs.umd.edu/hcil/treemap-history/

Steele, J. and Iliinsky, N., editors. (2010) Beautiful Visualization: Looking at Data through the Eyes of Experts (Theory in Practice). O'Reilly Media.

Sutherland, I. (1963) Sketchpad: A Man-Machine Graphical Communication System. Technical Report 296, MIT Lincoln Lab.

Tidwell, J. (2005) Designing Interfaces: Patterns for Effective Interaction Design. O'Reilly Media.

Tufte, E. R. (1986) The Visual Display of Quantitative Information. Graphics Press.

Tufte, E. R. (1990) Envisioning Information. Graphics Press.

Tufte, E. R. (1997) Visual Explanations: Images and Quantities, Evidence and Narrative. Graphics Press.

UW Center for Game Science, FoldIt website. http://www.fold.it/

Viégas, F., Wattenberg, M. and Dave, K. (2004) Studying Cooperation and Conflict between Authors with history flow Visualizations. CHI 2004.


Ware, C. (2008) Visual Thinking for Design. Morgan Kaufmann.

Woodbury, R., Gun, O. Y., Peters, B. and Sheikholeslami, M. (2010) Elements of Parametric Design. Routledge.

Zhang, E., Hays, J. and Turk, G. (2007) Interactive Tensor Field Design and Visualization on Surfaces. IEEE Transactions on Visualization and Computer Graphics, Vol. 13, No. 1.



7. Appendices

A. Tensor Field Generation and Streamline Integration Code

// generate streamlines
private void Setup()
{
    // get lists ready
    sl0.Clear(); sl1.Clear(); seeds.Clear();
    Vector2d seed = m.seeds[0].p;
    dtest[0] = dsep[0] * septol;
    dtest[1] = dsep[1] * septol;

    // set up limits
    if (m.lim[1].p.X - m.lim[0].p.X < 50.0) m.lim[1].p.X = m.lim[0].p.X + 50.0;
    if (m.lim[1].p.Y - m.lim[0].p.Y < 50.0) m.lim[1].p.Y = m.lim[0].p.Y + 50.0;
    x0 = m.lim[0].p.X;
    x1 = m.lim[1].p.X;
    y0 = m.lim[0].p.Y;
    y1 = m.lim[1].p.Y;
    dx = (int)Math.Floor((x1 - x0) / 10) + 1;
    dy = (int)Math.Floor((y1 - y0) / 10) + 1;

    // initialize testpoints
    for (int k = 0; k < testpoints.Length; ++k)
    {
        testpoints[k] = new List<Vector2d>[dx, dy];
        for (int j = 0; j < dy; ++j)
        {
            for (int i = 0; i < dx; ++i)
            {
                testpoints[k][i, j] = new List<Vector2d>();
            }
        }
    }

    // initial streamline
    int cnt = 200; // safeguard for infinite loops
    while (sl0.Count == 0 && cnt > 0)
    {
        cnt--;
        List<Vector2d>[] initsl;
        foreach (CPoint sn in m.seeds)
        {
            initsl = CreateStreamLine(sn.p, 0);
            if (initsl[0].Count > 1) { sl0.Add(initsl[0]); }
            if (initsl[1].Count > 1) { sl0.Add(initsl[1]); }
        }
        int c2 = 200; // safeguard for infinite loops
        while (sl0.Count == 0 && c2 > 0)
        {
            c2--;
            Vector2d initseed = new Vector2d(Rnd * 200, Rnd * 200);
            initsl = CreateStreamLine(initseed, 0);
            if (initsl[0].Count > 3) { sl0.Add(initsl[0]); }
            if (initsl[1].Count > 3) { sl0.Add(initsl[1]); }
        }
    }

    if (sl0.Count == 0) return;

    // initialize hyperstream seed finding
    int dir = 1;
    int slcount0 = 1;
    int slcount1 = 0;
    int seedcount = 0;
    int cyclecount = 0;
    List<Vector2d> cs = sl0[0];
    // Jobard method
    bool finished = false;
    bool foundseed = false;



    // do this until out of streamlines
    while (!finished)
    {
        foundseed = false;
        // do this for each segment until end of count or seed found
        while (seedcount < cs.Count - 1 && !foundseed)
        {
            // make a point dsep away
            Vector2d testp = cs[seedcount];
            Vector2d testfrom = testp;
            Vector2d testv = cs[seedcount + 1] - testp;
            Vector2d poff = new Vector2d(-testv.Y, testv.X);
            poff.Normalize();
            if (seedcount % 2 == 0) { testp += (poff * dsep[dir]); }
            else { testp -= (poff * dsep[dir]); }
            // if not close to any existing streamline and in bounds then use it
            if (pointNear(testp, dir, dtest[dir]) && pointContained(testp))
            {
                seed = testp;
                foundseed = true;
            }
            // increment seedcount
            seedcount++;
        }

        bool pladded = false;
        // if seed found make new streamlines
        if (foundseed)
        {
            List<Vector2d>[] addsl = CreateStreamLine(seed, dir);
            // check to make sure more than one segment exists in the created streamlines
            if (dir == 0)
            {
                if (addsl[0].Count > 3) { sl0.Add(addsl[0]); pladded = true; cyclecount = 0; seeds.Add(seed); }
                if (addsl[1].Count > 3) { sl0.Add(addsl[1]); pladded = true; cyclecount = 0; seeds.Add(seed); }
            }
            else if (dir == 1)
            {
                if (addsl[0].Count > 3) { sl1.Add(addsl[0]); pladded = true; cyclecount = 0; seeds.Add(seed); }
                if (addsl[1].Count > 3) { sl1.Add(addsl[1]); pladded = true; cyclecount = 0; seeds.Add(seed); }
            }
        }

        // if a streamline was added or no seed was found, alternate directions and update cs
        if (pladded || !foundseed)
        {
            if (dir == 0)
            {
                dir = 1;
                if (sl0.Count != 0)
                {
                    slcount0 = (slcount0 + 1) % sl0.Count;
                    cs = sl0[slcount0];
                }
                seedcount = 0;
            }
            else if (dir == 1)
            {
                dir = 0;
                if (sl1.Count != 0)
                {
                    slcount1 = (slcount1 + 1) % sl1.Count;
                    cs = sl1[slcount1];
                }
                seedcount = 0;
            }
            cyclecount++;
            if (cyclecount > slstep)
            {
                Cleanup();
                finished = true;
            }
        }
    }
}

// checks to see if any existing point lies within dtest of the test point
bool pointNear(Vector2d _testp, int _pndir, double _dtest)
{
    int testx = (int)Math.Floor((_testp.X - x0) / 10);
    int testy = (int)Math.Floor((_testp.Y - y0) / 10);
    for (int j = testy - 1; j <= testy + 1; j++)
    {
        for (int i = testx - 1; i <= testx + 1; i++)
        {
            if (i >= 0 && i < dx && j >= 0 && j < dy)
            {
                for (int k = 0; k < testpoints[_pndir][i, j].Count; k++)
                {
                    if (Dist(_testp, testpoints[_pndir][i, j][k]) < _dtest) { return false; }
                }
            }
        }
    }
    return true;
}

bool pointNear(Vector2d _testp, List<Vector2d> _sl, double _dist)
{
    if (_sl.Count > 10)
    {
        for (int i = _sl.Count - 9; i >= 0; --i)
        {
            Vector2d slp = _sl[i];
            if (Dist(_testp, slp) < _dist) return false;
        }
    }
    return true;
}

// checks to see if point is within the space
bool pointContained(Vector2d _testp)
{
    double testx = _testp.X;
    double testy = _testp.Y;
    if (testx >= x0 && testx < x1 && testy >= y0 && testy < y1)
    {
        return true;
    }
    else
    {
        return false;
    }
}

// closest point on segment AB to P
Vector2d GetClosetPoint(Vector2d A, Vector2d B, Vector2d P, bool segmentClamp)
{
    Vector2d AP = P - A;
    Vector2d AB = B - A;
    double ab2 = AB.X * AB.X + AB.Y * AB.Y;
    double ap_ab = AP.X * AB.X + AP.Y * AB.Y;
    double t = ap_ab / ab2;
    if (segmentClamp)
    {
        if (t < 0.0f) t = 0.0f;
        else if (t > 1.0f) t = 1.0f;
    }
    Vector2d Closest = A + AB * t;
    return Closest;
}

List<Vector2d>[] CreateStreamLine(Vector2d _p, int _csldir)
{
    int vdir = 0;
    double _dt = dt;
    List<Vector2d>[] sl = new List<Vector2d>[2];
    // integrate in both directions from the seed point
    for (vdir = 0; vdir < 2; vdir++)
    {
        Vector2d vp = _p; // temp copy
        Vector2d[] ev = GetEigens(vp);
        sl[vdir] = new List<Vector2d>();
        if (vdir == 1) _dt = -_dt;
        int d1 = _csldir;
        int d2 = ((_csldir + 1) % 2);
        Vector2d dir0 = ev[d1];
        Vector2d dir1 = dir0;
        double dd1 = 0.0;
        double dd2 = 0.0;
        sl[vdir].Add(vp);
        // check to see not near another point of same streamline dir
        bool result = pointNear(vp, _csldir, dtest[_csldir]);
        while (pointContained(vp) && result && sl[vdir].Count < stepmax)
        {
            Vector2d.Dot(ref dir0, ref ev[d1], out dd1);
            if (dd1 < 0.0) _dt = -_dt;
            // euler integrator:
            //   dir0 = ev[d1]; vp = Vector2d.Add(vp, (dir0 * _dt));
            // modified euler (midpoint):
            double _dt2 = _dt * 0.5;
            dir0 = ev[d1];
            Vector2d vt = Vector2d.Add(vp, (dir0 * _dt));
            vp = Vector2d.Add(vp, (dir0 * _dt2));
            Vector2d[] et = GetEigens(vt);
            Vector2d.Dot(ref dir1, ref et[d1], out dd2);
            if (dd2 < 0.0) _dt2 = -_dt2;
            dir1 = et[d1];
            vp = Vector2d.Add(vp, (dir1 * _dt2));
            if (pointContained(vp)) sl[vdir].Add(vp);
            // check to see if close to other streamlines of same dir
            result = pointNear(vp, d1, dtest[d1]);
            // or to points on the same streamline
            if (result == true) { result = pointNear(vp, sl[0], dt * 0.5); }
            if (result == true && vdir == 1) { result = pointNear(vp, sl[1], dt * 0.5); }
            ev = GetEigens(vp);
        }
    }

    // add points to test grid
    for (vdir = 0; vdir < 2; vdir++)
    {
        if (sl[vdir].Count > 3)
        {
            foreach (Vector2d addpt in sl[vdir])
            {
                int ptx = (int)Math.Floor((addpt.X - x0) / 10);
                int pty = (int)Math.Floor((addpt.Y - y0) / 10);
                testpoints[_csldir][ptx, pty].Add(addpt);
            }
        }
    }
    return sl;
}

// generate tensor field and get eigenvectors
Vector2d[] GetEigens(Vector2d _p)
{
    // Actual tensor basis fields:
    // given a direction u,v at point p0, vector magnitude M, and angle A
    //   T(p) = M * ( [ cos(2A)   sin(2A) ]
    //                [ sin(2A)  -cos(2A) ] )
    // for circular fields centered at x,y
    //   T(p) = M * ( [ y^2-x^2   -2xy        ]
    //                [ -2xy      -(y^2-x^2)  ] )



    // For an exponential decay function with field strength S and decay factor D,
    // multiply by:  S * exp(-D * (p - p0)^2)
    // Computing eigenvectors: for a matrix
    //   [a, b]
    //   [c, d]
    // the eigenvectors are:
    //   [-((-a + d + Math.Sqrt(a^2 + 4bc - 2ad + d^2)) / (2c)), 1]
    //   [-((-a + d - Math.Sqrt(a^2 + 4bc - 2ad + d^2)) / (2c)), 1]
    Vector2d[] v = new Vector2d[2];
    double a = 0;
    double b = 0;
    double c = 0;
    double d = 0;
    foreach (CLine l in m.lines)
    {
        // compute tensor basis field for each guide line
        Vector2d tvector = l.n2.p - l.n1.p;
        Vector2d cp = GetClosetPoint(l.n1.p, l.n2.p, _p, true);
        double vl = tvector.Length;
        double angle = Math.Atan(tvector.Y / tvector.X);
        Vector2d dp = cp - _p;
        double dd = dp.LengthSquared;
        double decay = Math.Exp(-df * dd);
        a += decay * vl * Math.Cos(2 * angle);
        b += decay * vl * Math.Sin(2 * angle);
        c += decay * vl * Math.Sin(2 * angle);
        d += decay * vl * -Math.Cos(2 * angle);
    }
    // T(p) = M * ( [ y^2-x^2   -2xy        ]
    //              [ -2xy      -(y^2-x^2)  ] )
    foreach (CSing ns in m.sings)
    {
        Vector2d dp = _p - ns.n.p;
        double dd = dp.LengthSquared;
        double d1 = dp.Length;
        double decay;
        if (ns.bounded)
        {
            if (d1 < ns.decay) decay = 1;
            else decay = 0;
        }
        else
        {
            decay = Math.Exp(-df * dd);
        }
        switch (ns.type)
        {
            case SINGPOINTMODE.CENTRE:
                a += decay * (dp.Y * dp.Y - dp.X * dp.X);
                b += decay * (-2 * dp.X * dp.Y);
                c += decay * (-2 * dp.X * dp.Y);
                d += decay * -(dp.Y * dp.Y - dp.X * dp.X);
                break;
            case SINGPOINTMODE.TRI:
                a += decay * dp.X;
                b += decay * -dp.Y;
                c += decay * -dp.Y;
                d += decay * -dp.X;
                break;
            case SINGPOINTMODE.NODE:
                a += decay * (dp.X * dp.X - dp.Y * dp.Y);
                b += decay * (2 * dp.X * dp.Y);
                c += decay * (2 * dp.X * dp.Y);
                d += decay * -(dp.X * dp.X - dp.Y * dp.Y);
                break;
            case SINGPOINTMODE.SADDLE:
                a += decay * (dp.X * dp.X - dp.Y * dp.Y);
                b += decay * (-2 * dp.X * dp.Y);
                c += decay * (-2 * dp.X * dp.Y);
                d += decay * -(dp.X * dp.X - dp.Y * dp.Y);
                break;
            case SINGPOINTMODE.WEDGE:
                a += decay * (dp.X);
                b += decay * (dp.Y);
                c += decay * (dp.Y);
                d += decay * (-dp.X);
                break;
        }
    }

    // compute eigenvectors
    Vector2d e1 = new Vector2d();
    e1.X = -((-a + d + Math.Sqrt(a * a + 4 * b * c - 2 * a * d + d * d)) / (2 * c));
    e1.Y = 1;
    Vector2d e2 = new Vector2d();
    e2.X = -((-a + d - Math.Sqrt(a * a + 4 * b * c - 2 * a * d + d * d)) / (2 * c));
    e2.Y = 1;
    e1.Normalize();
    e2.Normalize();
    v[1] = e1;
    v[0] = e2;
    return v;
}

// completes streamlines at "hanging" points
// can sometimes unexpectedly reverse
void Cleanup()
{
    int vdir = 0;
    int rdir = 1;
    double _dt = dt;
    int[] stepcount = { 0, 0 };
    stepcount[0] = (int)(dtest[0] / dt + 4);
    stepcount[1] = (int)(dtest[1] / dt + 4);



    int slcount;
    if (sl0.Count > sl1.Count) slcount = sl0.Count;
    else slcount = sl1.Count;
    List<Vector2d> slref;
    List<Vector2d> slnew;
    Vector2d vp = new Vector2d();
    Vector2d vpo = new Vector2d();
    List<List<Vector2d>> slarray = new List<List<Vector2d>>();
    for (int i = 0; i < slcount; i++)
    {
        for (vdir = 0; vdir < 2; vdir++)
        {
            for (int sldir = 0; sldir < 2; sldir++)
            {
                if (vdir == 0) { slarray = sl0; rdir = 1; _dt = dt; }
                else if (vdir == 1) { slarray = sl1; rdir = 0; _dt = dt; }
                if (i < slarray.Count)
                {
                    slref = slarray[i];
                    slnew = new List<Vector2d>();
                    if (sldir == 1) { slref.Reverse(); _dt = -_dt; }
                    vp = slref[slref.Count - 1];
                    vpo = slref[slref.Count - 2];
                    if (vp == slarray[Math.Abs(i - 1)][0]) continue;
                    Vector2d[] ev = GetEigens(vp);
                    Vector2d dir0 = vp - vpo;
                    Vector2d dir1 = dir0;
                    double dd1 = 0.0;
                    double dd2 = 0.0;
                    // check to see not near another point of reverse dir
                    bool result = pointNear(vp, rdir, dt * 0.75);
                    while (pointContained(vp) && result && slnew.Count < stepcount[vdir])
                    {
                        Vector2d.Dot(ref dir0, ref ev[vdir], out dd1);
                        if (dd1 < 0.0) _dt = -_dt;
                        // modified euler
                        double _dt2 = _dt * 0.5;
                        dir0 = ev[vdir];
                        Vector2d vt = Vector2d.Add(vp, (dir0 * _dt));
                        vp = Vector2d.Add(vp, (dir0 * _dt2));
                        Vector2d[] et = GetEigens(vt);
                        Vector2d.Dot(ref dir1, ref et[vdir], out dd2);
                        if (dd2 < 0.0) _dt2 = -_dt2;
                        dir1 = et[vdir];
                        vp = Vector2d.Add(vp, (dir1 * _dt2));
                        if (pointContained(vp)) slnew.Add(vp);
                        // check to see if close to streamlines of the reverse dir
                        result = pointNear(vp, rdir, dt / 2);
                        ev = GetEigens(vp);
                    }
                    if (slnew.Count < stepcount[vdir])
                    {
                        slref.AddRange(slnew);
                    }
                    else // this does not seem to work reliably
                    {
                        int j = slref.Count - 1;
                        int k = 0;
                        while (j > 0 && j > slref.Count - stepcount[vdir])
                        {
                            vp = slref[j];
                            if (pointNear(vp, rdir, dt * 0.75))
                            {
                                slref.RemoveRange(j + 1, k);
                                break;
                            }
                            j--;
                            k++;
                        }
                    }
                }
            }
        }
    }
}
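The closed-form eigenvector expressions used in GetEigens can be sanity-checked numerically. The following Python sketch (illustrative only; it is not part of the thesis code) builds a symmetric, trace-free tensor of magnitude M at angle A, as described in the comments above, and confirms that both closed-form vectors are genuine eigenvectors:

```python
import math

# For a 2x2 matrix [[a, b], [c, d]] the closed-form eigenvectors from the
# thesis comments are [-((-a + d +/- sqrt(a^2 + 4bc - 2ad + d^2)) / (2c)), 1].
# Like the C# code, this assumes c != 0.

def eigenvectors_2x2(a, b, c, d):
    s = math.sqrt(a * a + 4 * b * c - 2 * a * d + d * d)
    e1 = (-((-a + d + s) / (2 * c)), 1.0)  # minor eigenvector (eigenvalue (a+d-s)/2)
    e2 = (-((-a + d - s) / (2 * c)), 1.0)  # major eigenvector (eigenvalue (a+d+s)/2)
    return e1, e2

def is_eigenvector(a, b, c, d, v, tol=1e-9):
    # M v must be parallel to v: the 2D cross product of M v and v is ~0
    mx = a * v[0] + b * v[1]
    my = c * v[0] + d * v[1]
    return abs(mx * v[1] - my * v[0]) < tol

# tensor basis field at angle A with magnitude M (symmetric, trace-free)
A, M = 0.3, 2.0
a, b = M * math.cos(2 * A), M * math.sin(2 * A)
c, d = b, -a
e1, e2 = eigenvectors_2x2(a, b, c, d)
```

For this symmetric trace-free form the major eigenvector is proportional to (cos A, sin A), which is why tracing its field lines recovers streamlines aligned with the guide direction.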

B. Region Generation Code

public void GenFill()
{
    Color blank = Color.Black;
    int colorcounter = 1;
    uint colorstart = 0xFF000000;
    Color fillcolor = new Color();
    System.Drawing.Imaging.BitmapData data = bm.LockBits(
        new Rectangle(0, 0, bm.Width, bm.Height),
        System.Drawing.Imaging.ImageLockMode.ReadWrite,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    int[] bits = new int[data.Stride / 4 * data.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bits, 0, bits.Length);
    for (int j = 0; j < bm.Height; j++)
    {
        for (int i = 0; i < bm.Width; i++)
        {
            if (bits[i + j * data.Stride / 4] == blank.ToArgb())
            {
                // assign the next region color
                int colornum = (int)colorstart;
                colornum += colorcounter;
                fillcolor = Color.FromArgb(colornum);
                LinkedList<Point> check = new LinkedList<Point>();
                int floodTo = fillcolor.ToArgb();
                int floodFrom = blank.ToArgb();
                bits[i + j * data.Stride / 4] = floodTo;
                int left = i;
                int right = i;
                int top = j;
                int bottom = j;
                colorcounter++;
                if (floodFrom != floodTo)
                {
                    // breadth-first flood fill of the region
                    check.AddLast(new Point(i, j));
                    while (check.Count > 0)
                    {
                        Point cur = check.First.Value;
                        check.RemoveFirst();
                        foreach (Point off in new Point[] {
                            new Point(0, -1), new Point(0, 1),
                            new Point(-1, 0), new Point(1, 0) })
                        {
                            Point next = new Point(cur.X + off.X, cur.Y + off.Y);
                            if (next.X >= 0 && next.Y >= 0 &&
                                next.X < data.Width && next.Y < data.Height)
                            {
                                if (bits[next.X + next.Y * data.Stride / 4] == floodFrom)
                                {
                                    check.AddLast(next);
                                    bits[next.X + next.Y * data.Stride / 4] = floodTo;
                                    if (next.X > right) right = next.X;
                                    if (next.X < left) left = next.X;
                                    if (next.Y > bottom) bottom = next.Y;
                                    if (next.Y < top) top = next.Y;
                                }
                            }
                        }
                    }
                }
                if (left > 1 && right < rx - 2 && top > 1 && bottom < ry - 2)
                {
                    regionmap rm = new regionmap();
                    Rectangle bound = new Rectangle(left, top, right - left, bottom - top);
                    rm.bounds = bound;
                    regions[colornum] = rm;
                }
            }
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(bits, 0, data.Scan0, bits.Length);
    bm.UnlockBits(data);
    foreach (KeyValuePair<int, regionmap> v in regions)
    {
        regionmap rm = v.Value;
        rm.points = mooreNeighborTracing(v.Key, rm.bounds);
    }
}

public List<Vector2d> mooreNeighborTracing(int fillc, Rectangle bound)
{
    bool inside = false;
    int pos = 0;
    int black = 888; // marker value for already-traced border pixels
    List<Vector2d> points = new List<Vector2d>();
    System.Drawing.Imaging.BitmapData data = bm.LockBits(
        new Rectangle(0, 0, bm.Width, bm.Height),
        System.Drawing.Imaging.ImageLockMode.ReadWrite,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    int fWidth = data.Stride / 4;
    int fHeight = data.Height;
    int pWidth = fWidth + 2;
    int pHeight = fHeight + 2;
    int[] fImage = new int[fWidth * fHeight];
    int[] bImage = new int[pWidth * pHeight];
    int[] pImage = new int[pWidth * pHeight];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, fImage, 0, fImage.Length);
    bm.UnlockBits(data);

    // copy the image into a padded buffer with a one-pixel border
    for (int x = 0; x < pWidth; x++)
    {
        for (int y = 0; y < pHeight; y++)
        {
            if (x == 0 || y == 0 || x == fWidth + 1 || y == fHeight + 1)
            {
                pImage[x + y * pWidth] = -1;
            }
            else
            {
                pImage[x + y * pWidth] = fImage[x - 1 + (y - 1) * fWidth];
            }
        }
    }

    for (int y = bound.Top - 1; y < bound.Bottom + 1; y++)
    {
        for (int x = bound.Left - 1; x < bound.Right + 1; x++)
        {
            pos = (x + 1) + (y + 1) * pWidth;
            // scan for a pixel of the region color
            if (bImage[pos] == black && !inside)
            {
                // entering an already discovered border
                inside = true;
            }
            else if (pImage[pos] == fillc && inside)
            {
                // already discovered border point
                continue;
            }
            else if (pImage[pos] != fillc && inside)
            {
                // leaving a border
                inside = false;
            }
            else if (pImage[pos] == fillc && !inside)
            {
                // undiscovered border point
                bImage[pos] = black; // mark the start pixel
                int checkLocationNr = 1;  // the neighbor number of the location to check for a new border point
                int checkPosition;        // the corresponding absolute array address of checkLocationNr
                int newCheckLocationNr;   // neighborhood position to check next if a new border is found at checkLocationNr
                int startPos = pos;       // set start position
                int counter = 0;          // used for the Jacob stopping criterion
                int counter2 = 0;         // used to determine if the discovered point is a single pixel
                int bcounter = 0;

                // Defines the neighborhood offset from the current position and the
                // neighborhood position to check next if a new border is found at checkLocationNr
                int[,] neighborhood = {
                    {-1, 7}, {-1 - pWidth, 7}, {-pWidth, 1}, {1 - pWidth, 1},
                    {1, 3}, {1 + pWidth, 3}, {pWidth, 5}, {pWidth - 1, 5} };
                double os = 2.0;
                Vector2d[] offset = {
                    new Vector2d(-os, 0), new Vector2d(-os, -os),
                    new Vector2d(0, -os), new Vector2d(os, -os),
                    new Vector2d(os, 0), new Vector2d(os, os),
                    new Vector2d(0, os), new Vector2d(-os, os) };

                // trace around the neighborhood
                while (true)
                {
                    for (int i = 0; i < 8; i++)
                    {
                        if (pImage[pos + neighborhood[i, 0]] != fillc) bcounter++;
                    }
                    checkPosition = pos + neighborhood[checkLocationNr - 1, 0];
                    newCheckLocationNr = neighborhood[checkLocationNr - 1, 1];
                    if (pImage[checkPosition] == fillc)
                    {
                        // next border point found
                        if (checkPosition == startPos)
                        {
                            counter++;
                            // stopping criterion (Jacob)
                            if (newCheckLocationNr == 1 || counter >= 3)
                            {
                                // close loop; since we are back at the start, set inside to true
                                inside = true;
                                break;
                            }
                        }
                        checkLocationNr = newCheckLocationNr; // update which neighborhood position to check next
                        pos = checkPosition;
                        counter2 = 0; // reset the counter tracking how many neighbors have been visited
                        bImage[checkPosition] = black; // set the border pixel
                        int px = checkPosition % pWidth;
                        int py = checkPosition / pWidth;
                        if (bcounter > 4)
                            points.Add(new Vector2d(px - 1, py - 1) + offset[checkLocationNr - 1]);
                        bcounter = 0;
                    }
                    else
                    {
                        // rotate clockwise in the neighborhood
                        checkLocationNr = 1 + (checkLocationNr % 8);
                        if (counter2 > 8)
                        {
                            // if counter2 is above 8 the whole neighborhood has been traced,
                            // so the border is a single pixel and we can exit
                            counter2 = 0;
                            break;
                        }
                        else
                        {
                            counter2++;
                        }
                    }
                }
            }
        }
    }
    if (points.Count > 1)
    {
        points.Add(points[0]);
        // prepend the region centroid as the first point
        double xavg = 0;
        double yavg = 0;
        foreach (Vector2d pt in points)
        {
            xavg += pt.X;
            yavg += pt.Y;
        }
        xavg = xavg / points.Count();
        yavg = yavg / points.Count();
        points.Insert(0, new Vector2d(xavg, yavg));
        return points;
    }
    return null;
}

public class regionmap
{
    public Rectangle bounds;
    public List<Vector2d> points;
}
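The queue-based four-neighbour flood fill at the heart of GenFill can be restated compactly. The following Python sketch is illustrative only (the thesis code above is C#): it fills one connected region of a toy grid, standing in for the ARGB pixel buffer, and returns the region's bounding box the way GenFill records left/right/top/bottom:

```python
from collections import deque

# Illustrative sketch of GenFill's breadth-first, 4-neighbour flood fill.
# `grid` stands in for the bitmap's pixel array; values stand in for ARGB ints.

def flood_fill(grid, start, fill):
    """Fill the connected region containing `start` with `fill`;
    return the region's bounding box (left, top, right, bottom)."""
    h, w = len(grid), len(grid[0])
    x0, y0 = start
    source = grid[y0][x0]
    if source == fill:
        return None  # nothing to do, as in the floodFrom != floodTo guard
    left = right = x0
    top = bottom = y0
    grid[y0][x0] = fill
    queue = deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        for ox, oy in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            nx, ny = x + ox, y + oy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == source:
                grid[ny][nx] = fill
                queue.append((nx, ny))
                left, right = min(left, nx), max(right, nx)
                top, bottom = min(top, ny), max(bottom, ny)
    return (left, top, right, bottom)
```

Marking each pixel as filled at the moment it is enqueued, rather than when it is dequeued, is what keeps the queue free of duplicates in both this sketch and the C# version.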


