Graduate Architecture Thesis - 2016


Greybox

Computational Toolmaking for Architectural Conceptualization

A Study Presented to the Faculty of NewSchool of Architecture & Design In partial fulfillment of the Requirements for the Degree of Master of Architecture

Ayden Kim, Ryan Conner, and Ryan Stangl
San Diego, 2016


Abstract

This study explores how computational methodologies might augment conventional design processes in the context of architectural pre-design and schematic design. More specifically, it explores methodologies which show promise for enabling more intuitive and rapid data-driven exploration within conceptual architectural design. This question is explored through an in-depth review of available literature spanning multiple related disciplines, including current computational methods, creativity support, cognitive augmentation, and design theory. This knowledge is put into practice in the creation of a prototype computational tool focused on the creation of urban context, the display of social and environmental information within this context, the illustration of related regulatory limitations, and the comparison of design alternatives. Limited testing suggests that the prototype tool created for this project shows promise for increases in both productivity and intuitive interaction between the user and computer. Testing methodologies and sample size limit the concrete conclusions that can be drawn from this information. However, the meaning of these results and future areas for tool development and testing are explored within the conclusion.


Greybox

Computational Toolmaking for Architectural Conceptualization

A Study Presented to the Faculty of NewSchool of Architecture & Design In partial fulfillment of the Requirements for the Degree of Master of Architecture

Ayden Kim, Ryan Conner, and Ryan Stangl
San Diego, 2016


Copyright © 2016 by Ayden Kim, Ryan Conner, and Ryan Stangl
NewSchool of Architecture & Design


Greybox

Computational Toolmaking for Architectural Conceptualization
NewSchool of Architecture & Design
Ayden Kim, Ryan Conner, Ryan Stangl

Kurt C. Hunker, Director of Graduate Programs
Chair, Graduate Department of Architecture

Date

Vuslat Demircay, PhD. Building Science; M. Arch; B. Arch, Thesis Advisor

Date

Rajaa Issa, M.S. Computer Science and Engineering; M. Arch; B. Arch, Faculty Advisor

Date

Eric Farr, PhD Architecture and Urban Planning, M. Arch; B. Arch, Faculty Advisor

Date


Table Of Contents

Chapter 1 Introduction
1.1 Introduction
1.2 Context
1.3 Challenge Statement
1.4 Importance of Challenge
1.5 Background
1.6 Thesis

Chapter 2 Research Studies
2.1 Theoretical Framework
2.2 Research Methodology
2.3 Literature Review

Chapter 3 Research and Analysis: Computational and Conventional Processes
3.1 The Design Process Model
3.2 Cognitive Research and Conventional Architectural Process
3.3 Limitation of Computational Methodologies

Chapter 4 Design Process
4.1 Phase One
4.2 Phase Two
4.3 Phase Three
4.4 Phase Four
4.5 Phase Five

Chapter 5 Conclusions

References

Glossary Of Terms

List Of Figures

Appendices
A.1 Existing Programs
A.2 Building Typology and Case Study
A.3 User Interface
A.4 Process Chart
A.5 Custom Components
A.6 Questionnaires
A.7 Presentation Fall 2015
A.8 Presentation Winter 2015
A.9 Presentation Spring 2016 Mid Term
A.10 Presentation Spring 2016 Final


Chapter One : Introduction



1.1 Introduction
The approach of this thesis is based on the core premise that architecture should not be concerned with revolution, but with progress through evolution and adaptation. The authors find that the creation of highly functional and beautiful design is of primary concern, but share the frustration of many others that too little time is available to achieve this goal. Rather than proposing the drastic alterations to the processes of architecture and the definition of the built environment that past theoretical projects have employed (e.g., futurism, metabolism), this work seeks to carefully navigate the existing constraints to find a more conservative and feasible solution to these frustrations. Over the past several decades computational tools have become increasingly useful for design development, construction documentation, project management, and operations. However, research and analysis of existing tools demonstrates that computational tools are not being used to their full potential in the conceptualization roles originally

envisioned. The authors speculate that opportunities exist to use computational tools in ways that better support conventional architectural processes in the early stages of design. Based on these ideas, this thesis is broadly concerned with exploring how architects can engage in computational exploration without being constrained by common issues associated with existing tools. Through an examination of emerging technologies across multiple disciplines and the ways in which those technologies might integrate with conventional architectural processes, the ultimate goal of this project is synthesis. Initial investigations into the topic suggest that the answer to this question might be found through analysis of firms that specialize in systematic design methodologies and develop computational programs to push the boundaries of design. In the spirit of engaging you, the reader, on this journey, the remainder of this chapter is a rationalized representation of preliminary research and concept definition.



1.2 Context
Architects have used systematic design methods to create highly specific spaces for centuries. Andrea Palladio and Alvaro Siza are perhaps two of the most studied architects who used explicit rules-based design systems to create highly successful buildings (Larson et al. 2001). Work to recreate and utilize these systems has been undertaken by several students at MIT, with the goal of expanding this framework from educational analysis to the creation of new projects. Through this work, these students have demonstrated the feasibility of such an approach as a driver for high-quality design (Sass 2000). Although initially surprising, further study reveals these sentiments scattered throughout the history of architecture. Another prominent example is found in the writing of Adolf Loos: “I have no need whatsoever to draw my designs. Good architecture, how something is to be built, can be written. One can write the Parthenon.” (Risselada et al. 2008, p. 175).


Although not an explicit endorsement of parametricism, likely because it was written nearly 100 years before the term existed, this passage articulates the idea that the Parthenon can ultimately be described by the physical relationships between its elements. Although this method may not give the reader a sense of the feeling or essence of the place, it comprises the physical relationships that create these feelings. Based on this research and on experience with the powerful abilities of computational tools, the authors find the intersection of computation and architecture to be of interest. Although these tools are very powerful, there is a common frustration regarding their limitations, especially in the context of pre-design optimization and evaluation. Research has shown that many architects are experiencing this same frustration, often using ill-suited optimization tools in the early stages of design. This led to the loose definition of a question that sparked the organization of this thesis project:


How can computational tools be applied to the early stages of architectural design?

A useful place to begin this discussion comes from the work of Professor Paul Tesar of North Carolina State University. In “Design Thinking” he proposes that we can build a model of the broad categories that should be addressed in architectural design from commonly encountered design justifications given by architectural professionals. In this article, four propositions are described as extremes to facilitate illustration of the architectural process, but each is assumed to be an important aspect of a successful project. They can generally be broken down into the following four categories of thought:

Design is a relative act of finding architectural form: Given a defined set of relationships and desired outcomes, there is a singular best solution for each set of unique constraints associated with a site.

Design is an absolute act of generating architectural form: Design creates the situation by imposing order on a random world through universal concepts which create perfect forms abstract from external conditions.

Design is a personal act of giving architectural form: Design depends on the artistic expression of an individual ego to materialize will, freedom, and passion.

Design is an evolutionary act of transforming architectural form: Design is a socially meaningful expression of shared values in which historical forms are transformed to suit current needs, building upon the tested and proven wisdom of generations.

These categories are compared through the use of a four-axis chart (figure 1.1) that ranks the characteristics of design using several descriptive adjectives that help to define their relationship to one another more carefully.

Figure 1.1 Architectural Justification Categories (Design Thinking 2015, p 80)

Through the creation of these categories and their comparison against one another, it is possible to see that architectural design is the mediation between these extreme viewpoints, which are often espoused by supporting architectural theorists as the ultimate answer that should guide design. Although it may be said that all categories are ideal for a successful architectural design, it is also important to note that such a categorization can assist with preliminary assessments of what types of tools may be useful to achieve a successful design. Within these categorizations, and thinking specifically about the investigation of computational methodologies and conventional architectural process, it is important to note that computational methodologies have the most applicability to the objective and specific aspects of architectural design. Furthermore, the absolute and universal aspects of architectural design have been investigated for years through systems such as the pattern language created by Christopher Alexander. Although it is likely that someday we can broadly divide this graph into aspects best addressed through computational and


cognitive methods (with computational on the right and cognitive on the left), in the context of this thesis it is important to focus on that which is clearly achievable: specifically, the relative. To demonstrate what such a dual approach may look like, it is important to concentrate on the application of computational methodologies in the specific and concrete area that is concerned with the unique aspects of each site and the ways in which these elements inform architectural design. From this, we can understand that the proposed tool is broadly concerned with the intersection of computation and conventional process concerning environmental analysis, regulatory compliance, and other issues which constrain the architectural project depending upon its unique location. To better understand the nature of this question and to specify the direction its exploration and eventual proposal should take, a thorough examination of conventional architectural processes was necessary. Although a variety of processes exist across firms and individual architects,


the Architect's Handbook of Professional Practice outlines the stages and scope of work described in AIA contract documents. The following is a short description of the duties and possible deliverables of the architect for each stage of both the core and additional services that may be offered:

Pre-Design- Although considered an additional service, most projects could benefit from design consultation in the stages prior to the start of schematic design. Duties may include: “goals and visioning sessions, scenario planning, strategic planning, campus or master planning, project definition, program management, and other related activities” (American, I. O. A. 2013 p 954)

Schematic Design- The first core service, schematic design entails the definition of project goals and requirements, the creation of a program to represent these goals and requirements, and the development of conceptual representations which illustrate spatial relationships, scale, and form. Final documents often include: “floor plan(s), sections, an elevation, and other illustrative materials; computer images, renderings or models” (American, I. O. A. 2013 p 954)

Design Development- During design development the schematic design is further detailed to include mechanical, electrical, plumbing, structural, and architectural details. Final documents often include: “floor plans, sections, and elevations with full dimensions... typically (including) door and window details and outline material specs” (American, I. O. A. 2013 p 954)

Construction Documentation- Design documents are standardized for use in the field and further developed to contain specifications for construction details and materials. Final documents often include: “drawings that include all pertinent information required for the contractor to price and build the project” (American, I. O. A. 2013 p 954)

Bid or Negotiation- Responsibilities can vary widely, but generally during this phase the architect helps the owner to evaluate bids and select the winning one, as well as negotiating price and signing the final contract. (American, I. O. A. 2013 p 954)



Construction- Depending on prior agreements, contract administration services may be required. The architect's primary “responsibility during this phase is to help the contractor to build the project as specified in the construction documents as approved by the owner.” If necessary, additional drawings may be produced to clarify questions with the existing documentation or to accommodate change orders. (American, I. O. A. 2013 p 954)

Post-Construction- The post-construction phase is an additional service in which the architect's responsibilities may include: post-occupancy evaluation, building commissioning, maintenance scheduling, space planning, renovations, energy analysis and monitoring, disaster planning, tenant improvements, forensic analysis, code analysis, or space scheduling. (American, I. O. A. 2013 p 954)

Much like Lawson’s discussion of 1965 RIBA documents (1997, p 34), we can see that much of the AIA definition of the architectural project is more concerned with outcomes than process. Therefore, at this point in


the examination it becomes necessary to stipulate that whatever methods are employed during architectural design, their value can be measured by contribution to the production of these staged deliverables. Beyond this, it can be said that superior processes can be identified as producing these deliverables at both greater speed and greater quality. Through 3D modeling, environmental analysis, rendering, CAD systems, and numerous other software programs, architects have a variety of helpful and well-developed computational tools for use in the design development, construction documentation, bidding, construction, and post-construction phases. A brief history and evaluation of these tools is presented in Chapter 1, Section 4. Development of more powerful, intuitive, and useful computational tools continues in this area, and the development of a prototype to improve these services in any meaningful way is beyond the scope of this project. Additionally, although it is theorized that generative exploration within a parametric framework could


be used throughout the architectural design process, the complexity of applying these theories to detailed design becomes problematic. Even experienced professionals in the field of parametric design have published critiques of the over-application of parametric methodologies (Smith 2007, p. 2), which have been supported by independent examination (Davis 2013, p. 43). Therefore, this thesis further constrains its focus to the application of computational methods within the schematic design phase. Application for early stage design is perhaps the most elusive aspect of computational integration, since it relies heavily on intuition and the balancing of ill-defined parameters with the guidance of a design question that hasn't been fully clarified. In taking this approach, the proposed tool will be tested in the most demanding conditions. Preliminary research into the tasks involved in this phase reveals common elements with a standardized order. Figure 1.2 shows a comparison between the tasks outlined by one architect and those contained in an AIA sample billing document, with the terms standardized to those used by

the AIA. Within these general tasks, a large variation of detail and emphasis can arise due to the nature of the project or the processes employed by the individual architect or firm. The tools discussed in the previous paragraph are also used in the development of design alternatives during the schematic design phase, which is shown in greater detail in the case study analysis of conceptual processes in Chapter 3, Section 3. Therefore, the greatest opportunity for a computational tool to support architectural processes arises in the evaluation of client information and development of preliminary designs. Despite the advantages that computational tools may provide, it is important to recognize the various strengths of cognition and computational processing and to employ each in the most efficient ways possible. Architects have long understood that conceptual sketching and physical modeling are essential to the process, and research supports their role in developing a greater understanding of the relationships and opportunities in each project.

James Cummings: Evaluate Client Information; Develop Preliminary Design; Develop Alternative; Develop Selected Design; Cost Estimate

AIA Sample (p 969): Evaluate Client Information; Develop Preliminary Design; Develop Alternatives; Develop Selected Design; Cost Estimate; Consultant Coordination; Review Meetings

Figure 1.2 Comparison between the AIA and James Cummings schematic design task lists



Therefore, it is important to emphasize their necessity in the design process and to guide the development of early stage computational tools to support and augment this process. Evaluation of conventional architectural processes and the supporting role that computational tools currently play serves to specifically locate the area of greatest impact for a new prototype. The area of opportunity that exists within the evaluation of project information and development of preliminary designs is conceptualization. The process of conceptualization is defined by numerous approaches which are difficult or impossible to standardize or objectively evaluate. Further details on the research reviewed are included in Chapter 3, Section 1, but generally, a set of processes are used which include sketching and modeling in both physical and digital media. These methods vary based on applicability to the particular project and the level of inspiration and clarity that the architect experiences. If the architect has a clear initial concept for the project, development of the design and alternatives may begin with little


preliminary analysis. However, lacking a clear concept, exploratory methods are necessary to proceed (Parthenios 2005). Despite the level of variability present in the methods used for architectural conceptualization, a common set of elements can be identified that occur either consciously or based on intuition. The Handbook for Professional Practice defines these common elements with the terms generative logic, iteration, evaluation, and selection. Of these elements, generative logic is most emphasized in the language of the Handbook, with the authors describing its importance as the primary driver for the creation of iterations, the identification of evaluation criteria, and as a shared mission statement to organize and focus the efforts of the design team. These elements are described as occurring in a cyclical manner as illustrated in figure 1.3, with the development of design alternatives often leading to new ways of defining and refining the generative logic that defines the project goals. (American, I. O. A. 2013, p 660)


Generative logic- “The “vision” or the “concept” that guides and directs the design process. In short, it is a core set of architectural values that result from the established goals and prioritized analysis. These values are used to judge subsequent alternatives and determine which is superior and worthy of further refinement.” (Hayes et al 2013, p. 660)

Iteration- “The act of solving and resolving a problem (through which) designers explore, develop, and document their concepts.” This process is important because the full range of possible solutions can only be understood by generation and evaluation of alternatives. (Hayes et al 2013, p. 660)

Evaluation- A process in which iterations are “judged relative to the established goals and generative logic.” (Hayes et al 2013, p. 660)

Selection- A process in which the best design comes forth as “most consistent with the goals and logic created for the project... (resulting) in greater unity of expression and purpose. At its best, this process yields a result that appears simple, almost inevitable - as if it couldn't have been any other way.” (Hayes et al 2013, p. 660)

Figure 1.3 Illustration of the conceptualization process (Hayes et al, 2013)

The language used to describe this design process in the AIA Handbook is similar to the Markus/Maver model of design decision making, which includes analysis, synthesis, appraisal, and decision (Lawson 1997, p 35). Although only a few process models are detailed in this thesis, research suggests that a majority of these models contain similar descriptions. Furthermore, it is important to note that although these models are represented as a somewhat linear series of steps, the reality is that architectural designers often jump from one step to another and from one level of detail to another. Further examination of these models and how they compare to real-world design processes is necessary for the useful application of their concepts. Further discussion can be found in Chapter 3.
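To make this cycle concrete for readers from computational backgrounds, the following minimal Python sketch expresses generative logic as a scoring function, iteration as the generation of candidates, and evaluation and selection as a filter over them. Every name, field, and weight here is invented for illustration; nothing is drawn from the Handbook or from the prototype tool itself.

import random

# A minimal sketch of the conceptualization cycle described above:
# generative logic -> iteration -> evaluation -> selection.
# All names, fields, and weights are illustrative assumptions.

def generative_logic(option):
    """Score an option against the project's stated values.
    Here the 'logic' is a toy weighting of daylight vs. floor area."""
    return 0.6 * option["daylight"] + 0.4 * option["area"]

def iterate(n):
    """Generate n candidate options; a real tool would vary geometry
    parametrically rather than sampling random performance scores."""
    return [{"daylight": random.random(), "area": random.random()}
            for _ in range(n)]

def evaluate_and_select(options):
    """Judge every iteration against the generative logic and keep
    the one most consistent with it."""
    return max(options, key=generative_logic)

best = evaluate_and_select(iterate(50))
print(best, generative_logic(best))

The loop is deliberately naive; the point is the division of labor. The computer produces and ranks volume, while the designer authors and revises the scoring function, which is the generative logic itself, and in practice that revision is what each pass through the cycle feeds.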



1.3 Challenge Statement
Based on accurate location of the point in the design process at which computational tools can have the greatest effect, and on the standardized representation of the process of conceptualization, a challenge statement can be constructed which defines the overarching goals and direction of this thesis project:

Over the past several decades computational tools have become increasingly useful for design development, construction documentation, project management, and operations. However, research and analysis of existing tools demonstrates that computational tools are not being used to their full potential in the conceptualization roles originally envisioned. How can computational tools be implemented in the early stages of design in ways that align with the cyclical processes that define architectural conceptualization and support the exploration and organization of generative logic and iteration?



1.4 Importance of Challenge
Design Optimization is characterized by the definition of a design problem through the formulation of relationships, followed by convergence towards a single ideal solution driven by analysis (Mattson 2014). This approach is generally applied to engineering projects and is the primary focus of much current computational architecture. Its practice is beneficial to architectural design because it requires quantitative analysis and encourages the adjustment of details such as overhang depth and window placement to improve operational performance. However, the need to precisely define the generative logic and spatial relationships of a project pushes its application towards the late stages of design development, where significant changes are often an impractical response to findings. If applied in the conceptual stages of a project, it makes refining and redefining the generative logic of the project challenging and time-consuming.

Design Exploration is the process of approaching a design problem with the assumption that the optimal design solution is initially unknown and uncharacterized, with clarification of the optimal solution emerging from a convergent/divergent cycle of exploration and optimization (Mattson 2014). This approach assumes that the generative logic of the project will require refinement and redefinition, and that iteration and exploration are necessary to understand how that refinement and redefinition will happen. This approach is used for highly complex engineering and aligns with the processes of architectural conceptualization. This idea of design exploration provides a precise definition of the process of conceptualization described by the AIA as containing elements of generative logic, iteration, evaluation, and selection.
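To illustrate the convergent/divergent structure that distinguishes exploration from pure optimization, consider the Python sketch below, which alternates a divergent step (scattering new candidates around the current seeds) with a convergent step (pruning to the best performers). The objective function, spread schedule, and population sizes are assumptions made for this example only.

import random

# Illustrative convergent/divergent exploration cycle; the objective,
# spread schedule, and population sizes are invented for this sketch.

def score(x):
    # Placeholder objective; in a real tool this would be analysis
    # output (daylight, area efficiency, etc.) on real geometry.
    return -(x - 0.7) ** 2

def diverge(seeds, spread, k=10):
    """Divergent step: scatter k new candidates around each seed."""
    return [min(1.0, max(0.0, s + random.uniform(-spread, spread)))
            for s in seeds for _ in range(k)]

def converge(candidates, keep=3):
    """Convergent step: keep only the best-scoring candidates."""
    return sorted(candidates, key=score, reverse=True)[:keep]

seeds = [random.random() for _ in range(5)]
for cycle in range(4):
    # Narrow the search as confidence in the region grows.
    seeds = converge(diverge(seeds, spread=0.5 / (cycle + 1)))
print(seeds)  # clustered near the initially unknown optimum (0.7)

The point of the sketch is the structure rather than the numbers: exploration widens the candidate pool before optimization narrows it, instead of converging from the first step as pure optimization would.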



The process of design exploration can be further defined through an examination of the Bjarke Ingels Group (BIG), which applies a methodical approach to conceptualization to arrive at highly successful expressions of both form and function. In the book Hot to Cold: An Odyssey of Architectural Adaptation, Ingels defines the firm's methodology as Information Driven Design, “the idea that our design decisions are always informed by specific information” (Ingels p. 655). Rigorous and evolving definitions of the generative logic of a project are created through site analysis and program development. This generative logic is used to evaluate conceptual models and sketches, which often number in the high double-digits for each project. As the generative logic is refined through iteration and evaluation, conceptual models and sketches begin to converge on a single solution, which is then developed and optimized. Further exploration of the processes employed by BIG is contained in Chapter 3, Section 2. This approach suggests that the processes employed in architecture are of primary importance in determining the quality of design outcomes. This importance of process is well


illustrated by an example from the impact of computation on the game of chess. When supercomputers were first introduced to the general public, matches with chess players were organized to demonstrate and evaluate their capabilities. The first defeat of a chess champion by a computer occurred in a 1997 match between Deep Blue and Garry Kasparov. Kasparov went on to invent advanced chess, a variant of the game in which humans and computers form teams to elevate the level of strategic play to near perfection. This development is of particular importance because play takes advantage of the relative strengths of computational processing and cognition, and the outcomes in these games are largely dependent on how well the human and computer collaborate. In 2010 Kasparov wrote about the outcome of a tournament in which the winner seemed inevitable, given the many grandmasters who appeared paired with advanced chess-playing computers:

The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players


using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. (Kasparov 2010)

This demonstrates that in a game where predicting the effect of a move in the context of a match is practically impossible, superior processes had a large, if not disproportionate, effect on the outcome. Further research will be required to support a connection to architecture, but the authors speculate that the processes employed in the definition and refinement of generative logic and conceptualization have a similarly disproportionate effect on outcomes. Essentially, a problem well stated finds its solution. To explore the role of the interaction between human and computer, the authors have synthesized the definitions of design exploration provided by Mattson (2014) and generative design provided by Krish (2011) into a single term that most succinctly captures the design intent of this thesis.

Generative Design- “a way of translating computational energy into creative energy” (Krish 2011)

Generative exploration is the process of approaching a design problem with the assumption that the optimal design solution is initially unknown and uncharacterized, with clarification of the optimal solution emerging from a convergent/divergent cycle of exploration and optimization that translates computational energy into creative energy. Although all design efforts are, to some extent, generative - this definition specifies the role that computation can play in supporting architectural conceptualization. In this context, generative exploration methodologies provide an attractive benefit beyond those already outlined for design exploration. By leveraging the ability of computational processing to rapidly generate and analyze a large number of iterations, generative exploration can augment conventional methods of design exploration through more systematic methods of iteration, evaluation, and selection. Although not


specifically referencing computational methods, The Handbook recognizes the importance of this type of workflow, stating: “Design is a unique analytical process that involves two fundamental procedures: understanding a project's multiple parameters and synthesizing these parameters into a holistic strategy. While complex, the design process is not impenetrable. It is a rigorous, methodical process of inquiry and invention.” (American, I. O. A. 2013 p 657)

Further evidence for this approach can also be found in the emerging field of research on creativity support. The most basic concept that supports the utilization of generative exploration methods is that “Creative ideas emerge from novel juxtaposition of concepts in working memory in the context of a creative task. Therefore, within the limits of human working memory, the greater the variety of concepts one considers, the greater is the probability that creative ideas will occur” (Czerwinski 2006, p 21). In the context of design exploration, this statement proposes that the more combinations explored, the more likely creative ideas will arise. Through leveraging computational processing to generate these combinations, the limits of human working memory can be surpassed. Generative exploration is a defined approach that mirrors the first stage of intuition that often leads to sudden insights in the design process, ultimately acting as a tool for informed creativity support. Ahmad Fakhra defines these stages as “the guiding stage where coherence or structure is unconsciously reorganized and used, and the integrative stage where coherence surfaces to the level of consciousness” (2012, p 104).


1.5 Background
This thesis is concerned with the creation of a generative exploration tool for creativity support in the conceptual stages of architectural design. In this light it is important to explore the research on creativity support further, to provide context for understanding why computational tools that are successful elsewhere in architecture don't work for conceptualization. Additionally, exploration of creativity support can serve to limit the scope of this project by identifying some of the most important aspects for appropriately augmenting conceptualization. The impact of tools on the architectural process is broad and exciting, but for the purpose of this discussion it is most useful to examine the history and effects of construction documentation (CD) standardization, computer aided drafting (CAD), and building information modeling (BIM). These provide examples of the ways that computation has already augmented the design process by

focusing on production and project management phases. An examination of current difficulties in the use of parametric modeling serves to support the thesis statement and to limit the scope of the project. Additionally, a brief examination of parallel methodologies and processes from software engineering will begin to explore the untapped potential for the integration of computational processes within architectural design.

Creativity
To begin this discussion, it is important to acknowledge that creativity is a complex and multifaceted issue which isn't yet fully understood. While not enough is yet understood about creativity to directly assist the process, the creation of new tools can both support informed creativity, as demonstrated in the work of BIG, and ensure that tools don't disrupt the creative process (Czerwinski 2006, p 14). Additionally, the cognitive mechanisms involved in the emergence of creative solutions



can be identified, and the processes of creativity support tools designed to work within these boundaries. The four cognitive mechanisms involved in the production of creative solutions include combination, association, expansion, and emergence:

Conceptual combination- merging of two or more existing ideas
Conceptual association- development of new relationships between existing ideas
Conceptual expansion- changing the properties and use of existing ideas
Conceptual emergence- discovery as a result of examining existing ideas
(Fakhra 2012, p 71)

Although no direct path is apparent to the computational support of conceptual expansion and emergence, the methods of combination and association are both utilized by an existing tool for architectural exploration. This commonly used tool is the transformation matrix. Often utilized for the two-dimensional exploration of formal qualities, the matrix generates iterations through the combination of transformation and massing, from which new relationships can be derived. In his doctoral dissertation, Fakhra suggests a similar strategy by stating that digital interfaces can “be used as exploratory tools by manipulating project-related information in different ways using creativity-relevant techniques that assist designers in coming up with creative ideas or jump starting their creative processes” (2012, p 84).

On the subject of ensuring that the creation of computational tools doesn't interfere with the creative process, a few basic strategies have been suggested, including:

Support for hypothesis formation
Speedier evaluation of alternatives
Improved understanding through visualization
(Czerwinski 2006, p 25)

These suggestions closely align with the conceptualization of generative logic from The Handbook and the benefits of computational processing to evaluate and visualize complex data. More in-depth exploration of the research relating to the use of computational tools as a method for creativity support is undertaken in Chapter 3, Section 3. From the research relating to creativity support one final conclusion is available that is relevant to the discussion at this point: “Elaborated 3D CAD systems are found effective in helping architects to compose solutions but obstructive to their creative exploration” (Czerwinski 2006, p 27). This conclusion is important for understanding why CAD, 3D modeling, and BIM have had such a substantial impact on the fields of architecture and


construction, but also why the application of current methods to conceptualization can be harmful.
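As a concrete illustration of the transformation matrix described above, the short Python sketch below enumerates every pairing of massing and transformation, which is conceptual combination in its simplest form. The particular massings and transformations listed are invented placeholders for this example.

from itertools import product

# Sketch of a transformation matrix for formal exploration: every
# massing is paired with every transformation, producing a grid of
# iterations for review. The entries are illustrative placeholders.

massings = ["bar", "courtyard", "tower", "L-shape"]
transformations = ["rotate", "stack", "carve", "shift", "taper"]

# Conceptual combination: 4 massings x 5 transformations = 20 iterations.
matrix = list(product(massings, transformations))

for massing, transform in matrix:
    print(f"{transform:>6} applied to {massing}")

Reviewing such a grid is where conceptual association happens: the designer, not the computer, notices unexpected relationships between adjacent cells.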

Construction Documentation
First and foremost, CDs are a vital, creative, even exquisite instrument of communication (Eastman 2008, p. 193). They outline key interactions, responsibilities, and details that provide definition of the building form and construction methodology. When architects occupied the role of master builder, standardization of CD representation wasn't overly important. However, through the development of a standardized method for representation, construction documentation became a universal language for communication. In the BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors, Charles Eastman points out: “It is known that BIM coordination improves communication, which decreases construction cost and time, thus reducing risk” (2008, p. 302). This essential tool enabled better collaboration between architects

and allowed designers to partially remove themselves from the construction process. This separation of responsibilities facilitated the completion of a larger number of designs, expanding the influence of architects within the built environment (Eastman 2008, p. 302).

Computer Aided Drafting
CAD programs were first conceived with the goal of reducing time spent in construction documentation, specifically focused on easing the process of design revision (Eastman 2008, p. 63). Initially, these programs were difficult to use and required learning a proprietary input language for interaction. However, over time the interface has become more intuitive, which the authors speculate has encouraged widespread use within the architectural profession. Today CAD plays an integral part in the construction documentation process. The incorporation of CAD into most conventional architecture processes has resulted in further standardization of representation and data organization, streamlining of the CD process, and the ability of designers to relatively quickly



revise documentation during the construction process (Eastman 2008, p. 281). The incorporation of referenced specifications enhances the process by minimizing many mistakes involving human error and maximizing the use of time.

Building Information Modeling
Initial investigations into BIM technology were driven by a desire to reduce the redundancy of hardcopy drawings and to eliminate their failure to represent building renovations over time (Eastman 1976). Additionally, 3D modeling of a linked model from multiple disciplines decreased the discovery of conflicting elements during construction. Early innovators in the exploration of this idea included Charles Eastman, whose work on the Building Description System (BDS) and Graphical Language for Interactive Design (GLIDE) in the 1970s laid the foundation for the development of the modern BIM environment. Despite the advantages of the BIM systems developed throughout the 1980s, widespread use wasn't achieved until the release of Revit.


This software was revolutionary in how it enabled architects to interact with BIM information, lowering the barrier to entry and allowing for management and visualization of building information in a way that easily integrated into the architectural workflow. In this way, BIM technology also enabled the realization of more complex designs through integration with manufacturing processes. Enni Lane states that synchronization of different aspects of the building process has been successful, mainly due to the interoperability and synchronization that BIM takes advantage of to facilitate collaboration (Eastman 2008, p. 502). Despite the benefits of BIM, it has shifted some of the responsibilities traditionally shouldered by contractors onto architects and engineers. Even though the standardized workflows resulting from the development of CD and CAAD have enhanced the design process in architecture, they are focused on representation and communication of design intent. BIM focuses on detailed design development, and as a result is mainly useful in the late stages of the process.


Integrated Project Delivery
The history of BIM provides important context regarding the impact of process on the ability of architects to influence design. The MacLeamy curve (Figure 1.4) is familiar to many architects and has been the theoretical foundation for front-loading the design process through Integrated Project Delivery (IPD). This idea is founded on the observation that in the late stages of the design process, the cost of design changes goes up while the ability of decisions to impact cost goes down. By shifting effort to an earlier stage of the process, the impact of design choices becomes much greater (Davis 2013, p 35).

Theoretically, IPD is beneficial for architects despite the legal complexities that it introduces to the project, because it increases the resulting quality of design outcomes and adds value to each project. However, one common critique of this attempt to front-load the design process is that rather than shifting the bulk of design efforts to an earlier stage, front-loading has actually served to extend the peak of this curve - essentially creating more work for the same pay and liability (Regnier 2015). Based on this criticism, additional methods of addressing the issues of the MacLeamy curve should be explored.

Figure 1.4 MacLeamy's Curve (Davis 2013). Legend: 1 Ability to impact cost and functional capabilities; 2 Cost of design changes; 3 Traditional design process; 4 Preferred design process. PD: Pre-design; SD: Schematic design; DD: Design development; CD: Construction documentation; PR: Procurement; CA: Construction administration; OP: Operation



Limitations of Parametric Design
Both IPD and parametric design attempt to address the same fundamental issue of the increasing cost of project change over time. Parametric modeling was initially conceived to reduce the cost of design changes through the introduction of flexibility (Davis 2013, p. 36). However, in practice many find the flexibility of parametric models lacking. Often they don't reduce the cost of changes, especially in the later stages of design (Davis 2013). Additional criticisms of parametric modeling specifically related to use in the early stages and conceptualization include:

Parametric scripting requires that design goals and prioritization are solidified, which doesn't support the experimentation needed for conceptualization (Davis 2013, p. 46)

Conceptualization is about enabling creative flow, which requires rapid iteration in an intuitive manner. Currently, computational tools lack these characteristics

Parametric workflows require adaptation of the architectural process to align with their limitations, which constrains the creative expression of ideas

These issues reinforce the reasons that parametric and computational design tools are currently successful for use in the stages of design development onward, but find little traction in the early stages of design conceptualization. Current tools succeed at design optimization and representation, which have requirements that are distinct from those for design exploration.

Boehm's Curve
Decades before MacLeamy, the Boehm curve was conceived (figure 1.5), which demonstrates virtually the same relationship between cost of change and time in software development. Similar efforts were undertaken to front-load projects, and waterfall project management strategies were implemented that closely mirror those used in architecture and engineering (Davis 2013, p. 55).

At nearly the same time the MacLeamy curve was becoming popularized within architecture, Kent Beck was advocating for agile programming methodologies to flatten the Boehm curve through a focus on


“better languages, better database technology, better programming practices, better environments and tools, new notations” (1999, p. 27).

Essentially these methodologies focus on the creation of tools and processes that allow for rapid iteration, testing, and redefinition of goals throughout the project. This framework seems to have been successful, with agile programming methods resulting in the successful completion of 28% more projects in 2012 than the conventional waterfall method (Standish Group 2012, p. 25).

While it is possible that architectural processes could be modified in similar ways, that question lies beyond the scope of this project and, in the opinion of the authors, is best explored through experimentation in individual firms. However, the creation of tools that enable the exploration of new processes in architecture while augmenting conventional methods is the focus of this thesis. Therefore, it is important to acknowledge the correlations that can be drawn between software engineering and architecture and the impact of this research on the conceptualization of this project.

Analysis
Based on this summary and analysis of IPD, parametric design, and software engineering methodologies as related to the MacLeamy and Boehm curves, we find that there are two distinct approaches to addressing issues of cost over time. The method used in agile programming is to enable rapid and low-cost change throughout the project through drastic revisions to workflows and tools. The second approach is that utilized by IPD: front-loading the process as a way to minimize design changes in later stages. The authors speculate that this hasn't worked because minor tweaks are inevitable in the later stages of design. Therefore, the authors conclude that within the conventional processes of architecture, the MacLeamy curve actually demonstrates a need to resolve significant design changes in the early stages without expending much effort. By leveraging computational resources in conceptualization to support more informed and methodical design decisions, it may be possible to address the issue of cost over time in a more efficient way.

Figure 1.5 Boehm's Curve (Davis 2013)



1.6 Thesis
Through analysis of research efforts, applied theory in architecture and software engineering, and a focus on architecture-specific interactions, it is feasible to create a prototype interface that uses generative exploration to enable more intuitive and productive interaction between architect and computer during the schematic design stage.

Based on research and analysis presented in this chapter and described in further detail in Chapter 3, the authors have established the pre-design and schematic design phases as an ideal point of intervention for a new computational tool. Additionally, this research demonstrates the possible benefits of this tool as a starting point for creative inspiration based on a rigorous method of generation and evaluation. Finally, a number of factors have been identified which limit the use of computational tools in conceptual design, including a lack of flexibility, a high time commitment, and non-intuitive interfaces. Due to these factors, this thesis proposes the creation of a prototype computational tool that seeks to achieve three core goals:


Generative exploration to support rigorous iteration and evaluation of design alternatives as inspiration for conventional methods of conceptualization

An interface focused on creative support through intuitive interaction

Variable parametric scripting to enable the rapid redefinition and refinement of generative logic


This approach is our own, but it is supported by the theoretical frameworks outlined in numerous successful Ph.D. projects on the subject. Perhaps the most striking support for the authors' speculative direction comes from the work of Estkowski, who states: “In spite of many approaches for creating a generative design system, it seems that its implementation to common architectural practice failed. There is a lack of a digital creative design assistance, which would take advantage of the state-of-the-art digital technology and which at the same time would fit well with the actual design practice. Such assistance could be especially useful in the early stages of the design process, where an architect explores potentials of a building site, testing different building variants and adjusting assumed design objectives. Although a significant change has taken place in some areas of commercial Computer Aided Architectural Design (CAAD) systems... the creative design systems are still only a subject of research. It seems that they lack a better adjustment to the specific nature of architectural practice.” (2013, p. 2)



Chapter Two : Research Studies



2.1 Theoretical Framework
In the development of the proposed tool, this thesis looks broadly to investigate and develop the interaction between architecture and computation. To avoid the pitfalls of other approaches to integrating advanced computational methods with architecture, this thesis focuses on an approach that emphasizes the architectural process and adapts to existing paradigms, as opposed to the creation of novel workflows. In the development of a tool meant to enable the integration of generative exploration methods with the conventional architectural process, there are a large number of factors to be considered.

Ideally such a tool would incorporate many aspects, including compliance with ADA and IBC regulations, fire code, the effect of structure on building form, daylighting, glare, thermal gain, and many more. Additionally, it would be useful in all stages of the architectural design process, and interaction with the tool would adapt to coincide with the specific requirements of each stage of design development. However, the limited time frame of this thesis and the authors' desire to create a functional prototype necessitate in-depth exploration of the few concepts deemed most useful for successful completion.

Based on the goals identified in the thesis statement, three broad subjects have been identified as critical for examination:

Process- Generative exploration to support rigorous iteration and evaluation of design alternatives as inspiration for conventional methods of conceptualization

Information- Variable parametric scripting to enable the rapid redefinition and refinement of generative logic

Communication- An interface focused on creative support through intuitive interaction

Figure 2.1 Theoretical Framework Diagram



Through a focus on these areas of exploration, this project will demonstrate the success or failure of the proposed framework that differentiates the proposed computational tool from existing computational tools. Within these ideas of process, information, and communication, specific areas for research have been identified:

Research into conventional architectural processes will help to ensure that the design of this computational tool will not require architects to break from successful conceptualization practices; this helps to identify ideal areas and methods for constructive intervention of the tool.

Research relating to creativity support helps to identify best practices on how a computational tool can enhance, or at least not disrupt, the creative aspects of architectural conceptualization; this helps to identify methods of interface and interaction, as well as the ideal areas for constructive intervention of the tool.

Research into systematic architectural design will help to identify how the processes employed by firms that use systematic design compare to those that use conventional design; this will help to identify areas of overlap where systematic methods can be utilized within the conventional process.

Research into design constraints will identify successful geometric relationships created in existing projects; this will assist in establishing parametric definitions for use within the computational tool.

Research into the underlying processes used in generation and analysis within existing software will inform the ways in which our parametric scripts can become variably arranged; understanding the software that we work within will enable more efficient use of computational resources.

Research into existing methods of architectural representation will drive the design of the user interface (UI) to ensure communication happens in as familiar a manner as possible, so as not to disrupt creative ideation.

Research into methods of UI will also drive the design of interface and interaction with the tool to ensure communication happens in as intuitive a manner as possible, so as not to disrupt creative ideation.

Through a brief definition and exploration of each category and the terms it contains, their roles in developing a cohesive theoretical framework will begin to become more clear.

Conventional Architectural Process- The methods by which architecture practices move from design concept to the documentation and construction phases as defined by the AIA. This includes descriptions of these processes at multiple scales, from general categories to task-specific definitions. This area of study will focus on the processes involved in early stage concept development, focusing on the elements of design constraint analysis and conceptualization within schematic design. Special attention will be paid to areas of convergence and divergence in a comparison between the conceptualization methods used by conventional architectural

firms, those used by systematic architectural firms, and those used by firms who implement advanced computational practices. This research will inform development of the visual representation and interactive interface of the proposed tool, as well as identifying the common aspects of the design process to prioritize in the development of the initial modules created within the tool.

Systematic Architectural Design- The method by which design is approached as a problem of constraints and requirements. Although often associated with developer-oriented projects in the field of architecture, many prominent designers use a systematic approach to clarify design intent and prioritize the optimization of the goals most relevant to the project. Since the tool seeks to take advantage of computational processing and explore user interface relationships, the analysis of systematic design approaches can help to inform the creation of this relationship, as well as assisting in the identification of rules or constraints for parametric definition within the tool.



Creativity Support- This area of research is concerned with understanding the cognitive processes that contribute to creative ideation within architectural conceptualization. Areas of special interest include the ways in which creativity works with information to arrive at a highly functional and inspired final design.

Design Constraints- This area of study is concerned with the understanding and subsequent parametric representation of architectural constraints. These constraints include set limitations such as lot size and setbacks, but they also include other more nuanced aspects as well. Concerns such as the desire to flow with site topography, the desire for certain lighting and shading conditions, and a preference for geometric or organic forms may be addressed in an attempt to codify methods to translate intangible goals into tangible relationships of form. (A sketch of how such set limitations might be codified parametrically appears at the end of this section.)

Computational Methods of Generation and Analysis- Focusing on research into the underlying processes at play in visual and text-based programming, this area of exploration will assist in the process of creating the prototype computational tool. A better understanding of how the computer processes input will inform the ability to streamline the parametric definition of modules, allow for the implementation of variable parametric scripting organizations, and enable the implementation of code within the UI as designed.

User Interface- One major factor identified as important for creativity support in a computational tool is the facilitation of creative flow (NSF p 14). Much like graphic design, ideal representations of UI allow for easy and direct access to the represented information. To enable creativity support the UI will need to be as intuitive as possible.

Architectural Representation- Similar to the importance of UI, architectural representation will serve to support creative flow through intuitive interaction. Since this tool is meant for use by architects, commonly used graphic conventions in architectural representation will be applied wherever possible to facilitate easy understanding of the information.
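As promised under Design Constraints above, the following minimal Python sketch shows how the set limitations (lot size, setbacks, height limit) might be codified as a parametric test. The field names, units, and numbers are invented for illustration; a production tool would derive them from actual zoning data.

from dataclasses import dataclass

# Hypothetical sketch of codifying regulatory design constraints as a
# parametric test; all fields and values are invented for illustration.

@dataclass
class Lot:
    width: float          # meters
    depth: float          # meters
    front_setback: float
    side_setback: float
    rear_setback: float
    height_limit: float

def buildable_envelope(lot: Lot):
    """Return the footprint (w, d) and height of the maximum
    buildable volume after setbacks are applied."""
    w = lot.width - 2 * lot.side_setback
    d = lot.depth - lot.front_setback - lot.rear_setback
    return max(w, 0), max(d, 0), lot.height_limit

def complies(mass_w, mass_d, mass_h, lot: Lot) -> bool:
    """Check a candidate massing against the envelope."""
    w, d, h = buildable_envelope(lot)
    return mass_w <= w and mass_d <= d and mass_h <= h

lot = Lot(width=30, depth=40, front_setback=5,
          side_setback=3, rear_setback=8, height_limit=18)
print(buildable_envelope(lot))    # (24, 27, 18)
print(complies(20, 25, 15, lot))  # True

Expressed this way, a constraint becomes a reusable predicate: generated iterations that fail it can be filtered out automatically, while the more nuanced, intangible goals described above remain matters for designer judgment.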


2.2 Research Methodology
Each of these areas of study required the application of the research methodology best suited to the type of information desired.

Case Study- This methodology was applied to the examination of architectural processes, systematic design approaches, design constraints, architectural representation, and UI. Through comparative analysis, the benefits and limitations of each examined methodology or design become more readily apparent and help to guide decision-making throughout the process.

Experimental Simulation- This methodology was applied to the creation of the tool and to the research and exploration of identified design constraints, where a thorough understanding of each of the selected constraints is necessary to enable simulation. Use of the prototype tool within the experimental simulation methodology will be used to evaluate

the success of methods of analysis and filtration, as well as success in the creation of a method for parametric definition that allows for rapid redefinition of the underlying generative logic. Survey/Informational Interview- This methodology was applied to gather initial requirements for interface design and to determine the most important aspects to be parametrically defined for use in conceptualization. Additionally, this methodology will be applied to test the proposed methods of interface and representation, since observational studies of users have been proposed as the best evaluation methodology for creativity support tools (NSF p 36). This methodology will also be used to evaluate the final success of the prototype tool. This is necessary because, as opposed to using efficiency or productivity as a measure of success for creativity support tools, subjective experiences of flow and design quality need to be evaluated (NSF p 72).


2.3 Literature Review Since the span of our research covers a variety of topics in disparate areas and the focus of this thesis is the synthesis of examined information into a cohesive whole, this literature review is thematically organized based on the minor subject divisions contained in the theoretical framework. Sources are examined only against related topics, but since some of the research is cross-disciplinary it appears in multiple sections.

Conventional Architecture Process Research on the conventional architectural process is an important subject for the development of this thesis, requiring the drawing together of multiple sources to understand both the idealized models that drive our conception of practice and the daily reality of architectural design in the real world. In this area, the Architect’s Handbook of Professional Practice serves as a trusted source for the definition of many terms and ideas (Hayes 2013). The legal documents from which many of the terms are derived are widely used in the profession. As such, this serves as the baseline for framing the rest of our research to allow for standardization of the many terms likely to be used by subjects that are interviewed. Additionally, Conceptual Design Tools for Architects has been an invaluable reference for initial explorations into the conceptual process (Parthenios 2005). Although somewhat outdated, it provides detailed examples of case studies on the conceptual process from practicing architects and architecture students, a survey of over 200 architectural practitioners concerning conceptualization processes, and examples of testing methodologies for conceptualization tools. Due to the rapidly changing nature of computational tools, much of the information regarding these methods is no longer useful, but the information related to conventional processes remains relevant.


Systematic Architectural Design Research regarding systematic methods of architectural design is possibly the most difficult subject utilized in our study. The architects and firms that use these methodologies tend not to publish detailed information about their processes, likely due to the rather limited audience for such work. However, through the examination of multiple sources by the same author or authors, a picture can be created that more fully describes the processes used. The Bjarke Ingels Group has been of particular interest in this process and many videos and texts have been examined relating to their work and presentations, but the book HOT TO COLD: An Odyssey of Architectural Adaptation has been the most complete and consistently useful reference for their work (Ingels 2015). Although the success of their architectural projects is widely debated by theorists, the authors believe that the popularity, performativity, and subjective appeal make their work worthwhile for examination. Most of the resources on this subject fall into a similar area, with the other main source being The Modern Modular: Prefab Houses of Resolution 4 Architecture (Tanney and Luntz 2013).

Although the appropriateness of the design methodologies employed by Resolution 4 Architecture is somewhat controversial, we find benefit in an examination of their work due to a remarkably consistent logic applied throughout. If measured by the terms that they state as important, the results of their systematic approach are certainly a success.

Creativity Support Research on creativity support is somewhat difficult to find due to the emerging nature of this field. However, two works have been instrumental in the initial research on the subject, and additional resources will be drawn from many of the references cited within these papers. The NSF Workshop on Creativity Support Tools provided a foundation for an understanding and definition of creativity support (Hewett et al 2005). This paper has led to the discovery of additional resources organized under the Information and Intelligent Systems branch of the National Science Foundation. Despite the usefulness of this information, the research is related to general design, and the Conceptual Model of Design Creativity has provided a more focused picture of creativity within the architectural discipline (Fakhra 2012).

Design Constraints Various case studies and standards will create the basis for research into design constraints. The Beach-Howe Tower, designed by BIG, provided an inspirational example that helped to define the first and most simplistic relationship between massing and building setbacks. The most important aspect of this project was the 3-dimensional setback established by a raised highway that ran above the property. Additional inspiration and design constraints related to light penetration have been explored using the Well Building Standard (Delos Living LLC 2015). This standard is especially helpful because it establishes best practices based on referenced research concerned with how the built environment impacts bodily systems.

Computational Methods of Generation and Analysis A wealth of information exists regarding the workings of computational tools, which enables a more detailed understanding of how to work with them. The majority of the available references are technical in nature and describe purely functional aspects, which makes evaluation of source quality unnecessary. However, the wealth of information that informs the method of application for computational tools is somewhat more debatable. The primary resource used for preliminary information in this area is the doctoral dissertation Modeled on Software Engineering: Flexible Parametric Models in the Practice of Architecture (Davis 2013). This dissertation presents a thorough examination of computational practices in architecture and contrasts common practices with those found in the field of software engineering. Advanced practices for parametric modeling are presented based on agile programming methods. Another doctoral dissertation is Towards a Generative Design System Based on Evolutionary Computing (Estkowski 2013). This dissertation proposes the theoretical framework for the development of a computational tool concerned with the application of generative design to architecture. Although the theoretical framework


proves of little use for our purposes due to its different direction of emphasis, the analysis of its possible use based on the real-world experience of the author is invaluable.

User Interface The topic of user interface came relatively late to this study, after its importance was emphasized by research on creativity support from the NSF Workshop on Creativity Support Tools (Hewett et al 2005). The best practices outlined in the proceedings from this workshop served as an important introduction to the concepts of UI in relation to creativity support. Additional primary research on the subject comes from Parameters Tell the Design Story: Ideation and Abstraction in Design Optimization (Bradner et al 2014). The methodologies used to research the use of design optimization in architectural conceptualization included semi-structured interviews and surveys of 18 designers. Conclusions are conservatively presented but support the desire for a conceptualization tool based on parametric design tools.



Chapter Three : Research and Analysis



3 Computational and Conventional Processes To achieve the stated goal of integrating computational methodologies with conventional architectural processes, it is important to understand the conventional architectural process. The workflow and methods employed in conventional architectural development shape the manner in which computational methods should be used and the types of user interaction necessary. The limitations of the conventional architectural process suggest where the application of computational methods can have the greatest positive impact on the process. A comparison between conventional architectural processes and computational processes highlights some of the issues with current implementations of computational methodologies to ensure that this project avoids replicating these issues. Careful consideration of the benefits and limitations of each approach contributes to the development of a more carefully considered and useful hybrid process that can further augment architectural design.

3.1 The Design Process Model AIA Definition As in the rest of this study, the AIA Handbook serves as an excellent starting point for exploration of this topic. From the ideas put forward in the handbook, we can create a diagrammatic model of the conventional process, gradually moving from general to more specific and applicable. This model will serve as a guide to the analysis and development that follows it, and as such it is important that it be as accurate as possible. First and foremost, it is important to note that although often presented as a linear process, the design sequence is messy and “evaluation often reveals new ways of considering the problem - ways that affect decisions made during previous stages in the process” (American, I. O. A. 2013, p 660). Therefore, the design process is cyclical and could go on indefinitely.



Figure 3.1 Cyclical Design Process

Often the limiting factors that put an end to this cycle are time/budget and the perception of diminishing returns. From this, it is important to understand that all models of the design process must be cyclical, with the result able to be completed at any point (illustrated in figure 3.1). The AIA describes the conventional architectural process as an “iterative process of generating and evaluating alternatives - methodically refining the project and zeroing in on the final solution” (American, I. O. A. 2013, p 654). In this iterative process, it is important to note a third major action that is implied by the evaluation of alternatives and used elsewhere in the text: selection. Although never specifically defined, this act of the selection of alternatives based on evaluation is quite important to this model of the iterative design process. This idea is illustrated in figure 3.2.

Figure 3.2 Major actions of the design process


To achieve these steps of iteration and evaluation in a consistent and comprehensible manner, a number of conceptual tools are required. The first and foremost of these is the establishment of project objectives. The analysis portion of an architectural project produces a significant amount of data that can be difficult to manage and to implement in an informed manner. According to the AIA, “a common set of objectives should be established... understood by the entire design team... and an appropriate metric determined for evaluating the outcome” (American, I. O. A. 2013, p 659). Through establishing these goals the work of the architectural team can become more focused and the use of data more appropriate to the desired project outcomes. Once goals are established it is important to “prioritize the data in a way that is consistent with the established goals... Ideally, this prioritization provides focus and greater clarity” (p 659). From this description of the


architectural design process by the AIA, one of the most important aspects of the conventional design process is the establishment of a generative logic that drives the project. This generative logic can also be referred to as the concept or vision behind the process (p 660). More specifically, it can be said that the generative logic is “a core set of architectural values that result from the established goals and prioritized analysis” (p 660). This aspect of the design process is both the inspiration and the test for design alternatives, and although it may be malleable throughout the process, influenced by the discovery of new information or a better understanding of the design problem, it is always the driving force. Both the importance and the function of this generative logic are illustrated in figure 3.3. At this point it is especially important to reiterate that although the ideas of goal setting, prioritization, and the resulting generative logic are important at each stage of the process, the AIA states that “as the design solution evolves and is refined, so, often, are the generative logic and goals. These important values need

to be continuously challenged and validated” (American, I. O. A. 2013, p 660). From these descriptions, the AIA model of the design process can be summarized as a cyclical model containing four broad categories of action (generation, evaluation, refinement, and selection) and a single conceptual tool (generative logic), based on two team-based exercises (goal setting and prioritization). This refined model is illustrated in figure 3.4.
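Viewed computationally, this refined model can be sketched as a simple loop. The following sketch is purely illustrative: the function names (generate, evaluate, select, refine) and the representation of the generative logic are hypothetical stand-ins, not part of the AIA model or of the proposed tool.

# A minimal, hypothetical sketch of the cyclical design process model:
# goal setting and prioritization yield a generative logic, which drives
# repeated generation, evaluation, selection, and refinement until time,
# budget, or diminishing returns end the cycle.

def derive_generative_logic(goals, priorities):
    # A team exercise in practice; represented here simply as goals
    # ordered by their assigned priority.
    return sorted(goals, key=lambda g: priorities.get(g, 0), reverse=True)

def design_cycle(goals, priorities, generate, evaluate, select, refine,
                 out_of_resources):
    logic = derive_generative_logic(goals, priorities)
    design = None
    while not out_of_resources():
        alternatives = generate(logic)                  # generation
        scored = [(a, evaluate(a, logic)) for a in alternatives]
        design = select(scored)                         # selection
        logic = refine(logic, design)                   # refinement may also
                                                        # revise goals/logic
    return design  # the cycle can stop at any point with a usable result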

Alternatives Further research into these types of design process models shows that although many variations use slightly different terminology, they contain common elements such as a description of the process as cyclical and have similar levels of detail in their expression. A useful comparison to demonstrate these similarities is the model proposed by Markus and Maver, which encapsulates the design process in four steps of analysis, synthesis, appraisal, and decision. In this model of the design process, these steps are thought to take place at ever greater levels of detail, as illustrated in figure 3.5. Based on studies of

Figure 3.3 Conceptual tools of goal setting and prioritization

Figure 3.4 Conceptual tools of goal setting and prioritization drive creation of the generative logic



Figure 3.5 Influence of generative logic on the design process actions of generation and evaluation

the architectural process (Lawson 1997, p 39), it can be demonstrated that architectural design does not necessarily progress by addressing gradually smaller levels of detail. Instead, the design process will often involve moving from one scale to the next to inform the design by the most important aspects. Despite this issue with the model, we can see that the four steps align with those proposed by the AIA (appraisal/evaluation, selection/decision, analysis/refinement, synthesis/generation). Similar to this model is the work of Schon, as quoted by Cross: “In order to formulate a design problem to be solved, the designer must frame a problematic design situation: set its boundaries, select particular things and relations for attention, and impose on the situation a coherence that guides subsequent moves” (2004, p 430). This description mirrors the AIA framework of the generative logic of a design problem in which goal setting


and prioritization contribute to the concept that guides an architectural project. This same model of generative logic was found in the research of Lloyd and Scott on the architectural design process and referred to as the “problem paradigm,” and in the research of Darke concerning expert architects, who referred to this process as the guiding principles or primary generators (Cross 2004, p 432). In Darke’s research, she defined a similar design process model using the terms of generator, conjecture, and analysis. In essence, “first decide what you think might be an important aspect of the problem, develop a crude design on this basis and then examine it to see what else you can discover about the problem” (Lawson 1997, p 45). Although many writings on the design process frame these steps in a somewhat different order, it is generally agreed that these are the common activities that take place during design. From these commonly


agreed-upon descriptions of the design process, the authors can surmise that the proposed tool should address these areas of action in the AIA model of design. However, a more detailed model and an understanding of how this model looks when compared to studies of the actual practice is necessary to create a comprehensive understanding.

Usefulness in Practice With this model in mind, it becomes important to better understand how architects work within the design process. This understanding can be reached by comparing the outcome of design process case studies and observational experiments to the model created above. Through this understanding, the model can be adjusted and become more detailed and useful for understanding the role that computational methods can play in the augmentation of the conventional design process. To achieve this, a number of observational experiments are compared to the design process model and used to either confirm or refute its accuracy. Additionally, a number of design process case studies were compared to the design process

model and used to either confirm or refute its accuracy. Any discrepancies found were used to adjust the model. Two useful studies to begin this process were conducted by Eastman in 1970 and Akin in 1986. The study carried out by Eastman asked participants to redesign a bathroom based on an existing design and client criticism of that design. The participants were invited to draw and talk about their process, which was recorded and the results analyzed to draw conclusions. In this study Eastman found that generation and analysis took place simultaneously during the design process, with refinement of the design problem being largely dependent on analysis of their solutions. The study by Akin asked participants to design a building, and similar methods of data collection were used, with recordings being analyzed at a later time to draw conclusions. Similarly, Akin found that designers were constantly identifying new goals and redefining constraints through the generation of design solutions. (Lawson 1997, p 43)

Figure 3.6 Influence of design process actions on the conceptual tools of goal setting and prioritization



Figure 3.7 The singular mental process containing the discrete actions of generation and analysis


These studies identify one major flaw of the conventional architectural process as commonly modeled by theorists. Although described as discrete actions, these findings illustrate that during individual exercises in design these activities occur simultaneously through designing solutions. This conclusion is also supported by a study conducted by Lawson in 1979 in which the problem-solving approaches of architecture students and science students were compared using similar observational methods. It concluded that in contrast to the science students, who focused on understanding the underlying rules of the problem at hand, “architecture students consistently used a strategy of analysis through synthesis. They learned about the problem through attempts to create solutions rather than through deliberate and separate study of the problem itself” (Lawson 1997, p 43). It seems that because architects use this strategy of “analysis through synthesis” it becomes difficult to decompose the individual

activities of design when examining the workflow of any singular architect using conventional design processes. Based on this new information it may be helpful to link generation and evaluation more closely in our diagrammatic process model to represent that although considered independent activities, they are part of a single mental process (Figure 3.7). Ideally, these activities should be allowed to take place simultaneously. Beyond this, it can be said that this process of “analysis through synthesis” may contain significant benefits. Akin and Lin undertook a protocol study of experienced engineers in an attempt to identify the root cause of what were called ‘Novel Design Decisions’ (NDDs). These were defined as decisions that are critical to the development of a design concept. In this case, design activities were distinguished as drawing, examining, and thinking. Quoting Akin and Lin via Cross, “six out of a total of eight times a novel design decision was made, we


found the subject alternating between these three activity modes (examining-drawing-thinking) in rapid succession” (2004, p 436). Although no conclusions regarding cause were drawn, by in essence blending the different activities of design into a single process these NDDs were more frequently achieved. In a similar case, a protocol study conducted by Suwa, Gero, and Purcell of an experienced architect found that “not only did unexpected discoveries become the driving force for invention of issues or requirements, but also the occurrence of invention, in turn, tended to cause new unexpected discoveries” (Cross 2004, p 437). These studies suggest that not only is it difficult to separate the processes of generation, evaluation, and refinement of the generative logic - but that it may also be harmful to the design process to do so.

Analysis From this study of research focused on describing the design process, with specific emphasis on the literature regarding architectural design, it was demonstrated that although general models of the design process have their limitations - when

compared with actual design tasks the categories of action are consistently accurate. It is difficult to predict when actions will be taken and in what order, so any work concerned with the architectural process must account for this variability. Furthermore, greater detail is necessary to draw meaningful conclusions from what happens during each action category. To achieve this, a more in-depth exploration of the actions as they relate to deliverables for early stages of the design process will be undertaken, to find out what components will be necessary to achieve these actions while working towards the end product. The development of a detailed understanding of the conventional design process was an important factor in the creation of the proposed tool. As the stated goal of this project was to work with the conventional design process, it is important that the use of the tool allow for this wide variance in behavior, as well as work in ways that are consistent with the types of activities performed in the design process. From this understanding, it can be concluded that although computational methods can contribute to all aspects of this



Figure 3.8 The importance of rapid alternation between generation and evaluation during which the thought process occurs

model of design, perhaps the two most critical areas to address include the establishment and communication of generative logic and the evaluation of proposed alternatives against this logic. Although the generation of ideal expressions of this generative logic is important for communication and the evaluation of proposed alternatives, current feedback suggests that the original goal of creativity support was somewhat misguided. Ultimately, conclusions regarding these questions will be best resolved in testing of the proposed tool, discussed in Chapter 5.

3.2 Cognitive Research and the Conventional Architectural Process At this point, it is important to explore some of the limitations and intricacies of the architectural design process from the perspective of cognitive research. Now that the design process is defined, its limitations must also be understood to outline how successful cognitive augmentation might take place. After a survey of


available research, it seems that there are four broad categories which are useful to address. The first is the cognitive limitations on the number of variables that can be accounted for when problem-solving. The second is the distinction between novice and expert behavior in the field. The third is the impact of change blindness on designers. The fourth is the tendency for architectural designers to continue with the development of initial conceptual ideas, even when encountering significant issues with this conceptual framework past the point of diminishing returns. Each of these limitations will be taken into consideration when creating the proposed tool to address these limitations as effectively as possible.

Limitations in Problem Solving A study conducted at the University of Minnesota in 2015 sought to understand how human subjects approached complex decision making and the cognitive limits of their abilities to plan ahead to achieve the


highest rewards. This study found that “even when the information needed for the prospective computation is explicit and unambiguous, planner’s limited computational power restrict the depth and quality of prospection” (Snider et al. 2015, p 2). In essence, this means that people do not employ simplification strategies in decision making and, instead, attempt to use all information available to search for the best path through multiple dependent decisions. Due to cognitive limitations, this approach reduces the depth and quality of planning. This conclusion suggests that the use of tools to overcome these cognitive limitations is of the utmost importance if attempting to achieve the best outcome from a series of decisions. In the context of architecture, this means that unless the appropriate tools are utilized when attempting to balance the many complex decisions involved in architectural design, resulting solutions will generally be sub-optimal. Although this does not imply that the final architecture will not satisfy the requirements of the occupants, client, and architect, it does mean that the final architecture

won’t likely represent the best possible configuration. This distinction may appear petty, but if the goal is to approach perfection in the built environment, employing increasingly efficient processes is essential to achieving higher satisficing solutions. This leads to the final assumption that if better processes are employed, a higher satisficing solution can be achieved than with conventional processes, if both approaches are held to the same time limitations. Alternatively, it can be said that if better processes are employed, similarly satisficing solutions can be achieved in a shorter time than when employing the conventional architectural process. This study also found that when recalculating future actions at every step, the depth of the analysis dropped (Snider et al. 2015, p 19). This suggests that the continual synthesis and analysis process architects undertake limits the number of parameters that can be considered. Additionally, when pressured with time constraints, the only way that participants maintained a high depth of analysis was to delay evaluation and refinement of alternatives (Snider et al. 2015, p 19).



These conclusions suggest that if a firm wants to achieve the highest level of results from inexperienced designers in deadline situations, they will need to employ processes that support the analysis and refinement of design options beyond the cognitive limitations of designers. Such a solution would likely address three of the four actions in the design process model, including goal setting

Figure 3.9 Cognitive Limitation in Problem Solving


and prioritization, generative logic, and generation (as illustrated in figure 3.9).

Comparing Design Experience In the study of the differences between novice and experienced architectural designers, the work of Nigel Cross was a particularly important starting point. In an overview paper regarding expertise in design, Cross gathers research from a number of different sources to examine the commonalities and differences between novice and expert designers in the fields of architecture, engineering, and industrial design. The first important aspect of this overview was taken from studies of Kavakli and Gero in which they compared the cognitive activity of an experienced and a novice architect over the course of a design task. In this study, the authors found that the experienced architect had nearly triple the cognitive activity, with higher levels of simultaneous cognitive action. Additionally, it was found that the novice architect had an early peak in cognitive action followed by a steady decline in activity throughout the design task while the cognitive activity


of the experienced architect continued to increase throughout the design task. The study concluded that it was likely that “the expert seems to have control of his cognitive activity and governs his performance in a more efficient way than the novice, because his cognitive actions are well organized and clearly structured” (Cross 2004, p 431). From this, we can conclude that the novice architect requires more assistance with the organization and structuring of their workflows. Additionally, it could be theorized that the organization and structuring that takes place in the mind of the experienced architect represents a learned method to overcome the cognitive tradeoff between depth of thought and recalculation of direction identified by Snider et al. These conclusions have special significance when combined with the demonstrated necessity for the creation of a generative logic and constant evaluation of design alternatives against that logic. While it seems that experienced architects have learned strategies to overcome these limitations, the creation of a system to guide novice architects in this task

would likely be highly beneficial for increasing both the amount and quality of their output. Such a solution would likely concern three of the four areas outlined in the design process model including evaluation, goal setting and prioritization, and generative logic (as illustrated in figure 3.10).

Figure 3.10 Novice Organization and Structure



Finally, a study of outstanding architects conducted by Lawson that utilized interview and observational methodologies found that one common quality of this group was to work on parallel lines of thought far into the project. That is, to consider multiple levels of detail simultaneously throughout the design process without committing to a concrete decision in any area. Quoting Lawson via Cross “a degree of bravery is required to allow these lines of thought to remain parallel rather longer than might seem reasonable to the inexperienced designer” (2004, p 436). This suggests that one possible area for improvement in novice designers is the ability to create workflows that encourage the continual variation of design details at every level of detail simultaneously.

Visual Perception Like other cognitive limitations, those associated with how we perceive and understand visual information are driven by the cognitive processing required to use that information. In essentially all cases, the greater the cognitive load required, the more likely the user is to take longer at the task and to make errors. In the Handbook of


Visual Spatial Thinking it is explained in this way: “the mental transformations necessary to align or compare frames of reference are cognitively demanding... these demands are reflected in increased time, increased likelihood of errors, and increased mental workload” (Wickens et al 2005, p 4). In addition to this cognitive load imposed by the requirement to translate information from one environment to another, it is also important to note the phenomenon of change blindness. Due to the cognitive mechanisms involved in visual perception and the storage of visual information, change blindness is defined as “the striking failure to see large changes that normally would be noticed easily” (Simons et al. 2005, p 15). Perhaps the most famous example of this phenomenon is an experiment conducted by Daniel Simons and Daniel Levin where “a trained actor approached an unsuspecting member of the public, map in hand and in a crowded place with lots of pedestrian traffic, and began to ask for directions. Then, by means of a clever maneuver involving two workmen and a door, a second actor replaced the first in the


middle of the conversation. The second actor could have different clothing and different hair color, yet more than 50 percent of the time the unsuspecting participants failed to notice the substitution.” (Ware 2010, p 1). The appearance of these results in such a natural setting supports the conclusion that change blindness is due to “a general failure to retain and/or compare information from moment to moment. Moreover, this counter intuitiveness itself is of scientific interest; most people firmly believe that they would notice such changes” (Simons et al 2005, p 17). Despite the best intentions of people, and the general belief by most that they are not subject to these cognitive limitations, it has been shown by many studies to have quite a large effect on how each of us perceives the world. Furthermore, we would expect that by simply paying close attention to a feature changes would be easy to detect, but again research shows that “although attention is necessary for conscious change perception, it might not be sufficient. Changes to attended objects frequently go unnoticed, particularly when the changes are unexpected”

(Simons et al. 2005, p 17). Although these studies do not particularly focus on the cognitive limitations of designers, the implication of this information is quite important to the study of process in architecture. The conclusions that can be drawn are twofold. First, that to ensure that visual information is both processed and accounted for, it is important to utilize graphic methods that make information highly apparent based on visual processing

Figure 3.11 Visual Perception - Blindness



mechanisms. And second, that to decrease the cognitive load imposed by visual processing and translation needs in the digital environment, information should be placed in the same frame of reference as much as possible - and the reasons for the use of separate frames should be compelling. Although somewhat concerned with the facilitation of all phases of the design process model, such a solution would likely be primarily concerned with addressing these issues within the evaluation phase (as shown in figure 3.11).

Opportunistic Behavior Common to all designers, and perhaps an issue arising from the use of a generative logic to guide the development of a project, is the tendency to continue using preliminary concepts long after they have encountered serious problems. This phenomenon was found in case study research of professional architectural design conducted by Rowe, as well as in studies conducted on engineers by two separate authors, and in studies of software engineers conducted by Guindon (Cross 2004, p 433). Speculation from Guindon regarding


this phenomenon suggests that this behavior “may be a natural consequence of the ill-structuredness of problems in the early stages of design” (Cross 2004, p 436). It is possible that the effort required to redefine the generative logic and revise preliminary concepts is not seen as rewarding enough to be worth the effort. From our case studies of larger architectural firms such as BIG, we found that these preliminary concepts are revised often in the early stages of design. This disconnect between the practice of less constrained firms and the results of studies concerning the general working population suggests that the root issue is not cognitive, but resource related. Therefore, if a tool allowed designers to more quickly consider modifications to their initial concepts, it is likely that they would take advantage of this opportunity. If the proposed tool made changes to the generative logic and initial concepts more readily accessible, it is likely that further design options could be considered, ultimately leading to a more comprehensive design solution. Although somewhat


concerned with the generation of solutions due to the alteration of an ideal form based on changes to other areas of the design process, this solution is mainly concerned with addressing the alteration and representation of generative logic, and of goal setting and prioritization (as illustrated in figure 3.12).

Although certainly not a comprehensive list, these factors represent those that seemed most compelling and widely accepted in the cognitive research reviewed. Additionally, these limitations suggest solutions that upon preliminary examination lie within the proposed scope of the thesis project.

Analysis From these studies we have identified four cognitive limitations that can be seen in the conventional design process:

Trade-offs that must consistently be made concerning depth and frequency of planning, which leads to a reduction in one or the other

A lack of intuitive organizational strategies for novice designers

Difficulties with identifying significant differences when visually evaluating design alternatives

The tendency of architectural designers to stick with sub-optimal concepts when faced with time constraints

Figure 3.12 Opportunistic Behavior in Design - Limitation on Restructure Logic



3.3 Limitations of Computational Methodologies Each of the outlined cognitive limitations suggests a corresponding computational solution: the creation of a system to encapsulate the full depth of design information, the creation of an organizational system for information, the overlay of this information on digital representations of a design solution, and the ability to rapidly recalculate the goals and priorities of a project to produce new iterations when under a time deadline. These computational solutions represent the full scope of the proposed tool as a method to augment the conventional architectural process. To further understand the implication of these conclusions and to develop solutions for each issue, an examination of the limitations of current computational methods will serve to highlight existing solutions to these issues and the reasons that they do or do not solve these problems. Although a comparison of each of these issues against a single program was initially desired, the ability to comprehensively search for and test these features across all existing


architectural programs represented far too large of an undertaking for the benefit gained. Therefore, this comparison was limited to the four most popular programs for architectural 3d modeling (Revit, Sketchup, Rhinoceros, and Maya) and to commonly used analysis plugins (as defined by our exposure to these tools in related research). From this examination, best practices for user interface design can be contrasted with currently existing computational methods and final recommendations made for the form of computational methodologies that will comprise the proposed tool.

Depth and Frequency of Planning This cognitive limitation suggests the need to create a system which encapsulates the full depth of design information without imposing a large cognitive load on the designer. At present, none of the 3d modeling programs implement a solution to assist in overcoming this cognitive limitation. In other 3d modeling programs, such as Generative Components by Bentley, a history-based approach to parametric modeling is


employed that allows for an overall view of the logic that has created the 3d model, but none of the systems examined help to organize the myriad of outside factors that contribute to design. Research suggested that these limitations are partially due to the tendency of humans to use brute force calculation methods for solving problems rather than simplified heuristic strategies (Simons et al 2005), which, when difficult to implement for tackling a large problem, seems to result in the selection of just a few issues to address as inspiration for the design. Given these limitations, some sort of organizational method for relevant information is necessary. To achieve this, the proposed tool will utilize a sidebar approach within which all of the relevant information for the project is contained as an addition to the conventional 3d modeling environment (shown in figure 3.13).

Figure 3.13 User Interface - Side Bar Concept
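As a rough illustration of this sidebar concept, the relevant project information could be held in one nested structure, grouped into categories that the sidebar displays alongside the modeling environment. The categories and values below are invented examples, not a schema from the prototype tool.

# Hypothetical sketch of the sidebar's information model. All categories
# and values are illustrative examples only.

project_sidebar = {
    "site": {"lot_area_sf": 12000, "topography": "slopes to the southwest"},
    "regulatory": {"max_height_ft": 240,
                   "setbacks_ft": {"front": 10, "side": 5, "rear": 15}},
    "environment": {"solar_orientation": "south",
                    "prevailing_wind": "northwest"},
    "goals": ["daylight penetration", "views", "passive solar"],
}

def sidebar_rows(data):
    # Flatten the nested categories into displayable (category, field, value) rows.
    for category, fields in data.items():
        if isinstance(fields, dict):
            for field, value in fields.items():
                yield category, field, value
        else:
            yield category, "", fields

for row in sidebar_rows(project_sidebar):
    print(row)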



Organizational Strategies This cognitive limitation suggests the need for the creation of a system for the organization of information and design strategies to be used by novice designers. Sefaira is a good example of a computational methodology that addresses this limitation. Through analysis, this program provides suggestions to alter the design to achieve higher performance in particular areas. This strategy is useful, but the disconnection between the analysis and modeling environment requires a high amount of manual adjustment and iteration by the

designer. Although this independent strategy is likely somewhat successful, integrating these strategies within the 3d modeling environment can make the statement of design strategies and their relationship to the formal output more explicit. To overcome this limitation, the creation of benchmark forms based on a variety of strategies to alter the existing massing will assist in the formalization of design strategies. Additionally, an organizational strategy to outline which strategies were chosen and their amount of

Figure 3.14 User Interface - Organization and sub categories



influence on the form will assist in the continued development of the design in a manner more in line with the cognitive strategies employed by expert designers. This basic organization will provide some structure to the design generation and analysis conducted by the novice architect (as shown in figure 3.14). This cognitive limitation will be further addressed with regards to the restructuring and reorganization of the generative logic, and of goal setting and prioritization.

Visual Identification of Change This cognitive limitation suggests the need to overlay relevant information within the digital representation of a design solution and to clearly highlight the relative differences between proposed design solutions. To address this cognitive limitation the proposed tool includes three features:

Display of analysis information always takes place in the appropriate reference frame

Display of analysis information uses color against a non-colored background

Analysis metrics are displayed alongside a colored percent difference metric that compares the current iteration against the ideal form based on the generative logic.
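The third feature can be made concrete with a short sketch: each analysis metric of the current iteration is expressed as a signed percent difference from the ideal form, which can then be mapped to a display color. The metric names and color thresholds below are assumptions for illustration, not values from the prototype tool.

# Hypothetical sketch of the percent difference metric. Metric names and
# thresholds are invented for illustration.

def percent_difference(current, ideal):
    # Signed percent difference of each metric relative to the ideal form.
    return {k: 100.0 * (current[k] - ideal[k]) / ideal[k]
            for k in ideal if ideal[k] != 0}

def display_color(diff_pct, tolerance=5.0):
    # Map a difference to a color shown against the neutral background.
    if abs(diff_pct) <= tolerance:
        return "green"                              # near the ideal
    return "orange" if diff_pct < 0 else "red"      # shortfall vs. overshoot

ideal = {"daylight_hours": 6.2, "floor_area_sf": 95000}
iteration = {"daylight_hours": 5.4, "floor_area_sf": 98000}

for metric, diff in percent_difference(iteration, ideal).items():
    print(metric, round(diff, 1), display_color(diff))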

This is a category in which many useful programs exist for comparison against the proposed tool, due to the popularity of environmental analysis. For example, Autodesk Ecotect (which was recently integrated into Revit) can perform a variety of environmental analysis functions within the 3D Revit environment using both detailed models and conceptual analysis. Multiple iterations can be compared against one another to determine the best performing results and relevant diagrams created. However, Lawson notes that “Modern building science techniques have generally only provided methods of predicting how well a design solution will work. They are simply tools of evaluation and give no help at all with synthesis” (1997, p 58). This analysis certainly applies to Ecotect and represents the barrier that the proposed tool is attempting to break. The analysis methods provided by Ecotect solve the common issue of the unnecessary



Figure 3.15 User Interface - Display within 3D environment

cognitive load introduced by requiring the architect to translate between a two-dimensional diagram and a three-dimensional modeling environment (Wickens et al. 2005, p 9). When analysis information is displayed within the 3D environment it is more readily understood by the viewer within the context of other information, and this approach will be similarly implemented in the proposed tool (as shown in figure 3.15). The use of color against a non-colored background increases the level of contrast as seen by the primary


visual processing systems of the brain. This is supported by the understanding: “Visual distinctness has as much to do with the visual characteristics of the environment of an object as the characteristics of the object itself. It is the degree of feature-level contrast between an object and its surroundings that make it distinct... The simple features that lead to pop out are color, orientation, size, motion, and stereoscopic depth” (Ware 2010, p 29).

When applied to the sorting of multiple objects or features simultaneously:


“the solution is to use different channels. As we have seen, layers in the primary visual cortex are divided up into small areas that separately process the elements of form (most importantly, orientation and size), color, and motion. These can be thought of as semi-independent processing channels for visual information” (Ware 2010, p 33).

Figure 3.16 User Interface - Highlighting the currently relevant information

This is one area where Ecotect is lacking. Through the use of an interface that is rather busy and draws attention to many areas

equally, no elements are highlighted for focus and the cognitive load for visual processing rests solely with the architect. In contrast, the proposed tool utilizes as neutral of a background as possible, only highlighting the currently relevant information through use of feature-level contrast. By utilizing this approach for supporting cognitive methods of visual processing, relevant information is more quickly understood and placed in the context of design (as shown in figure 3.16).



Figure 3.17 User Interface - Changes in form

Finally, the use of the ideal form and a percentage change metric to compare the two uses contrast to highlight the differences that may not readily be noticed by the designer, even when paying close attention. This represents perhaps the most important distinction between the proposed tool and the existing alternatives. Through the creation of an ideal form which represents a target, the full possibilities of the site are realized, allowing the comparison to move beyond simple


analysis. The full implications of this approach are discussed in more detail in the next section. By utilizing this approach changes to form and the differences between the ideal configuration and current design are always highlighted for easy processing and use (as illustrated in figure 3.17).


Figure 3.18 User Interface - Goals and Priorities Selection

Avoiding Opportunistic Behavior with Limited Resources This cognitive limitation suggests the need to rapidly recalculate the goals and priorities of a project and to see the resulting implications of these changes in real time. Since none of the programs examined utilize a system for the collection and organization of the goals and priorities that drive generative logic, it follows that none of the systems provide tools for the rapid recalculation of these factors to perceive the real time effect of them on ideal building forms.

To address this cognitive limitation a section within the tool is necessary to make explicit the goals and priorities of the project (illustrated in figure 3.18), as well as to produce a diagrammatic representation of the resulting generative logic (illustrated in figure 3.19). Additionally, the goal setting and prioritization process would result in the calculation of an ideal formal output determined by the combination of ideal configurations for each selected factor (illustrated in figure 3.20). In this way, the process encapsulates all stages of the



Figure 3.19 User Interface - Diagrammatic representation of the resulting generative logic

diagrammatic model of the design process and helps to address issues of each of the other cognitive limitations previously identified.
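One way to read “the combination of ideal configurations for each selected factor” is as a priority-weighted blend. The sketch below blends per-factor ideal massing heights over a small grid; the factors, heights, and weights are invented for illustration and do not reproduce the tool's actual generative method.

# Hypothetical sketch of combining per-factor ideal configurations into a
# single ideal formal output, weighted by the project's priorities.

def combine_ideals(ideal_forms, weights):
    # Weighted average of per-factor ideal massing grids (lists of heights).
    total = sum(weights[f] for f in ideal_forms)
    size = len(next(iter(ideal_forms.values())))
    combined = [0.0] * size
    for factor, grid in ideal_forms.items():
        share = weights[factor] / total
        for i, height in enumerate(grid):
            combined[i] += share * height
    return combined

ideal_forms = {
    "daylight": [120, 90, 60, 40],    # steps down toward the sun
    "views":    [200, 200, 80, 80],   # pushes height to the street edge
    "zoning":   [150, 150, 150, 150], # flat envelope at the height limit
}
weights = {"daylight": 3, "views": 2, "zoning": 5}

print(combine_ideals(ideal_forms, weights))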

Methods of Evaluation and Comparison Perhaps the final and most important issue with computational systems is one that Cross brings up when discussing how design decisions are made. The point is two-fold and can be summed up rather precisely. First, that to rank architectural designs


using the quantitative and qualitative factors important to the project, a framework needs to be created to normalize values and to quantitatively express qualitative factors (Cross 2004, p 63). Second, that in this process the subjective nature of qualitative factors is merely shifted from how we talk about them to how we determine their quantitative value (Cross 2004, p 81). Cross concludes that due to these factors, correct or optimal answers to design solutions do not exist, suggesting that the creation of such


Figure 3.20 Combination of ideal configurations

methods of evaluation is ultimately of little use. The point of this argument warrants both consideration and discussion, but it seems that perhaps the conclusions are somewhat hasty. If the objective of ranking and evaluation is to wholly justify the choice of one design option over another, then this criticism is valid. However, the need for such an evaluation framework moves far beyond removing the responsibility

for an ultimately subjective decision from the architect or client and presenting it as a scientific inevitability. Therefore, despite the possible arguments against such an approach, the proposed tool will implement such a framework as an analysis tool to assist in the design process. As much as this tool will be used to evaluate building alternatives against one another, it will also be useful in the clarification and expression of design goals and priorities. Since one of the most emphasized aspects of the



generative logic proposed in the AIA handbook was related to its use as an organizational and communication strategy for the design team (American, I. O. A. 2013, p 660), such a framework would be helpful as a way of better understanding these priorities in the context of design solutions. Just as it may be difficult to define and prioritize design goals without a design to critique, so too is it likely difficult to accurately define priorities without a method to weight these priorities and determine if the computed result aligns with the chosen alternative. Although a member of the team may say that they value the design of passive solar strategies over aesthetics, they may consistently choose iterations that suggest otherwise. By tracking this and comparing it against the evaluation framework, it could become apparent earlier that stated goals don’t align with true preferences - helping to avoid miscommunication.
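A minimal sketch of such an evaluation framework follows: raw metrics (whether measured or qualitatively scored) are normalized to a common 0-1 scale and combined using the team's stated priority weights, so that the weighted ranking can be compared against the alternatives the team actually selects. All metrics, scores, and weights are invented for illustration.

# Hypothetical sketch of the evaluation framework: normalize values, weight
# them by stated priorities, and rank alternatives. Comparing this ranking
# with the team's actual selections can expose mismatches between stated
# goals and true preferences.

def normalize(values):
    # Scale raw metric values to 0-1 (higher is better).
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank_alternatives(alternatives, weights):
    names = list(alternatives)
    columns = {m: normalize([alternatives[n][m] for n in names])
               for m in weights}
    scores = {n: sum(weights[m] * columns[m][i] for m in weights)
              for i, n in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

alternatives = {
    "scheme_a": {"passive_solar": 8, "aesthetics": 5, "cost": 7},
    "scheme_b": {"passive_solar": 4, "aesthetics": 9, "cost": 6},
}
weights = {"passive_solar": 0.5, "aesthetics": 0.3, "cost": 0.2}

print(rank_alternatives(alternatives, weights))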


Analysis The ability to address any of the cognitive limitations identified as important to this project certainly exists. From non-digital methods to custom parametric scripting, organizational strategies could be individually implemented to ensure that these factors have a minimal effect on the functioning of a design team. However, it is a driving philosophy of this project that, to the greatest extent possible, the full design needs of an architectural office should be embedded within a single continuous system. By taking this approach, information can be more easily managed and the relationships between different aspects of the project explicitly understood. Although the benefits of this approach are not equally valuable to everyone (experienced architects do many of these actions internally), the authors speculate that these computational methods can have a large impact on the workflow of inexperienced architects or new firms. Therefore, based on the cognitive limitations examined and the limitations of existing computational methodologies, this thesis proposes the integration of these four novel workflows within a 3d modeling environment to augment the conventional design process during the pre-design and schematic design stages.




Chapter Four : Design Process



4.1 Phase One From the beginning, this project has been primarily concerned with blending computational and conventional methodologies of design. However, it is understood that this theoretical scope extends beyond the time constraints imposed by a thesis project. To truly address this question will require a lifetime of work, and this effort represents just the first step in that journey. In this phase of the thesis process, initial research guided development of the theoretical vision, project comparisons, and critical position that would define the coming terms.

Theory Based on the writing of Paul Tesar (Hargrove 2008, p 80), as referenced in Chapter 1 page 4, the authors envision this work evolving to encompass the entire architectural design process. Consequently, this process may become a conversation between human and computer where objective relationships and requirements are constantly suggested

by the design environment. We have seen the effects of this type of workflow on construction documentation and management with the implementation of BIM drastically altering the latter stages of the design process (Referenced in Chapter 1). However, the exploration of these ideas in earlier stages of design is just beginning to take place. Ideally, this type of interaction would leverage computation to make the objective aspects of design more efficient, allowing the architect more latitude within the subjective aspects of design. Although this vision for the future of design may appear problematic to some readers, architects have an opportunity to guide the implementation of these ideas by claiming control over the authorship of their digital tools. By maintaining an active role in architectural toolmaking, architects have the opportunity to consciously shape the future of the profession in the face of a constantly evolving digital world.



Comparison The search for comparisons to the proposed project proved difficult. The majority of available computational tools focus on analysis and optimization for detailed design. Studies have shown a desire to use computational tools as a starting point for architectural design exploration. However, most of the current efforts in this area are undertaken as proprietary solutions designed on a project-by-project basis using visual programming methods such as Grasshopper or Dynamo (Bradner et al 2014). Beyond these proprietary efforts, two software packages are currently in development which attempt to apply computational methods to the early stages of architectural design. Google Flux and Autodesk Dreamcatcher are examined in detail starting at page 118 of the appendix. The most common criticisms of these projects center around questions of authorship in the architectural design process. An apparent lack of transparency in the design process utilized by these programs left architects with the impression that their design process


was being replaced rather than augmented. These findings were further reinforced by the work of Estkowski, who posited that using the computational methods of others generally relinquishes some amount of authorship over the building forms to the computational designer (Estkowski 2013). Therefore, one major goal of this work is to limit this effect in the way in which information is presented and forms are created.

Position Based on the work examined in Chapters 1 and 2 of this book, a theoretical outline of goals and methods employed to create the proposed program was created. Ultimately, this work can be condensed into three main ideas that each address specific aspects of merging computational and conventional processes in a holistic manner:

Generative exploration of the design space should be used to create iterations for comparison and inspiration of conventionally created designs.

The program should focus on creativity support through an intuitive user interface that allows for rapid iteration and understanding of design information.

Parametric scripting should be undertaken in a way that avoids embedded logic as much as possible, as a way to maintain the authorship of the designer using the prototype tool. A small illustration of this last idea follows.
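The contrast can be illustrated with a hypothetical pair of massing functions: in the first, a formal decision (the taper ratio) is embedded by the script's author; in the second, the same decision is exposed as a parameter and so remains with the designer using the tool. Names and values are invented for illustration.

# Hypothetical illustration of avoiding embedded logic in parametric scripts.

def massing_embedded(lot_width, lot_depth):
    # Embedded logic: the 0.7 taper is the script author's design decision.
    return [(lot_width * 0.7 ** n, lot_depth * 0.7 ** n) for n in range(5)]

def massing_parametric(lot_width, lot_depth, taper=1.0, floors=5):
    # Exposed logic: taper and floor count remain the designer's choices.
    return [(lot_width * taper ** n, lot_depth * taper ** n)
            for n in range(floors)]

print(massing_embedded(100, 60))
print(massing_parametric(100, 60, taper=0.85, floors=8))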

These concepts represent the driving forces behind this proposal and are what the authors found to be the most critical aspects of a successful collaboration between computational methods throughout the undertaken research.

Presentation At this stage, presentation of the work was simply concerned with creating a logical and easily understood explanation of project goals and directions. Although clearly envisioned by the authors, specifics of the proposed tool would have been difficult to present or receive feedback upon at such a theoretical stage. The structure of the presentation was as follows and presentation slides can be examined in further detail in the Appendices on page 145:

Feedback and Response Feedback from this stage was varied, but was mainly concerned with the inclusion of more detail in the current examination. Questions were posed concerning the relationship to historical goals of digital tools, the context in which the proposed tool would be used, the relationship to existing design research, the methods of evaluation, and how the proposed tool could be compared to existing processes. Because of the varied audience for this tool, more depth and detail were suggested in a variety of directions, while a more concise and simply stated presentation of this detail was simultaneously sought. In response to this feedback, additional detail regarding the history of digital tools was included in Chapter 1 of the book. The context of the proposed tool was considered, and an office tower typology was chosen as the most achievable route to demonstrate the proposed tool. Presentation methods were refined to focus on more graphic methods of representation, and language was adapted to contain fewer computation-specific terms. The challenge to explore design research and theory more fully was recognized as an aspect of this project that would need to continue throughout the year. Questions concerning methods of evaluation and comparison were noted, but the authors felt that without a working prototype it would be difficult to adequately understand the most effective methods of evaluation and comparison against existing tools.



4.2 Phase Two Based on the theoretical outline created during phase 1 of the project, phase 2 was mainly concerned with the creation of basic functionality for the prototype tool and outlining a realistic workflow that would maintain a consistent level of detail throughout the project. Case studies regarding the chosen typology were undertaken, methods of creating a user interface were studied and critiqued, initial program functionality concerning analysis and massing functionality was created, and experiments concerning scripting methodologies were performed.

Typology Case Studies To address the context in which the prototype tool would be created, one primary step was the study of the office building typology to understand the variables that would most influence the types of pre-design and schematic design analysis, as well as the effects of these variables upon building form. These studies examined space usage by number of occupants, structure type, zoning constraints, exiting requirements, and building height as the main constraints upon building form. More detail on these studies can be found in the appendices on page 124.

User Interface Research and Ideation Studies of existing user interfaces were also conducted to determine the types of interaction that make a program intuitive to use and understand. These case studies involved the examination of a number of commonly used architectural programs, including Revit, 3DS Max, Rhinoceros, and Grasshopper. Additionally, material design standards from Google and Apple were examined to determine the guiding principles of these standards and the reasons for their use. Ultimately, three principles of user interface design were determined to be essential:

Creation of a clear and consistent organization throughout the interface

Economical use of user cues to simplify the interface

Communication methods that match the capabilities and expectations of the end user

More detail on this aspect of the research can be found in the appendices on page 129.

Foremost, it is important to note that all development for the prototype tool took place within Rhinoceros, using the Grasshopper visual programming environment. This choice was made due to familiarity with the environment. While better options were discovered for the creation of such a tool, the project was undertaken with the goal of creating a proof of concept rather than a fully functional and distributable plug-in. This consideration was important because none of the authors had previous experience with programming or the design of user interfaces; the two team members with previous Grasshopper experience had been using the program for one year at the start of this effort. Therefore, desires to create a useable tool were balanced with realistic expectations for the amount of outside learning that would have to go into this project.

Initial Program Functionality Initially, experiments into program functionality were based on the exploration of conceptual drivers for significant projects. It was proposed that strategies could be created based on these driving concepts that would inform the schematic design of new work. This vision of the prototype tool was based on the case study method of design exploration that had been primarily experienced during previous studio projects. Scripts were written within Grasshopper to simulate each of these factors and translate the results of each simulation into alterations of building massing. Diagrams further detailing this work can be found in the appendices on page 132. Despite successful simulation, feedback indicated that these concepts addressed situations perceived as too specific to the projects selected, and it was suggested that more generally applicable variables be explored.

Additionally, it was suggested that to improve the continuity of the project and clarify demonstration of the core concept, it would be helpful to maintain a similar level of detail across all levels of the proposed tool. A decision was made to focus on the aspects of analysis and generation that could come from urban form and environmental factors. Proposals to explore the effects of human actors on the building form were abandoned at this time, completely removing one major area of the initial concept. Revised scheduling charts concerning this change can be examined in more detail on page 131 of the appendices. Although this meant abandoning certain areas of interest that had already undergone preliminary development in each of the major areas of the initial concept, the transition did create a greater sense of continuity and cohesiveness within the project. Additionally, this decision facilitated easy explanation of directions for future development of the proposed tool with regards to increasing the level of detail and scope of the project. After this feedback, a series of factors was chosen that more closely related to the variables identified in the typological case studies, and three major areas of work were determined to assist in the pre-design and schematic design process: architectural program, regulatory compliance, and site analysis. Within these areas, scripts were written in Grasshopper to experiment with simulation. These early efforts served an important role in the exploration of the overall project, demonstrating the feasibility of such a tool across a variety of levels of detail but emphasizing that progress needed to take place at a larger scale first. Perhaps the most important outcome of these early explorations was a better understanding of the computational resources required to operate such a tool. In these early efforts, operations often took frustratingly long, and it became clear that to facilitate exploration it would be desirable to present changes in as close to real time as possible. This led to the introduction of one of the core premises that has driven this project: the focus on mathematical operations over geometric operations.



Scripting Methodology Experimentation This concept of mathematical operations was first discovered while examining examples of view analysis components currently available as Grasshopper plug-ins. View analysis components from Ladybug and Neoarchaic were evaluated, but determined to be too slow for real-time exploration within a user interface perceived as separate from the Grasshopper scripting environment. Initial testing of these components indicated minimum processing times of 4 seconds and 2 seconds respectively, on the computer and in the context in which scripting was then taking place. These times exceeded the desired processing time of less than 1 second for all components. Therefore, a simple script was built from scratch to produce the required information in a shorter time. By evaluating the context based on a mathematical mesh-ray intersection, processing times of less than 1 second were achieved. This method of mathematical operation in the visual scripting environment became one of the core elements of creating the proposed tool. These mathematical and point-based operations were used to conduct analysis through brute-force processing of all possible alternatives before the creation of representative geometry, dramatically reducing processing times when compared to commonly demonstrated methods of scripting within Grasshopper.
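To make this mesh-ray approach concrete, the following is a minimal sketch of the kind of visibility check described above, written as a standalone C# routine against RhinoCommon's Rhino.Geometry.Intersect.Intersection.MeshRay. It illustrates the principle rather than reproducing the prototype's actual Grasshopper script; the observer point, view target points, and context mesh are assumed inputs.

```csharp
using System.Collections.Generic;
using Rhino;
using Rhino.Geometry;
using Rhino.Geometry.Intersect;

public static class ViewAnalysis
{
    // Returns the fraction of target points visible from 'observer',
    // treating 'context' (surrounding buildings and terrain) as the
    // only occluding geometry. Purely mathematical: no new geometry
    // is created during evaluation.
    public static double VisibleFraction(Point3d observer, IList<Point3d> targets, Mesh context)
    {
        if (targets.Count == 0) return 0.0;
        int visible = 0;
        foreach (Point3d target in targets)
        {
            Vector3d toTarget = target - observer;
            if (toTarget.Length < RhinoMath.ZeroTolerance) { visible++; continue; }

            // MeshRay returns the ray parameter of the first hit,
            // or a negative value if the ray misses the mesh.
            var ray = new Ray3d(observer, toTarget);
            double t = Intersection.MeshRay(context, ray);

            // Because the ray direction is the full observer-to-target
            // vector, t = 1 lands exactly on the target; a miss (t < 0)
            // or a hit at or beyond the target means the view is clear.
            if (t < 0.0 || t >= 1.0 - RhinoMath.SqrtEpsilon) visible++;
        }
        return (double)visible / targets.Count;
    }
}
```

Scoring many candidate points against the same context mesh in a loop of this kind, before any representative geometry is drawn, is what allows near-real-time feedback.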

Presentation At this stage, presentation of the work was concerned with demonstrating typology and user interface research, proposed design components and a schedule for their development, and the basic functionality of current scripting. Details of the visual presentation can be found in the appendices on page 151. Physical boards and a short animation were used to demonstrate this work in the most understandable manner, considering previous feedback that sought a more concise and visual method of presentation. In addition to the research presented on physical boards, this animation illustrated working parametric scripts, including the creation of a 3D topography from contour lines, the extrusion of buildings within this topography based on footprint and height, the creation of a zoning mass using lot boundaries, offsets, and building heights, and the preliminary analysis and display of solar studies, wind data, and view analysis.

Feedback and Response At this stage, feedback centered upon more clearly defining traditional methods of production and the limitations involved with these methods, methods of comparison between the proposed tool and conventional workflows, clarity regarding the purpose and context of the proposed tool, and how user interaction would take place. The majority of this feedback was expected prior to presentation. Due to inexperience with the creation of parametric tools and the previously cited difficulty of drawing a comparison without a prototype to compare against, similar issues were highlighted in this phase as at the end of phase 1. However, it was clear from this feedback that major progress towards the demonstration of the prototype tool would need to be made before the project could be communicated with any sort of clarity. Based on this reality, plans were made to ensure that all major criticisms except comparison could be addressed by the end of phase 3.


4.3 Phase Three As previously mentioned, during this phase it was critical to make significant progress on a number of factors to make the transition from a theoretical outline to a functional prototype tool. Making this transition required the simultaneous development of process research, parametric functionality, and user interface.

Cognitive and Process Studies At this time a thorough examination of conventional architectural design processes was undertaken. This work focused on the examination and criticism of several design process models and reflection upon the limitations of these models, as outlined in Chapter 3. This research created a more focused approach to the representation of schematic designs and analysis within the proposed tool. Concepts from this research were used to emphasize the essential aspects of the program and to provide predictions for how the use of such a tool could help to overcome traditional cognitive limitations. These studies proved to be a turning point for this project, providing support and facilitating a more informed approach to the creation of user interfaces.

Functionality Due to the time constraints imposed by the amount of progress that needed to be made, several problematic components of previous functionality were abandoned, including architectural program, topographic construction, and rainfall analysis. New functionality was developed, including constraint of the building form according to IBC Table 503 and the generation of form based on a massing and paneling strategy related to view optimization.
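As an illustration of how such a tabulated constraint can be encoded, the sketch below clamps a proposed massing against allowable height and area limits. The values shown are placeholders for a Group B occupancy, not code-accurate entries; a full implementation would transcribe IBC Table 503 in full and apply the sprinkler and frontage modifications of the surrounding code sections.

```csharp
using System;
using System.Collections.Generic;

public static class HeightAreaLimits
{
    // Illustrative limits for a Group B (business) occupancy, keyed by
    // construction type: (maximum stories, maximum area per story, sf).
    // Placeholder numbers only; real values must be taken from IBC Table 503.
    private static readonly Dictionary<string, Tuple<int, double>> GroupB =
        new Dictionary<string, Tuple<int, double>>
        {
            { "II-B",  Tuple.Create(3, 23000.0) },
            { "III-A", Tuple.Create(5, 28500.0) },
            { "I-B",   Tuple.Create(11, double.PositiveInfinity) },
            { "I-A",   Tuple.Create(int.MaxValue, double.PositiveInfinity) },
        };

    // Clamp a proposed story count and floor-plate area to the limits
    // for the selected construction type.
    public static void Constrain(string constructionType,
                                 ref int stories, ref double floorArea)
    {
        Tuple<int, double> limit = GroupB[constructionType];
        stories = Math.Min(stories, limit.Item1);
        floorArea = Math.Min(floorArea, limit.Item2);
    }
}
```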

UI Development From the cognitive and process studies, it appeared that the quality of interaction with the parameters from within the modeling window would be the aspect that most influenced the success or failure of the proposed tool. Several options to create an interface between Grasshopper and the Rhinoceros modeling environment were examined, the most likely being a combination of Human for the display of relevant information and control over relevant parameters through the Remote Control Panel (RCP) toolbar within Rhino. Ideally, the proposed tool would have been coded entirely against the RhinoCommon or Rhino C++ SDK; however, limited familiarity with these languages and no previous team experience with the development of a software plug-in made this option unrealistic within the time frame. Serendipitously, Human UI was released at the same time these questions were being considered, and its platform provided the best balance between ease of use and functionality. It provides an excellent environment for the creation of a proof-of-concept mockup, allowing for demonstration and a functional outline of the proposed tool upon which future development could be based. Perhaps the most problematic aspect of these early attempts at creating a user interface in this manner was that previously created scripts had to be adapted to fit the expected structure of the Human UI plug-in. A number of the outputs previously created for diagrammatic purposes required modification for use in this format. Although the functionality of this plug-in and the authors' knowledge of its use limited many of the user interface proposals previously created, a balance was struck by layering Human and Human UI elements. This format limited adaptability of the prototype to different screen resolutions, but achieved a high number of the user interface elements previously envisioned.

Presentation Presentation at this stage mainly focused on explanation of the cognitive and process studies and their implications for the interface and prototype tool structure. Physical boards were again combined with a video to demonstrate the research and parametric functionality side by side. This presentation can be examined in further detail on page 147 in the appendices.



Feedback and Response Common feedback from this presentation included concern about the lack of acknowledgement of subjective aspects of design, questions about what variables and assumptions lie behind the generation of massing forms, a desire to differentiate between essential and optional variables, and suggestions to expand and improve the available variables. The concerns about acknowledging subjective aspects of design had previously been considered, but it is important to consistently place this discussion at the forefront of any presentation of this prototype. Similar issues of word choice have persistently occurred in discussions of this project, illustrating that the manner of presentation is as important as the work and product of this thesis. Questions regarding the variables and assumptions that lie behind the generation of form are quite important but difficult to address within the program environment. Ideally, this issue would be solved with informational tooltips for each variable that illustrate more clearly the relationships and outcomes involved in each component. However, that capability is currently limited by both the available time and the ability to create these effects, and it would be especially important to address in any future development of the project. The desire to differentiate between essential and optional variables, as well as the suggestions to expand and improve the available variables, were both considered quite helpful but are difficult topics for a short reflection. Instead, these aspects are addressed more fully in the process summary of phase 4 and the future development section of the conclusion.


4.4 Phase Four In this phase of project development, existing issues needed to be addressed to finalize basic functionality. This functionality included the automatic creation of context, a more highly controlled method of generating benchmark forms, manual manipulation of user-generated forms, form comparison, and the ability to insert and analyze user-created 3D models. Each of these was considered an essential aspect of the prototype tool, but had previously been neglected due to time constraints. However, there were still many aspects of program functionality, partially developed or still undeveloped, that were required for complete expression of the original prototype concept. These included the refinement of architectural programming methods, the development of a cost estimation system, the inclusion of exiting requirements, and refinement of the effect of IBC Table 503 on building form.

Ultimately, with a realistic timeline of 5 weeks for completion of the proposed tool, priorities had to be set, and the essential aspects of the program outlined above were deemed necessary to include for demonstration. This decision meant that the partially developed and undeveloped aspects of the prototype would be researched but not fully implemented in the final prototype. Additionally, the issues of evaluation of the prototype and comparison against conventional methodologies, raised in criticisms from phase 1 and phase 2, needed to be addressed through testing, which was scheduled to take place in phase 5.

Context Creation One aspect of the core functionality of this project that had been assumed up to this point was the creation of the context upon which to base subsequent analysis. Some early work addressed this question (topography surface creation), but prepared context models were mainly used for illustrative purposes, because the creation of analysis functionality was previously deemed a more immediate need than the automatic creation of context. At this point, however, the creation of context information became a more pressing need to complete the functionality of this prototype tool. Exploration of methods to achieve this construction of context led to the identification of three information sources: OpenStreetMap (OSM) data, regional GIS data in the form of .shp files, and 3D models created by Google for use in Google Earth. The topography and 3D modeled buildings contained in Google Earth currently seem to represent the most accurate and complete context data available. However, upon examination it was determined that extracting this information was impossible within the scope of the current project. The deprecation of the Google Earth API means that, at the time of this writing, access to Google Earth data in this manner will be discontinued at some point in the next 8 months. It is likely that an alternative method of access will be created at some point in the future, but putting resources towards a disappearing method of access was deemed counterproductive. Instead, the relative merits of OSM and regional GIS data were compared to decide upon the information sources used for context creation. OSM is considered the leader in open-source mapping and has the advantage of content and updates contributed by a wide variety and number of users. Perhaps the most familiar analogue to this data source is Wikipedia, which is a useful comparison because the benefits and drawbacks of the resulting information are similar. Although generally accurate due to the number of contributing users, areas with less participation are more prone to inaccuracy. Additionally, because the average contributor is not a specialist in the field they are contributing to, the information is often less detailed and lacks an explanation of the data-gathering methodology, which makes the resulting accuracy difficult to determine.


Regional GIS data is generally considered to be more accurate, or at least more methodically collected, due to oversight by a governing agency. Additionally, within the context of a single governing municipality, there is a consistency to the presentation of information that is missing from OSM data. Furthermore, additional information such as zoning designations and local demographics or statistics is usually present within regional GIS data. However, this method has the drawback of being limited to the boundaries of the governing agency which collects and curates such data. And although data is desirably consistent within a single dataset, governing agencies generally follow their own labelling and collection standards, which leads to more deviation in the availability and quality of data across the country than is present in OSM data. From these considerations, it was concluded that despite the drawbacks, the more detailed nature of the data contained in regional GIS files was preferable for the automatic creation of context. Since the methods of analysis tend towards highly specific solutions and outputs, accuracy is of the utmost importance, and inaccurate results were deemed more problematic than missing information. Due to this choice, Seattle was chosen as the single area for demonstration of the proposed tool, and all further efforts to work with regional GIS data are framed within this context. Ideally, when a new method for accessing Google Earth data is released, this choice could be revisited to implement a method which is both highly accurate and wide-ranging.
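As a sketch of how parsed GIS footprints might drive automatic context creation, the code below extrudes each closed footprint curve by its recorded height attribute using RhinoCommon's Extrusion.Create. The Footprint struct stands in for one parsed shapefile feature; the shapefile reading itself is assumed to happen upstream and is not shown.

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

public static class ContextBuilder
{
    // Stand-in for one parsed GIS feature: a closed, planar footprint
    // curve plus the building-height attribute from the .shp record.
    public struct Footprint
    {
        public Curve Boundary;
        public double Height;
    }

    // Extrude each valid footprint by its height to produce a simple
    // block-model urban context for analysis.
    public static List<Extrusion> BuildContext(IEnumerable<Footprint> footprints)
    {
        var buildings = new List<Extrusion>();
        foreach (Footprint fp in footprints)
        {
            if (fp.Boundary == null || !fp.Boundary.IsClosed || fp.Height <= 0.0)
                continue; // skip malformed records rather than failing

            Extrusion mass = Extrusion.Create(fp.Boundary, fp.Height, true);
            if (mass != null) buildings.Add(mass);
        }
        return buildings;
    }
}
```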



Expanded Control of Benchmark Form Generation Expanded control of form generation involved the identification of several variables previously embedded in the parametric logic of the created scripts. This exercise helped to demonstrate just how much control over form can be embedded within scripting logic, confirming researched critiques concerning the authorship of design when using computational methods not created by the designer. Despite the validity of these critiques, it is apparent that if an awareness of these drawbacks is maintained, their effects can be limited. The variables that demonstrated the greatest effect over the control of form mainly concerned the resolution of the form being created. By adding form resolution as a controllable parameter, the same optimization routines can return a range of massing styles from highly angular to organic.

User Massing, Manual Manipulation, and Comparison The completion of context creation and expanded control over benchmark forms took longer than expected. Therefore, the inclusion of user-created massing, the ability to manually manipulate generated forms, and the form comparison aspects of prototype functionality were pushed into phase 5 of the project.

Conceptual Budgeting Another important aspect of the decision-making process that could be facilitated through computational methods is the approach to conceptual budgeting techniques. Research into this subject revealed that there are generally two approaches to budgeting utilized in early stage design that could prove useful within the context of the proposed tool. Perhaps the most familiar of these approaches is the creation of a budget estimate. At this early stage, such an estimate is usually derived from a square-footage cost deemed most relevant from a similar project, usually within the range defined by current RS Means data. According to Donald Parker, "82% of all architect-engineer (A-E) firms use this method to prepare budget estimates" (2014, p. 2). However, using this method, the mean deviation of the low bid from the proposed budget was 29% (+16%, -13%) and the extreme deviation 66% (+38%, -28%) (2014, p. 2). This amount of deviation is significant, and according to the Freiman Curve, both underestimation and overestimation of project costs can lead to greater actual project expenditures (Phaubunjong, 2002, p. 2). From this information, it seems essential to employ more accurate methods of budgeting to guide the evaluation of alternatives during the conceptual stages of design. Alternatively, another method of cost evaluation proposed for use in conceptual budgeting focuses on predicting revenue as a method of budget setting. These methods use program and rental rates, combined with a desired period of ROI, to determine what a desirable target budget would be. This approach has been most successfully demonstrated in the Cashback 1.0 project (Gerber et al., 2012). The use of these parametric systems seems most limited by the amount of information required to perform the necessary calculations. Working through these methods in the early stages of design would require several hours if performed manually on each conceptual iteration. This limitation makes these systems ideal for implementation within the proposed tool, as a way to provide instant feedback on proposed iterations with more accuracy than conventional methods. Ideally, the parametric system created by Donald Parker would be used as the basis for implementation of such a system. In his research he identifies detailed relationships between many parameters of building geometry and materiality to create an estimated construction cost as one aspect of total project cost. Of these parameters he identifies several key cost drivers, including: the square footage and occupancy type of functional areas; general building configuration, including number of floors, height, perimeter, and volume; preliminary design parameters of the structural, mechanical, plumbing, and electrical systems; inclusion of special security, communication, and site-specific systems; geographic location; and project schedule. Although necessary for further development, the time to implement this system within the current iteration of the tool would be prohibitive. Therefore, for future development of the program, several of the most applicable considerations with regards to massing have been chosen for preliminary inclusion. Although the partial implementation of these considerations will not give an accurate final number, it will provide an idea of the advantages that real-time cost comparison can provide and of the proportional effect that preliminary decisions can have on budget. If this system is fully implemented in future iterations of this project, it is likely that a more in-depth model will be developed with regards to the effect of geometry on final budget.
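A minimal sketch of the revenue-driven calculation described above follows; every rate and ratio is a placeholder assumption rather than calibrated data. For example, a 100,000 sf massing at $40/sf/year, 85% rentable efficiency, 35% operating costs, and a 10-year payback yields a target budget of roughly $22.1 million.

```csharp
public static class ConceptualBudget
{
    // Revenue-driven target budget: the net rent a massing can generate
    // over a desired simple payback period bounds what is worth spending
    // to build it. All parameters are illustrative assumptions.
    public static double TargetBudget(
        double grossFloorArea,     // total massing area, sf
        double rentableRatio,      // rentable-to-gross efficiency, e.g. 0.85
        double annualRentPerSf,    // market rent, $/sf/year
        double operatingCostRatio, // share of rent consumed by operations, e.g. 0.35
        double paybackYears)       // desired return-on-investment period
    {
        double annualNetIncome = grossFloorArea * rentableRatio
                               * annualRentPerSf * (1.0 - operatingCostRatio);
        return annualNetIncome * paybackYears;
    }
}
```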


Architectural Programming Despite previous issues with the proposed computational methods for architectural programming, the use of such a generative method for complex programming situations remains a critical aspect of this project. The benefits of such an approach have been seen in practice in complex programmatic designs such as hospitals through the work of the firm ZGF (Boon et al., 2015). The approach outlined in this work addresses similar problems as the previously attempted solutions (adjacency to internal and external elements), but overcomes many of the difficulties faced using physical simulation methods. In the example used to explore this method, the total distance was more than halved between the least and most fit solutions (Boon et al., 2015, p. 34). One of the biggest drawbacks to this method is that even using an evolutionary solver (which only computes a fraction of the possible results based on random mutation), it took 8 hours to complete 62 generations (3,100 iterations). By utilizing a mathematically based calculation rather than analysis of a geometric result, it is likely these times could be improved upon. However, they would almost assuredly fall outside the real-time results that have thus far defined the tool. Therefore, although a new method exists which could represent a great contribution to the completeness of the current functionality, there are still significant drawbacks to the inclusion of a method to calculate and optimize program. Like some of the other more complex operations already included, the resolution of analysis will play a large part in the usability of this aspect of the tool.

Testing and Comparison Methodology At this stage, an interview was conducted with a local venture capitalist to determine what the next steps for development of this prototype might be, beyond the context of a thesis project. This was determined to be necessary for final analysis because the ultimate success of this mixture of computational and conventional methodologies would be determined by its value in real-world application. Among a number of insights regarding the nature of software development and start-up funding, this interview resulted in a determination that the prototype would need to demonstrate increases in user productivity and informational interpretation to have financial value. Previous research regarding architectural process and cognition relied on a mixture of observational testing and survey responses. Therefore, we chose to follow a similar methodology within our own testing and comparison of the prototype tool. Observation would allow for a better understanding of where users struggle to understand the expected use of the prototype tool or to interpret visual representations of contextual data. Survey responses would allow for a more precise evaluation of differences in productivity and quality of information, to compare the prototype tool more thoroughly against conventional architectural processes. Based upon these concepts, a testing methodology and an associated survey were designed. Additional details about the testing methodology, survey questions, and responses can be found in the appendices on page 135. Findings of this testing will be discussed in phase 5 of this chapter.
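To illustrate the mathematically based alternative raised in the Architectural Programming discussion above, the sketch below scores a candidate program layout as a weighted sum of center-to-center distances. Because no geometry is constructed or analyzed, thousands of candidate layouts can be scored per second; the position and adjacency-weight matrices are assumed inputs.

```csharp
using System;

public static class AdjacencyScore
{
    // Weighted total separation for a candidate program layout.
    // positions[i, 0..2] holds the (x, y, z) center of space i;
    // weights[i, j] encodes how strongly spaces i and j should be
    // adjacent (0 = no relationship). Lower totals are better.
    public static double Evaluate(double[,] positions, double[,] weights)
    {
        int n = positions.GetLength(0);
        double total = 0.0;
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                double dx = positions[i, 0] - positions[j, 0];
                double dy = positions[i, 1] - positions[j, 1];
                double dz = positions[i, 2] - positions[j, 2];
                total += weights[i, j] * Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }
        }
        return total;
    }
}
```

A solver, evolutionary or otherwise, could minimize this score across candidate layouts and only afterwards construct geometry for the best performers.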

Presentation At this time, presentation of the material focused on a demonstration of updated functionality and a discussion of the proposed testing and comparison methodologies. Physical boards were combined with a live demonstration of the prototype to illustrate this progress. Further details regarding this presentation can be found in the appendices on page 152.

Feedback and Response From this presentation, the majority of feedback affirmed that plans for testing and comparison of the prototype tool were a solid direction for development. Beyond this aspect, though, there were many concerns about how to present the entirety of this project in such a short time and to a wide variety of audiences. Plans for demonstrating the desired information centered on identifying the key aspects of the research for verbal presentation and representing all information diagrammatically in an easily understandable format. The statement of these challenges is certainly accurate, and the results are provided in the final boards for phase 5. These boards can be found on page 152.



4.5 Phase Five Based on feedback from the phase 4 presentations, this phase was primarily concerned with testing and with final modifications of the prototype tool not completed during phase 4. These modifications included the ability to analyze user-created massing, methods for manual manipulation of generated forms, and comparison of both generated and manually created forms within the prototype tool.

Initial Testing Testing was conducted in two phases. Initial testing involved demonstration to a class of students and a following session of student interaction with the prototype tool, followed by a written survey. This phase of testing took place over two hours. The prototype tool was demonstrated to nine students. All nine continued on to interact with the prototype and participated in a verbal feedback session immediately following the interaction session. Written surveys were returned by three of the students. The surveys and detailed results can be found in the appendices on page 135.

Feedback from initial testing demonstrated a need to import user-generated forms as one of the most essential aspects for final development. Without the option to import and examine their own forms within the context of the program, testers felt limited. Additionally, users desired the ability to manipulate forms generated by the program; in some cases, it was discovered that users became quickly frustrated with the options presented by the program when trying to achieve a specific architectural detail. Finally, a discussion of the pros and cons of form generation by the prototype tool was held. While some users felt this was an important aspect of the tool, others felt that form generation was unnecessarily constraining.


Restructuring Following initial testing, the Grasshopper script was restructured to make it more accessible for an outside observer to understand. Additionally, a variety of components that required the installation of additional Grasshopper plug-ins were replaced, reducing the number of dependencies from eight to five; two additional dependencies could be removed from this list with more time. To achieve this reduction, three basic scripts were written in the C# and VB languages. This restructuring took several days but resulted in a script that was better organized, more readable, and less dependent on outside resources. During this time, a number of repeating aspects of the script were clustered to become custom components with discrete functions, and underdeveloped aspects of the work were examined for any similar clusters that could be extracted. This resulted in the identification of nine unique components created during the scripting of this prototype tool. More details concerning these components can be found in the appendices on page 133.

After restructuring of the prototype tool was completed, additional scripts were devised to implement the analysis of user-created massing, methods for manual manipulation of generated forms, and the comparison of both generated and manually created forms within the prototype tool. However, it was discovered that implementing this scripting throughout the program required a large amount of data management and manipulation within the current structure of the script. Therefore, these options were implemented at specific points for demonstration within the current prototype tool. Full implementation of these features will require another major restructuring of the script and will be attempted after final presentation of the project. With regard to the parametric scripting aspects of this project, a large number of obstacles to development were encountered that had not arisen during the creation of parametric scripting for specific architectural projects. Achieving the general applicability of the scripts required for this project was more difficult than anticipated, but ultimately quite rewarding.
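As an example of the kind of small, dependency-free utility written during this restructuring, the hypothetical C# function below performs the linear remapping of values that would otherwise require a plug-in component; it illustrates the approach rather than transcribing the actual scripts, which are documented in the appendices.

```csharp
public static class RemapUtility
{
    // Linearly remap 'value' from [sourceMin, sourceMax] onto
    // [targetMin, targetMax]; degenerate source ranges collapse
    // to the target minimum rather than dividing by zero.
    public static double Remap(double value,
                               double sourceMin, double sourceMax,
                               double targetMin, double targetMax)
    {
        if (System.Math.Abs(sourceMax - sourceMin) < 1e-12)
            return targetMin;
        double t = (value - sourceMin) / (sourceMax - sourceMin);
        return targetMin + t * (targetMax - targetMin);
    }
}
```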

Final Testing Final testing of the prototype tool involved demonstration of the tool to architects, developers, and professors. A set of two surveys was created to evaluate responses to this project. The first was a pre-demonstration questionnaire aimed at evaluating familiarity with existing computational tools and gathering more detailed information regarding time spent on different aspects of the pre-design and schematic design process. The purpose of this survey was to evaluate whether any trends emerged in regard to perceptions of the prototype tool in relationship to familiarity with existing tools. Additionally, a more detailed breakdown of the time spent on aspects of design within larger categories would contribute to a better understanding of how useful increases in productivity in each area might be.


A post-demonstration questionnaire was created to evaluate subjective perceptions of productivity and ease of use in comparison to the conventional processes used. In addition to the responses to these surveys, verbal feedback and observations were recorded for each evaluation. Thirteen people were contacted for final testing. These testers were selected from architects that the authors knew either directly or indirectly. The sample represented a mixture of architects with both high and low levels of technical proficiency, as well as high and low levels of architectural experience. Of these, five agreed to participate in testing. Initial expectations were that after a short presentation, control would be handed over to the testers for direct interaction. However, when this was attempted in the first test, the total time increased from an expected 30-45 minutes to 75 minutes. Since most respondents could not allocate this amount of time to testing, further tests were conducted through demonstration only, rather than with direct user interaction. All five participants took part in a verbal feedback session directly following the demonstration and returned the written survey provided. The surveys and detailed results can be found in the appendices on page 135. Feedback was generally positive, with all participants pleased by the ability to perform basic analysis and information display in an easy-to-use and rapid manner. However, each participant had difficulty understanding how the prototype tool would be applied within their current workflow, though each saw potential for its use if tailored to meet their own needs.

Comparisons With testing completed, the much-needed comparison to conventional processes was finally achievable. By diagramming each participant's verbal and written survey responses regarding required workflow, productivity differences, and information quality differences, it became possible to demonstrate the findings of this thesis project.

Figure 4.1 Productivity and Intuitiveness Subjective Evaluation



In all cases, productivity of the prototype tool was perceived as being as good as or better than conventional processes. Context creation (+2 average), massing generation (+1.7 average), and construction and occupancy adjustment (+1.7 average) showed the greatest increases in productivity over conventional processes, whereas environmental information (+1 average) and zoning massing (+1.3 average) showed the smallest increases. In terms of ease of use, context creation (+1.7 average) and environmental information (+1.3 average) were deemed the most intuitive, whereas zoning massing (+0.3 average) and construction and occupancy (+1 average) were seen as the least intuitive aspects. Overall, use of the prototype tool was perceived to be 33% faster than conventional methods on average. These results are illustrated in figure 4.1. These preliminary results suggest that the prototype tool is more productive and intuitive than conventional methods overall, but determining these differences with more accuracy will be critical to demonstrating a benefit to its use. Savings and benefits of the tool will need to outweigh its costs and the time necessary to integrate a new tool into current workflows.

Presentation At this phase, presentation was concerned with demonstrating the essential aspects of research, prototyping, testing, and suggestions for future development in an understandable and concise manner. The final presentation attempted to address the previously expressed desire for a visual and verbal presentation that is readily understandable and that addresses a variety of audiences at different levels of detail for each aspect of the project. Further details regarding this presentation can be found on page 156 in the appendices.

Feedback and Response Feedback from the final presentation was generally positive, with most viewers understanding the direction that current development of the prototype suggests. Questions were posed regarding the role of architects and their relationship with digital tools, a theme that was often discussed throughout the development of this project. However, it was surprising that many of the attendees expressed a desire to see more work regarding the form-generating aspects of the prototype tool. Throughout the development of this project, users have been either excited by or opposed to this aspect. This division demonstrates a clear choice between form generation and analysis that must be made in any future development.



Chapter Five : Conclusion



The thesis statement of this project was that:

Through analysis of research efforts, applied theory in architecture and software engineering, and a focus on architecture-specific interactions, it is feasible to create a prototype interface that uses generative exploration to enable more intuitive and productive interaction between architect and computer during the pre-design and schematic design stages of the architectural process.

Three core goals were identified in early research that would serve to facilitate the exploration and achievement of this statement:

Generative exploration of the design space should be used to create iterations for comparison against, and inspiration of, conventionally created designs.

The program should focus on creativity support through an intuitive user interface that allows for rapid iteration and understanding of design information.

Parametric scripting should be undertaken in a way that avoids embedded logic as much as possible, as a way to maintain the authorship of the designer using the prototype tool.

The following text will examine the prototype tool development and testing to explore the outcomes in regards to the thesis statement and goals listed above.

Summarize One of the initial premises of this project was an emphasis on purely objective aspects of the design process. It is important to note that despite this focus, subjective aspects of the design process have a large impact on final design choice. For this reason, this project is best understood as one half of a conversation between the designer and proposed tool.

It is proposed that by understanding and explicitly stating the objective results of design decisions, a more honest conversation about aesthetics and function can be held between the client and designer. In this way, the aesthetics and personal preferences inherent in the design process can be weighed in an informed manner. It is expected that the product of the objective and explicit process employed by use of this computational tool will be interpreted and modified to meet the desires of the client. In this sense, it is important to note that this tool is primarily concerned with facilitating an informed conversation about design options through the creation of design targets which represent what is possible within a specific context.

As previously stated, this thesis was concerned with exploring the intersection of computational and conventional design methodologies. In this context, the use and perception of the prototype tool is equally as important as its functionality. Rather than merely speaking of an end product, this thesis is about the development of a hybrid process. Towards this end, further discussion of the results of research, process, and testing is necessary to fully understand any conclusions or directions for future development of this project. It is only in this context that we may better understand how the creation of this prototype tool contributes to a theoretical scope of work that expands far beyond the level of development achieved in the current stage of work.

Observe Research Initial research into this thesis cast a rather broad net in an attempt to discover what was relevant to the idea of exploration and generative prototyping in the early stages of design. The range of topics facilitated a broad understanding of the subject matter, which is important because tool-building is as much about understanding people as it is about software. Future research should continue to contribute to this breadth, especially by exploring the ways in which subjective and objective concerns influence one another within the architectural design process. Additional research should include in-depth exploration of user interface design, the use and control of generative massing, and computational strategies to further enhance the speed and accuracy of analysis.


Although research of user interface methods was successful in providing guiding concepts for initial development, this aspect currently appears underdeveloped. To truly specify the effects of the consistency, economy, and communication posited as primary concerns, research should be gathered concerning the effect that different strategies for achieving these goals have on the speed of workflow within existing interfaces. The generation of data-driven massing is another area of the envisioned prototype that was woefully underdeveloped in comparison to initial expectations. This occurred due to the difficulty of suggesting any single strategy for the creation of benchmark forms. Further research would identify multiple strategies for each element of analysis according to the most current theories of design; with this approach, the creation of benchmark forms could become useful as a way of quickly exploring the possible results of prevailing design theories within a specific context. In the context of the generated massing, further research should also examine the effect that each variable has upon the variability of the form, to determine which aspects should be directly controlled and which can remain embedded in the logic of the script. Finally, the computational methodology focusing on mathematical analysis of context variables prior to the creation of geometry shows promise as a starting point for future research. The commonly used method is to create geometry, then analyze and optimize it with an evolutionary solver. Upon initial comparison, the methods adopted in the creation of this prototype tool appeared to significantly speed efforts to create a roughly optimal form. However, demonstration of this premise will require more rigorous testing and research. It is possible, too, that the implementation of these methods within a custom C# or VB script could further speed their operation in comparison to visual scripting methods.

Process Overall, the process of completing a group thesis project was highly beneficial for learning how to organize and direct the collaborative work required in a successful architectural practice. Beyond the aspects typically required of a thesis project, this project demanded a level of communication and management not generally experienced by graduating students. In this manner, the successful completion of a group thesis project provided an education in excess of the scope of the work produced. More specifically, there are a few lessons that may be learned from this process to benefit other group projects and guide future development of the prototype tool. At the most productive times, work was divided between group members, which allowed each person to extensively experience their particular focus at the moment. This allowed for a depth that is not commonly possible in tackling such a broad topic. This division worked best when playing to the strengths of each team member, allowing for the highest levels of work in every area of the project. These experiences suggest that, ideally, each team member should possess a distinct and complementary set of skills with as little overlap as possible. In this way, specialization and collaboration can produce work beyond that achieved by any individual. Despite this need for division and specialization, group brainstorming and review sessions proved to be invaluable. These sessions guided the expansion and integration of group work so that although each aspect was composed with a single voice, the products were ultimately complementary to one another. At its best, a system used in one area of development was mirrored in the others, allowing for the construction of a common language of design across different aspects of the project. Future work should make use of this phenomenon by establishing an evolving collection of thematic devices to serve as guidelines throughout the work. To achieve full realization of this idea, each team member must understand the goal of this process and become an enthusiastic participant in its realization. These ideas expand the core goals detailed in the thesis statement and apply them beyond the prototype tool, to improve the process of developing that tool as well. They seek to make explicit the embedded logic of each team member's work for interpretation and use by the rest of the group. Additionally, they seek to apply the types of collaborative and information-rich processes envisioned for the prototype tool to the processes used in its development.

Testing Perhaps the most interesting aspect of this project is its potential for future development. During testing, one of the most repeated comments was that despite liking the amount and display of information in a single tool, it was difficult to envision how it might be used in practice. Initial testing for the prototype tool was therefore successful in that it explored the opinions of many possible users to determine a discrete set of target markets and uses for the prototype tool. By testing this prototype on a variety of possible users, three groups emerged:

Architects offering pre-design services in addition to the standard architectural services.

Developers interested in a rigorous suite of tools for early-stage site and financial analysis.

Educators interested in computational methodologies for case study exploration.

Architects saw promise for the tool as part of pre-design services but felt that expansion was necessary to address schematic design concerns. Additionally, they were unsure that one suite of tools spanning both pre-design and schematic design services would be commonly used. Students were excited about the possibilities for rapid exploration that the tool presented, and educators felt that it could be invaluable for teaching if expanded for case study evaluation. However, there were concerns about the reliance that students might develop on pre-generated forms. Developers were intrigued by the ability to rapidly explore and iterate through basic options, but wanted to see a greater emphasis upon the financial implications of early decisions. Additionally, the inclusion of expected payback, and of features to address concerns regarding the effect of the building on neighbors, was seen as an important aspect for development.

Future studies should thoroughly examine the premise of generative form creation in the context of each type of use. This aspect of the project was perhaps the most controversial in testing. Some users found the generated forms constraining, while others saw them as an extremely useful aspect of the tool. Those who found them useful stated that they interpreted them as an architectural representation of context information, which was the intended effect. This difference might be attributable to the comfort of the user in reading intent from form without assigning the given form importance or permanence. Future testing should thoroughly examine the effect of these generative forms on the workflow of each user group, to determine the limiting or expanding effects of their creation.


It became apparent that questions of authorship are more important to some user groups than others. For example, developers stated that more convenient methods of interaction through linking certain variables would be preferable to complete control of form. Conversely, educators stated that massing options should be removed entirely and that the prototype tool would be most useful as an analysis suite. These results demonstrate the need for a more focused approach to determining the needs and desires of each user group. The primary area for future development in terms of testing would involve an interview and survey process with multiple users from each of these groups, to determine how this tool would be used if tailored for that group. These studies should be conducted before any future development of the prototype tool. Additionally, it was apparent in testing that users were hesitant to quantify the time difference between working within the prototype tool and within their own workflow. The authors speculate that this hesitancy comes from both the lack of a specific understanding of how long each aspect of their own process takes and the difference in time commitments from project to project. Ideally, testing of a revised prototype should take place side by side with an actual project. By adjusting the testing methods in this way, a more useful comparison can be made in terms of the productivity and quality of information resulting from each method, though this goal may be difficult to achieve in real-world conditions. Finally, previous speculation included the idea that this tool would be more useful for the novice architect than for the experienced architect. More detailed future testing should attempt to isolate this variable and examine the desirability of this tool for these two groups; current results do not provide enough information to evaluate the accuracy of this speculation.

Suggest From this information, it seems reasonable to conclude that the project was successful in exploring the blending of computational and

conventional process methodologies. However, it has yet to achieve the intuitive qualities and increase in productivity hypothesized in the initial thesis. Testing demonstrated promise in these areas, but also demonstrated difficulties in accurately determining the outcome without more specificity in regards to project type and user. Achieving these goals in future development would require restructuring of research, process, and testing methodologies based on the lessons outlined above. With these adjustments, a tailored approach will have more success in accurately prototyping and testing the feasibility of the initial thesis. Towards this end, the first stage of any further development would be primarily concerned with the accurate identification of a target user and detailed research into the needs of that user base. Development could be tailored to serve the needs of any of the four user groups mentioned above relatively quickly, but each necessitates expansion and detailing of the program in a different direction. However, the authors speculate that the architect as developer would have a use for all of the areas for expansion. Due to this, a plan for development of the tool can be proposed that encompasses the creation of four tailored products under a single umbrella. Initial development of the tool for developers would serve to round out the existing aspects of the tool, requiring further development in the areas of data collection, financial implications, and the impact of the building on its context. At that point, a basic version of the tool could be marketed on a subscription basis to developers. Development for architects could then continue, with a focus on tools to increase productivity during the schematic design phase. More detailed aspects of the building, such as exiting and program, could be developed to assist with standard architectural services during schematic design. With this level of detail achieved, the program could be tailored towards a case study analysis and diagramming tool for use in an educational setting. Further research would be essential towards determining the ideal balance between the desires of educators and students in this stage. Throughout this process, the tool could be provided as a full suite encompassing all elements of the previous three iterations and tested in the niche market of the architect as developer. Additional considerations and layers of data manipulation could be provided in this version to meet both the needs of these users and fulfill the vision of how this tool can impact the design of the built environment.


This tool was envisioned as a vehicle to facilitate informed, responsible, and lasting design solutions that leverage information to create fiscally responsible beauty. Regardless of the outcomes of this study, the loss of those ideals in its continuation would ultimately count as failure. To this end, the implications of any future development should be carefully balanced against the desire to continue this project. Although this experiment has demonstrated that merging computational and conventional methods in this manner shows promise for increasing understanding of information and productivity in design, the way this knowledge is applied is of the utmost importance. Architects are tasked with ensuring the health, safety, and welfare of the general public. However, it is those who look beyond basic requirements who create a brighter and better world. This experiment in toolmaking was rooted in a desire to approach the extraordinary, but the product of any work is forever rooted in strength of character, concern for others, and a willingness to imagine the improbable.



References



N.A. (2015). Grasshopper - An Overview | The Grasshopper Primer (EN). Retrieved November 21, 2015, from http://grasshopperprimer.com/en/0-about/1-grasshopperan-overview.html

American Institute of Architects. (2013). Architect's Handbook of Professional Practice (15th ed.). Somerset, NJ: Wiley. Retrieved from http://www.ebrary.com

Attia, E. (2009). Patent Identifier No. 0234696. United States: Engineered Architecture.

Beck, K. (1999). Extreme Programming Explained: Embrace Change. Boston: Addison-Wesley.

Bradner, E., Iorio, F., & Davis, M. (2014). Parameters tell the design story: Ideation and abstraction in design optimization. Proceedings of the Symposium on Simulation for Architecture & Urban Design. Society for Computer Simulation International.

Boon, C., Griffin, C., Papaefthimious, N., Ross, J., & Storey, K. (2015). Optimizing Spatial Adjacencies Using Evolutionary Parametric Tools. Future of Architectural Research: Architectural Research Centers Consortium 2015 Conference. Perkins + Will Research Journal. Retrieved from https://perkinswill.com/sites/default/files/ID%203_PWRJ_Vol0702_02_Optimizing%20Spatial%20Adjacencies%20Using%20Evolutionary%20Parametric%20Tools.pdf

Carlile, J. (2014). KeenCon 2014: Using Data to Improve the Built Environment. Retrieved from https://vimeo.com/107291814

Cross, N. (2004). Expertise in design: An overview. Design Studies, 25(5), 427-441. Retrieved January 17, 2016, from http://oro.open.ac.uk/3271/1/expertise_overview.pdf

Czerwinski, M. (2006). Creativity Support Tools - UMD Department of Computer Science. Retrieved from http://www.cs.umd.edu/hcil/CST/creativitybook_final.pdf



Davis, D. (2013). Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture. Doctoral dissertation. RMIT University, Melbourne, Australia.

Delos Living LLC. (2015). The Well Building Standard. New York, NY: International Well Building Institute.

Dubberly, H. (2001). Alan Cooper and the Goal Directed Design Process, 1(2).

Eastman, C. (1976). "General Purpose Building Description Systems". Computer Aided Design, 8(1), 17-26.

Eastman, C., Teicholz, P., & Sacks, R. (2011). BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors (2nd ed.). Hoboken, NJ: John Wiley & Sons. Retrieved from http://www.ebrary.com

Estkowski, T. (2013). Towards a Generative Design System Based on Evolutionary Computing. Oslo: AHO.

Flux: Sustainable Architecture at Scale. (2015). In Flux Factory, Inc. From https://flux.io

Gerber, D., Elsheikh, M., & Solmaz, A. (2012). Associative Parametric Design and Financial Optimization - Cash Back 1.0. Beyond Codes and Pixels: Proceedings of the 17th International Conference on Computer-Aided Architectural Design Research in Asia. Hong Kong. Retrieved from http://www.academia.edu/13055334/Associative_Parametric_Design_and_Financial_Optimization-_CASH_BACK_1.0

Hayes, R. (2013). Architect's Handbook of Professional Practice (15th ed.). Somerset, NJ: John Wiley & Sons. Retrieved from http://www.ebrary.com

Hewett, T., Czerwinski, M., Terry, M., Nunamaker, J., Candy, L., Kules, B., et al. (2005). Creativity support tool evaluation methods and metrics. NSF Workshop on Creativity Support Tools.



Hogrefe, A. (2010). Evaluating the Digital Design Process: Bottom-up vs. Top-down. Master's thesis. Miami University, Oxford, Ohio.

Ingels, B. (2015). HOT TO COLD: An Odyssey of Architectural Adaptation (pp. 319, 655). Taschen.

Ingels, B. (2009, July). 3 Warp-Speed Architecture Tales [Video file]. Retrieved from https://www.ted.com/talks/bjarke_ingels_3_warp_speed_architecture_tales?language=en

Kasparov, G. (2010). The chess master and the computer. The New York Review of Books, 57(2), 16-19.

Krish, S. (2010). What is Generative Design. Generative Design. Retrieved from https://generativedesign.wordpress.com/2011/01/29/what-is-generative-desing/

Lawson, B. (1997). How Designers Think: The Design Process Demystified (3rd ed.). Boston, MA: Architectural Press.

Lichtman, M. (2013). Architect Eli Attia: Google stole my life's work. Globes. Retrieved November 5, 2015, from http://www.globes.co.il/en/article-1000889578

Logan, R. (2005). Lone Cypress Tree Monterey, CA [Online image]. Retrieved November 5, 2015, from https://commons.wikimedia.org/wiki/File:Lone_cypress_tree_Monterey_CA_photo_D_Ramey_Logan.jpg

Larson, K., Tapia, M., & Duarte, J. (2001). "A New Epoch". A+U.

Malecha, M. J., Davis, M., FitzGerald, P., Piedrafita, S., Rice, A., & Tesar, P. (2008). Design Thinking in the Design Disciplines (R. Hargrove, Ed.). Raleigh, NC: North Carolina State University College of Design.

Mattson, C. (2014). What is Design Exploration? | BYU Design Research Group. Retrieved November 20, 2015, from http://design.byu.edu/blog/what-designexploration-0



Norvalis. (2005). Dried Coriander Seeds [Online image]. Retrieved November 5, 2015, from https://commons.wikimedia.org/wiki/File:Coriander.png

Parker, D. E. (2014). Parametric Cost Modeling for Buildings. Florence, GB: Routledge. Retrieved from http://www.ebrary.com

Parthenios, P. (2005). Conceptual Design Tools for Architects. Doctoral dissertation. Harvard Design School, Cambridge, Massachusetts.

Phaobunjong, K. (2002). Parametric Cost Estimating Model for Conceptual Cost Estimating of Building Construction Projects. Doctoral dissertation. University of Texas at Austin. Retrieved from https://utexas-ir.tdl.org/bitstream/handle/2152/845/phaobunjongk022.pdf?sequence=2&isAllowed=y

Practice of Architecture Defined - BPC Section 5500.1. (n.d.). Retrieved November 20, 2015, from http://www.cab.ca.gov/apa/bpc/division_3/chapter_3/article_1/section_5500.1.shtml

Project Dreamcatcher. (2015). In Autodesk Inc. Research. From http://autodeskresearch.com/projects/dreamcatcher

Regnier, B. (2015, October 24). Practice Computational Process in Architecture and Design. Lecture presented as panel discussion.

Ware, C. (2010). Visual Thinking for Design. Morgan Kaufmann.

Wickens, C. D., Vincow, M., & Yeh, M. (2005). "Design Applications of Visual Spatial Thinking: The Importance of Frame of Reference". Handbook of Visual Spatial Thinking. Cambridge University Press.



Risselada, M. (Ed.). (2008). Raumplan Versus Plan Libre: Adolf Loos [and] Le Corbusier. 010 Publishers.

Sass, L. (2000). Reconstructing Palladio's Villas: An Analysis of Palladio's Villa Design and Construction Process. Doctoral dissertation. Massachusetts Institute of Technology.

Simons, D. J., & Rensink, R. A. (2005). Change Blindness: Past, Present, and Future. Trends in Cognitive Sciences, 9(1). Retrieved March 5, 2016, from https://www.researchgate.net/profile/Daniel_Simons2/publication/8092768_Change_blindness_Past_present_and_future/links/0912f50c5db366d6be000000.pdf

Smith, R. (2007). Technical Notes From Experiences and Studies in Using Parametric and BIM Architectural Software. Virtual Build Technologies. Retrieved November 10, 2015, from http://www.vbtllc.com/images/VBTTechnicalNotes.pdf

Snider, J., Lee, D., Poizner, H., & Gepshtein, S. (2015). Prospective Optimization with Limited Resources. PLOS Computational Biology, 11(9). Retrieved March 3, 2016, from https://www.researchgate.net/publication/281778890_Prospective_Optimization_with_Limited_Resources

Standish Group. (2012). The CHAOS Report 2012. White paper. Boston.

Tanney, J., & Luntz, R. (2013). The Modern Modular: Prefab Houses of Resolution: 4 Architecture. United States: Princeton Architectural Press.

Thompson, C. (2013). Smarter Than You Think: How Technology is Changing our Minds for the Better. Penguin.

Project Dreamcatcher - Projects - Autodesk Research. (2014). Retrieved November 21, 2015, from http://autodeskresearch.com/projects/dreamcatcher



Glossary of Terms



General Terms

Building Information Modeling (BIM) - Tools, processes, and technologies that are facilitated by digital, machine-readable documentation about a building, its performance, its planning, its construction, and later its operation. (Eastman 2008, p. 586)

Constraints - Limitations imposed on design by regulation, material or structural considerations, or design goals.

Design Analysis - The process of undergoing performative analysis of a detailed design through computational programs that evaluate specific metrics such as glare, thermal gain or loss, and daylighting availability. This process is usually followed by manual adjustment of the building design to improve operational performance.

Designing-in Performance - The process of identifying design goals to impose additional constraints which guide development. This method has the goal of including operational and other performative questions early in the design process.

Design Problem - The question or set of questions that drives design development throughout the process. It prioritizes the elements of the design for a loose tradeoff analysis that the architect indirectly performs through spatial exploration.

Design Optimization - The process of approaching a design problem through the formulation of relationships, followed by convergence towards a single ideal solution driven by analysis. (Mattson, 2014)

Design Exploration - The process of approaching a design problem with the assumption that the optimal design solution is initially unknown and uncharacterized, with clarification of the optimal solution emerging from a convergent/divergent cycle of exploration and optimization. (Mattson, 2014)



Information - Quantitative input concerning a subject.

Knowledge - The collective understanding of an individual, gained through experience or education in a subject.

Tool - An object or computational routine used to enable the improvement or automation of a task performed by a human user.

Architect - "A person who is licensed to design buildings, prepare and issue construction documents, and administer construction contracts. In all states, either the term "architect" or the term "architecture" is statutorily defined." (Hayes et al 2013, p. 1118)

Architecture - "The art and science of conceiving or executing building programs, in particular the practice of designing buildings and administering contracts for their construction." (Hayes et al 2013, p. 1118)

Computer-aided design (CAD) - "A term applied to systems or techniques for design and drafting using integrated computer hardware and software systems to produce graphic images." (Hayes et al 2013, p. 1121)

Construction documents (CDs) - "Drawings and specifications prepared by the architect setting forth the requirements for the construction of the project." (Hayes et al 2013, p. 1122)

Design development documents - "Drawings and other documentation that fix and describe the size and character of the entire project with respect to architectural, structural, mechanical, and electrical systems; materials; and other elements as may be appropriate." (Hayes et al 2013, p. 1124)

Design development services - "Services in which the architect prepares the design development documents from the approved schematic design studies, for submission to the owner for the owner's approval." (Hayes et al 2013, p. 1124)



Integrated project delivery (IPD) - "A project delivery approach that integrates people, systems, business structures, and practices into a process that collaboratively harnesses the talents and insights of all participants to optimize project results, increase value to the owner, reduce waste, and maximize efficiency through all phases of design, fabrication, and construction." (Hayes et al 2013, p. 1127)

Interoperability - "The ability for software applications to exchange information directly through open industry standards. This capability supports effective collaboration between project participants." (Hayes et al 2013, p. 1128)

Parti - "A scheme or concept for the design of a building." (Hayes et al 2013, p. 1131)

Preliminary drawings - "Drawings prepared during the early stages of the design of a project." (Hayes et al 2013, p. 1132)

Program (architectural or facilities) - "A written statement setting forth design objectives, constraints, and criteria for a project, including space requirements and relationships, flexibility and expandability, special equipment and systems, and site requirements." (Hayes et al 2013, p. 1133)

Programming - "Typically, the foremost area of analysis for any project... (which) identifies the type and number of spaces within a building as well as the requirements for each space... In addition, the building program describes the functional adjacencies... Ultimately, to be effective, both the general building type and the specific building requirements must be fully understood before the design process can advance." (Hayes et al 2013, p. 657)

Schematic design - "Services in which the architect consults with the owner to ascertain the requirements of the building project and prepares schematic design studies consisting of drawings and other documents illustrating the scale and relationships of the building components for approval by the owner. The architect also submits to the owner a preliminary estimate of construction cost based on current area, volume, or similar conceptual estimating techniques." (Hayes et al 2013, p. 1135)



Design - "A unique analytical process that involves two fundamental procedures: understanding a project's multiple parameters and synthesizing these parameters into a holistic strategy... It is a rigorous, methodical process of inquiry and invention." (Hayes et al 2013, p. 657)

Generative logic - "The "vision" or the "concept" that guides and directs the design process. In short, it is a core set of architectural values that result from the established goals and prioritized analysis. These values are used to judge subsequent alternatives and determine which is superior and worthy of further refinement." (Hayes et al 2013, p. 660)

Iteration - "The act of solving and resolving a problem (through which) designers explore, develop, and document their concepts." This process is important because the full range of possible solutions can only be understood through the generation and evaluation of alternatives. (Hayes et al 2013, p. 660)

Evaluation - A process in which iterations are "judged relative to the established goals and generative logic." (Hayes et al 2013, p. 660)

Selection - A process in which the best design comes forth as "most consistent with the goals and logic created for the project... result(ing) in greater unity of expression and purpose. At its best, this process yields a result that appears simple, almost inevitable, as if it couldn't have been any other way." (Hayes et al 2013, p. 660)

Design process - "Comprise(d) of multiple waves of iteration and evaluation. Alternatives are generated. These alternatives are evaluated, and the most effective - those most consistent with the established logic and goals - advance. In turn, this evaluation leads to new, more refined alternatives being generated, evaluated, and advancing. Through this cyclical process, the solution is increasingly refined and improved." (Hayes et al 2013, p. 660)



Synthesis - "Drawing together the analysis and ultimately identifying the most applicable resulting strategy for exploration and refinement... (based on) a foundation of careful analysis and understanding." (Hayes et al 2013, p. 659)

Prioritization - A process in which "project data (is organized) in a way that is consistent with the established goals. Ideally, this prioritization provides focus and greater clarity." (Hayes et al 2013, p. 659)

Framework

Process - The conceptual approach to completion of a task, concerned with how we conceive of doing things.

Method - The work done to complete a task, concerned with how we do things.

Analysis - The approach to evaluation of a task, concerned with how we judge the outcome.

Actors

Designer - An individual involved with the production of a final product. In the context of this thesis, the term will either refer to an individual involved in multidisciplinary design of both computational and building systems towards a single final goal, or be preceded by a specifying descriptor such as "architectural" or "software". This definition draws an important distinction between designers who specialize in one area and designers who see all projects as feeding into one another, attempting to design the world as a cohesive ecosystem (what we do).

Software Designer - A designer of software.



Architectural Designer - A designer of buildings. Although this term can refer to individuals performing tasks that range from conceptual visualization to construction administration, this thesis is mainly concerned with its application within the stages of conceptual design. In this thesis, the architectural designer (or architect) refers primarily to designers who are specialized in building design and who use conventional processes.

Software Development Process

Goal-directed Design - A design method used in software development that emphasizes the separation of conception and programming to ensure final products are consistent with the needs of users.

Cognitive Augmentation - The idea that tools can be used to enhance the capabilities of human processing. Used most often in relation to advanced chess competition that allows for play with teams of computer and human.

Bottom-Up Method - A process of combining and piecing together smaller components to create a grander, more elaborate system. (Hogrefe, 2010)

Top-Down Method - A process that begins with an initial parti or big idea, which is successively rationalized and refined through progressive steps. (Hogrefe, 2010)

Software Development Methods

Visual Programming - Element-based programming becoming popular in architectural workflows through interfaces such as Grasshopper or Dynamo.

Hard-code Programming - Text-based programming using languages such as Python, C++, or Java.



Architecture Development Processes

Architecture Process - The process by which conventional architecture firms engage with design, with special focus on the stages before detailed design development.

Systematic Design - Design that focuses on the systematic application of constraints within a specified framework to arrive at a final conceptual product for detailed development.

Architecture Development Methods

Parametric Definition - The creation of a series of relationships using programming methods to outline a limited number of possibilities suitable for the final product.

Generative Exploration - A method of design that utilizes computational processes to analyze a large number of possible solutions before arriving at a final outcome.

Constraint-based Design - A method of architectural design identified by case studies of the work of BIG, in which constraints imposed by the site and climate are the primary driver of design decisions, creating a unique expression of optimized function. One example of this method is the Beach and Howe tower by BIG.

Concept-based Design - A method of architectural design identified by case studies of the work of BIG, in which the form of the project is the primary driver of design decisions, emphasizing the visual representation of a metaphorical relationship. Examples of this method include the Reichstag addition by Foster and Partners and Meca by BIG.



List of Figures



Figure 1.1  Architectural Justification Categories (Design Thinking 2015, p 80)   04
Figure 1.2  Comparison between AIA and James Cummings of architecture process   08
Figure 1.3  Illustration of the conceptualization process (Hayes 2013)   10
Figure 1.4  MacLeamy's Curve (Davis 2013)   21
Figure 1.5  Boehm's Curve (Davis 2013)   22
Figure 2.1  Theoretical Framework Diagram   26
Figure 3.1  Cyclical Design Process   37
Figure 3.2  Major actions of the design process   37
Figure 3.3  Conceptual tools of goal setting and prioritization   38
Figure 3.4  Conceptual tools of goal setting and prioritization drive creation of the generative logic   38
Figure 3.5  Influence of generative logic on the design process actions of generation and evaluation   39
Figure 3.6  Influence of design process actions on the conceptual tools of goal setting and prioritization   40
Figure 3.7  The singular mental process containing the discrete actions of generation and analysis   41
Figure 3.8  The importance of rapid alternation between generation and evaluation during which the thought process occurs   43
Figure 3.9  Cognitive Limitation in Problem Solving   45
Figure 3.10  Novice Organization and Structure   46
Figure 3.11  Visual Perception - Blindness   48
Figure 3.12  Opportunistic Behavior in Design - Limitation on Restructure Logic   50
Figure 3.13  User Interface - Side Bar Concept   52
Figure 3.14  User Interface - Organization and sub categories   53
Figure 3.15  User Interface - Display within 3D environment   55
Figure 3.16  User Interface - Highlighting the currently relevant information   56
Figure 3.17  User Interface - Changes in form   57
Figure 3.18  User Interface - Goals and Priorities Selection   58
Figure 3.19  User Interface - Diagrammatic representation of the resulting generative logic   59
Figure 3.20  Combination of ideal configurations   60
Figure 4.1  Productivity and Intuitiveness Subjective Evaluation   86


Appendices



A.1 Existing Programs

Grasshopper
As architects engage more with parametric and generative design, it is important to understand the forces that prevent many practitioners from fully exploring the possibilities of visual programming. David Rutten developed the visual programming interface Grasshopper as an extension of the parametric history function in Rhino, to allow for more explicit definition of geometric relationships (N.A. 2015). Perhaps the most significant result of this development, though, was the increased access to parametric definition through interface abstraction of the text-based code previously required. Although a leap forward, the understanding necessary to engage with the most interesting aspects of visual programming still requires a large and specialized body of knowledge that many architects simply cannot, or do not care to, acquire. In testing and research of project Dreamcatcher, Bradner et al. concluded that a majority of architects surveyed believe that the complexity and level of abstraction demanded by visual programming transforms the practice of architecture away from designing (2014).
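To make the abstraction gap concrete: the kind of relationship a Grasshopper user wires together from a slider, a Series component, and a multiplication component corresponds to a few lines of text code. A minimal, hypothetical sketch in Python (the function and values are illustrative, not taken from the thesis prototype):

# A parametric relationship as text code: floor slab elevations driven
# by two parameters. In Grasshopper this would be a slider, a Series
# component, and a multiplication component.
def floor_elevations(floor_count, floor_to_floor):
    """Return the elevation of each floor slab."""
    return [i * floor_to_floor for i in range(floor_count)]

print(floor_elevations(5, 12.0))  # [0.0, 12.0, 24.0, 36.0, 48.0]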

Dreamcatcher

Introduction
Taking note of the benefits that constraints can have on design, the software company Autodesk has been attempting to facilitate the workflow that many architectural firms use through modeling, drafting, and BIM software. The research product Dreamcatcher sets itself apart by utilizing a method the team terms goal-directed design. The research team at Autodesk claims that "The main distinction between a goal-directed design approach and the typical approach today is that goal-directed design is a 180-degree flip: rather than coming up with a solution and describing it geometrically like you might in some of our tools today, the goal-directed design approach starts with the definition of a problem" (Project Dreamcatcher, 2014). Clearly defining the problem at the early stages of design enables the goal-directed design approach Autodesk employs to compute and generate solution forms through a series of simulation models. Autodesk defines this process by stating: "Loaded with design requirements, the system then searches a procedurally synthesized design space to evaluate a vast number of generated designs for satisfying the design requirements. The resulting design alternatives are then presented back to the user, along with the performance data of each solution, in the context of the entire design solution space" (Project Dreamcatcher, 2014).

[Process diagram; recovered step labels: Define the problem, Visualize the problem, Verify the problem, Simulate, Evaluate solution, Explore best solution, Develop Plans / Doc]
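The generate-and-evaluate loop described in these quotations can be paraphrased in code. The sketch below is a deliberately simplified illustration of goal-directed search, with hypothetical design variables and a single area requirement; it is not Autodesk's implementation.

# Simplified goal-directed search: generate candidates from a design
# space, score them against a stated requirement, return ranked options.
import random

def generate_candidate():
    # Hypothetical design variables: footprint dimensions and floor count.
    return {"width": random.uniform(20, 60),
            "depth": random.uniform(20, 60),
            "floors": random.randint(3, 40)}

def score(c, target_area):
    area = c["width"] * c["depth"] * c["floors"]
    return -abs(area - target_area)  # closer to the target scores higher

def goal_directed_search(target_area, n=1000, keep=5):
    candidates = [generate_candidate() for _ in range(n)]
    return sorted(candidates, key=lambda c: score(c, target_area), reverse=True)[:keep]

for alternative in goal_directed_search(target_area=40000):
    print(alternative)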

Analysis
Despite the pre-release status of project Dreamcatcher, examination and analysis of the project can be fairly thorough due to academic publications released by Autodesk relating to the project. Much of the study and analysis conducted on the project is based on these publications, as well as promotional material from the website. Further development of this case study will be undertaken if the authors can acquire a copy of the program for detailed examination, to provide further support for the speculative claims currently made in the analysis. The approach taken with Dreamcatcher is an innovative development with seemingly widespread application in the design and mass customization of consumer goods. However, the design environment that it attempts to place architects within seems at odds with the profession. Research undertaken by Autodesk to examine the relevance of this tool to the architectural profession concluded that, because architects wanted to use the tool in the early design stages to explore conceptual development, the levels of abstraction employed in the interface gave them too little control over the background processes (Bradner et al 2014). The limitations placed on design by operating within a parametric environment built with the logic of someone else can only result in a struggle between designer and computer. The importance of this was best expressed in the conclusions of another thesis project: "by limiting the user to only having control over the inputs and ignoring the relationships between them, the user is at risk of creating a rubber stamp of designers mental process" (Welch 2014, p 13).

Flux

Introduction
It is important to note that project Flux has four distinct aspects, three of which require detailed examination in the context of this case study. Initial project development in the Google X incubator revolved around a system termed Engineered Architecture, which was created by Eli Attia. Currently available offerings focus on data management and integration across multiple platforms for digital modeling and design. Development of the unreleased project Flux focuses on the development and adaptation of full-building parametric definitions. Although all of these aspects help to inform the work contained in this thesis, the unreleased project is most relevant to our explorations and therefore receives the most focus. The explicitly stated methodology of this project is to design buildings from a parametric seed, to be populated and rapidly executed by the thousands across a variety of sites. This methodology is supported by the project's stated goals of enabling the creation of large numbers of new buildings to accommodate a growing and increasingly urbanized population, with drastic reductions in building timelines, cost, and operational inefficiency (Carlile 2014).

Analysis
Although an important project to include for study within this thesis, examination and analysis of Flux is difficult due to a lack of available sources - likely stemming from ongoing litigation regarding intellectual property disputes. Despite these difficulties, the information the authors have obtained from available resources is essential for the theoretical development of this project. The involvement of Eli Attia with the project poses interesting questions about the original nature of proposed interactions between architect and computer. Presentations demonstrating the functional aspects of the unreleased Flux project raise further questions about the difficulties involved with streamlining and automating aspects of the architectural process. Finally, examination of currently available Flux projects focused on data management and transfer reveals new and interesting ways in which multiple programs might be used in concert within conventional architectural processes. It is interesting that Flux seems to be frontloading effort into the development of detailed parametric definitions and an information management infrastructure which theoretically enables unlimited flexibility to change design decisions throughout a project. However, speculative analysis of the available material suggests that this flexibility will be fairly limited. This is especially relevant considering the brittle nature of many full-building parametric models when attempting to respond to changes in the fundamental logic of the written scripts (Davis 2013, p 48). From this it is especially important to note that the construction of a single parametric model that can be used from the start to finish of a project is difficult, if not practically impossible. The metaphor of the parametric definition as a seed that grows into a unique form within each context is similar to the idea behind project Dreamcatcher, but the interaction shown in demonstration videos suggests an even greater level of abstraction (Carlile 2014). This limited method of direct interaction and the controversy surrounding the release of Attia from the project lead the authors to speculate that this project creates a very limited role for the involvement of the architectural profession. Although small residential and light commercial structures often do not require the involvement of an architecture or engineering professional, the large-scale projects that Flux seems to focus on will require someone to assume liability for the design. With the reduced timelines and costs sought, will professionals be able to do their due diligence regarding compliance? Will the trades be able to quickly adapt to the changing practices required for these new buildings?

So, to restate the conclusion from our analysis of Flux in a more concise and complete way: the development of a functional system must come out of productive and managed interaction between designer, client, and machine - of which the designer maintains complete control. A processor can optimize and iterate, but a good designer adds magic through detail and imaginative interpretation.
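The parametric seed metaphor can likewise be sketched: one definition, instantiated against the constraints of many sites. The site records and the massing rule below are hypothetical illustrations of the idea, not Flux's actual definition.

# One parametric "seed" applied across several sites; all values are
# hypothetical. Floor count is capped by height limit and by FAR.
def grow_seed(site):
    """Instantiate a building massing from site constraints."""
    footprint = site["lot_area"] * site["max_coverage"]
    floors = min(site["height_limit_ft"] // 12,               # assume 12 ft floors
                 site["far"] * site["lot_area"] // footprint)  # FAR cap
    return {"site": site["name"], "footprint": footprint, "floors": int(floors)}

sites = [
    {"name": "A", "lot_area": 10000, "max_coverage": 0.4, "height_limit_ft": 150, "far": 6},
    {"name": "B", "lot_area": 25000, "max_coverage": 0.6, "height_limit_ft": 85,  "far": 2},
]
for s in sites:
    print(grow_seed(s))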


Context Analysis
These case studies provide an interesting opportunity to examine recent strides in the profession relating to generative design software. Conclusions that are especially important to highlight for further discussion include:

• the successful application of generative exploration to product design and rapid prototyping by project Dreamcatcher
• the development of an interconnected framework for information transfer by Flux

Findings regarding the successful application of systematic approaches within the conventional architecture process suggest that the focused and proper application of systematic and generative processes is fundamentally compatible with architectural design development. However, the greater goals of the studied software projects raise many unanswered questions regarding the role of architects within generative design frameworks. In the opinion of the authors, the use of these programs will likely have far-reaching consequences for the architectural profession and the development of the built environment.


A.2 Building Typology and Case Study
The remainder of this section addresses the comments and feedback from the jury members during the Mid-Term review of Winter Quarter. In order to provide the most relevant lessons for future development of the thesis, the analysis addresses the following themes: (1) thesis objectives; (2) comparative analysis procedures; (3) selection of parameters; and (4) presentation quality. The main objective of this thesis is to create a prototype interface that uses generative exploration to enable more intuitive and productive interaction between architect and computer during the schematic design stage. In particular, the relationship between process and goal is defined not by independent variables but by a convergence of several topics within architecture. For this reason, much of the critical feedback received centered on the recommendation to explicitly clarify the thesis's objective.

To address the issue in a comprehensive manner, the explanations that follow are rooted in the comparison between conventional and computational workflows among architects. Although further exploration will be required, the intention is to draw a correlation between case studies that are highly responsive to design conditions and the application of systematic design methodologies. In this context, the development of process can serve as a comprehensible way to illustrate how the thesis's final prototype fits within the architectural design process. Based on the issues and ideas brought to light through studies of the architectural process (Lawson p 39), it can be demonstrated that architectural design parameters are important for the development of the problem description. Regardless, the notion of finding a design solution does not change with the introduction of computational design in architecture. Whether parametric methods are implemented or not, the architect's experience and knowledge remain critical to arriving at appropriate solutions. Lawson expands on these ideas (Lawson p 39) by demonstrating that the architectural design process does not necessarily progress by addressing gradually smaller levels of detail. Instead, the design process will often move from one scale to another in order to inform the design by its most important aspects. Therefore, it should be recognized that the proposed tool may not serve as the basis for schematic design decisions in all cases. However, the questions addressed with the tool are fundamental functional issues, so regardless of the parameters chosen within the tool, they represent factors that architects should be aware of at every stage of design. "The more experienced final year architecture students consistently used a strategy of analysis through synthesis. They learned about the problem through attempts to create solutions rather than through deliberate and separate study of the problem itself." (Lawson p 43)

Despite a wide variety of methodologies for approaching architectural design, case studies of the design process find one common theme shared by the majority of early-stage design processes. "In each case a group of sub-elements of the overall problem has been clustered together and elevated to the role of form generator" (Lawson 1997, p 192). In essence, a few key factors must be identified as most important (goal setting), these factors must be weighed against each other (prioritization), and then one or many solutions created (generation). Further development of the ideas comes from critique of the project (analysis and testing), after which the initial criteria or designs are revisited (refinement and filtration). Although simplified diagrams of the design process do not convey the totality of issues at play, examinations of real-world situations invariably fall into these identified categories, which can ultimately inform the overall presentation and functionality of the finalized interface framework. A minimal sketch of this cycle follows.
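The sketch below illustrates the goal setting, prioritization, generation, and evaluation steps named above; the goals, weights, and alternatives are hypothetical.

# Goal setting and prioritization as weights; generation as a set of
# alternatives; evaluation as a weighted score.
weights = {"daylight": 0.5, "cost": 0.3, "views": 0.2}     # prioritization

alternatives = {                                            # generation
    "slab":  {"daylight": 0.8, "cost": 0.6, "views": 0.4},
    "tower": {"daylight": 0.6, "cost": 0.4, "views": 0.9},
}

def evaluate(scores):
    """Weighted sum of normalized goal scores."""
    return sum(weights[goal] * s for goal, s in scores.items())

best = max(alternatives, key=lambda a: evaluate(alternatives[a]))
print(best, {a: round(evaluate(s), 2) for a, s in alternatives.items()})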


Building Typology
The office typology was chosen for this project to provide specificity for the generally applicable design rules that this parametric system seeks to implement. Towards that goal, the high-rise subtype was specified, and both illustrative and survey case studies were pursued to identify parameters related to programming and schematic design. From this research several important elements emerged which will be instrumental in developing the general parametric systems already created, including:

• Standard structural systems (bay and floor-to-floor dimensions)
• Standard ranges of square feet per space
• Standard ratios of open to enclosed offices
• Standard services provisions per employee
• Standard ranges of USF and RSF per employee

Further research into these and other related concerns, such as construction type, vertical circulation requirements, and the link between core and floorplate dimensions, will guide further improvement and usability of the parametric scripts that are defined by these factors. A worked example using these standards appears after the diagram below.

[Diagram: building height classifications - Low-rise: 1-3 stories; Mid-rise: 4-12 stories; High-rise: 13-50+ stories]
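To make these standards concrete, here is a small worked example. The headcount, USF-per-employee figure, and load factor are illustrative values chosen from the ranges cited in this appendix, not recommendations.

# Estimating required area from headcount using USF-per-employee
# standards and a USF-to-RSF load factor (both values are illustrative).
def required_area(employees, usf_per_employee=150, load_factor=1.25):
    usf = employees * usf_per_employee   # usable square feet
    rsf = usf * load_factor              # rentable square feet (adds core, shafts, lobbies)
    return usf, rsf

usf, rsf = required_area(400)
print(f"400 employees -> {usf:,.0f} USF, {rsf:,.0f} RSF")
# Against an 18,000 sq ft leasable floorplate (see the Seagram case study
# below), 75,000 RSF is roughly four floors.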

Case Study
Client: Seagram Liquor Company
Architects: Mies van der Rohe and Philip Johnson
Principal Consultants: mechanical engineers: Jaros, Baum & Bolles; structural engineers: Severud-Elstad Krueger; electrical engineer: Clifton E. Smith; lighting consultant: Richard Kelly; landscape architects: Charles Middeleer & Karl Linn; general contractor: George A. Fuller Company



Floor Area: 46,000 m²
Height: 38 stories; 156.97 meters
Site: 375 Park Avenue, New York, NY 10022, USA
Schedule: Commissioned 1954; completed 1958
Workspaces: From 1990 to 2013, over 467 interior renovation projects were undertaken, suggesting an accelerated rate of change on the interior compared with the relatively static shell. In 2005, following the building's acquisition by RFR Holdings, costs for interior renovations reached 20% of the original construction cost of the building. The floorplates are somewhat compromised in how they conform to typical "Class A" office space in New York City. The floor-to-floor height of 12 feet is considered low by today's standards; the distance between bays, about 30 feet, is narrow; and the floorplate is shallow, with the core taking up a good deal of that area, leaving about 18,000 ft² of leasable space. This is also considered small by today's standards, with a typical office tenant seeking 40,000 ft² or more.
Circulation: Movement through the building is handled by three banks of six elevators, with two staircases on each side. From the eleventh floor to the top floors, the middle bank of six elevators is omitted and the space is used for restrooms instead.
Mechanical Systems: The conditioning system involves both dedicated perimeter units and general area conditioning units. The perimeter system serves only to buffer thermal bridging through the envelope.
Envelope/Materials: The 38-story structure combines a steel moment frame with a steel and reinforced concrete core for lateral stiffness. The concrete core shear walls extend up to the 17th floor, and diagonal core bracing (shear trusses) extends to the 29th floor.
Zoning: The building occupies only 40 percent of the allowable zoning envelope and sits 90 feet (27.5 meters) from the lot line. Recognizing the positive urban implications of such a plaza, New York City planning authorities, engaged in 1961 to revise the Zoning Resolution of 1916, viewed the Seagram project as a model for the ordinance, which ultimately resulted in the city's FAR (floor area ratio) based code.


Structural System: Steel frame with curtain wall, bronze exterior “columns”
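The zoning discussion above invites a quick arithmetic sketch of FAR and envelope utilization. The lot area and FAR below are hypothetical placeholders (the case study does not state them); only the 40 percent utilization figure comes from the text.

# FAR arithmetic: allowable floor area and envelope utilization.
# Lot area and FAR are hypothetical; 40% utilization is from the case study.
def allowable_floor_area(lot_area_sqft, far):
    return lot_area_sqft * far

lot_area, far = 40000, 15
allowable = allowable_floor_area(lot_area, far)
built = 0.4 * allowable                       # Seagram-style 40% utilization
print(f"Allowable: {allowable:,} sq ft; built: {built:,.0f} sq ft")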

Connections: Contain; Transition; Free Flow

Standard Spaces Guidelines: GSA Workspace Utilization Benchmark Report 2012; State of Washington Space Allocation Standards Manual

Square feet per employee:
Less than 75: 3%
75-100: 4%
100-125: 7%
125-150: 11%
150-175: 17%
175-200: 17%
200-225: 23%
More than 250: 19%

Space Types: Shared Workspaces / Assigned Workspaces

Typical Workspace Allocation:
Executive Private Office: 105-400 USF
Director Private Office: 75-300 USF
Manager Cubicle: 60-200 USF
Supervisor Cubicle: 52-120 USF
Technical Cubicle: 52-120 USF
Support Staff Cubicle: 48-80 USF
Clerical Cubicle: 64 USF
Junior Support Staff Cubicle: 48 USF
Shared Workspace: 40 USF
Average workstation: 80 sq ft
Support space (reception, conference, meeting, equipment, copy, etc.): 40 sq ft
Special areas (as needed): up to 19 sq ft
Internal office circulation: 56 sq ft


A.3 User Interface

Study of Existing Tools: Grasshopper, 3ds Max, Revit, Rhinoceros

Fundamental Principles of User Interface:
• Organize - provide the user with a clear and consistent conceptual structure
• Economize - do the most with the least amount of cues
• Communicate - match the presentation to the capabilities of the user


The principles break down further: Organize (consistency, screen layout, relationship, navigability); Economize (simplicity, clarity, distinctiveness, emphasis).

[Interface mockup: a Control Window with four main tabs (Programming, Analysis, Compliance, Massing) above a View Port; supporting panels include a 2D/3D Diagram Window, a User Input Window, an Analysis Information Window, and a Sub Category Window, with Save and Clear commands. The currently selected tab is highlighted in color while non-selected tabs are greyed out. Sample labels visible in the mockup include Path Property, Path Distance, and Visual Attractors (Existing Path, Flocking, Cohesion, Slope).]
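As an illustration of the tab and highlighting behavior shown in the mockup, a minimal state model might look like the following sketch. The tab names come from the mockup; the class itself is a hypothetical illustration, not code from the prototype.

# Minimal tab/selection state model: one active tab highlighted,
# the others greyed out.
TABS = ("PROGRAMMING", "ANALYSIS", "COMPLIANCE", "MASSING")

class ControlWindow:
    def __init__(self):
        self.active = TABS[0]

    def select(self, tab):
        if tab not in TABS:
            raise ValueError(f"unknown tab: {tab}")
        self.active = tab

    def render(self):
        # Bracket the current selection; lowercase the rest to suggest greying.
        return "  ".join(f"[{t}]" if t == self.active else t.lower() for t in TABS)

cw = ControlWindow()
cw.select("COMPLIANCE")
print(cw.render())  # programming  analysis  [COMPLIANCE]  massing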


A.4 Process Chart

[Progress chart, reconstructed from flattened labels:]

Programming (Phase 1)
Input: Program Categories; Program Spaces; Area Requirements; Adjacencies; Cost per square foot based on program
Output: 2D Box Diagram; Volumetric Box Diagrams

Site Analysis (Phase 2)
Input: Auto Build Context; Auto-pull Location Data
Output: Heating and Cooling Day Charts; Solar radiation on massing surface; Proximity of services; Sun Path Diagram 3D; Walking distances; Demographics; Wind Rose; Walking score; Pedestrian Circulation; View Analysis; Slope Intensity on Topography; Shadow Based on Context; Water Paths

Compliance (Phase 3)
Input: Identify Plot Lines; Identify Height Limit; Identify Setbacks; Identify Easements; Construction Types
Output: Zoning Massing; Revised Massing; FAR; Anticipated Structural System; Coverage

Massing
Output: Solar Adaptation; Topography Adjustment; more to come...
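The chart can also be read as a simple data structure in which each phase declares its inputs and outputs. The sketch below abbreviates the output lists and illustrates this organization rather than transcribing the Grasshopper definition.

# The progress chart encoded as data: each phase declares inputs/outputs.
PHASES = {
    "Programming": {
        "inputs":  ["Program Categories", "Program Spaces", "Area Requirements",
                    "Adjacencies", "Cost per sq ft by program"],
        "outputs": ["2D Box Diagram", "Volumetric Box Diagrams"],
    },
    "Site Analysis": {
        "inputs":  ["Auto Build Context", "Auto-pull Location Data"],
        "outputs": ["Heating and Cooling Day Charts", "Sun Path Diagram 3D",
                    "Wind Rose", "Walking distances", "Demographics"],
    },
    "Compliance": {
        "inputs":  ["Plot Lines", "Height Limit", "Setbacks", "Easements",
                    "Construction Types"],
        "outputs": ["Zoning Massing", "FAR", "Coverage", "Revised Massing"],
    },
}

for phase, io in PHASES.items():
    print(phase, "->", ", ".join(io["outputs"]))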



A.5 Custom Components

As part of the prototyping process, several custom components were created within Grasshopper to perform common operations for which there were no standard components. These custom components are useful in a variety of situations beyond those encountered in prototype development and would be beneficial for general release to the Grasshopper community.

[Component diagrams; port labels recovered from the flattened layout, grouped by component:
• Variable Scaling - inputs: Desired X Dimension, Desired Y Dimension, Primary Geometry, Dependent Geometry; outputs: Final X Dimension, Final Y Dimension, Dependent Geometry, Primary Geometry
• Download EPW - inputs: Filepath to EPW Link, Latitude, Longitude, Index of Weather Station; outputs: Closest Weather Station, Weather Station Links, EPW File, File Path, Time Zone, Wind Speed, Wind Direction, Dry Bulb Temperature
• Format Text for Display - inputs: Value, Label, Unit; output: Result
• Topo Mesh from Lines - inputs: Peak Lines, Lowest Edges, Other Topo Lines, Topo Line Height; output: Mesh
• Mesh to Surface - inputs: Mesh, Curve/BREP; output: Surface]
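As a sketch of what one of these components might do internally, the following GhPython-style function reconstructs Variable Scaling from its port labels alone. The scaling rule assumed here (desired dimension divided by the primary geometry's current bounding-box dimension) is a plausible reading, not the shipped definition.

# GhPython-style sketch of the "Variable Scaling" component, reconstructed
# from its port labels. Scale factors are the ratio of desired to current
# bounding-box dimensions of the primary geometry (an assumption).
import Rhino.Geometry as rg

def variable_scaling(primary, dependent, desired_x, desired_y):
    bb = primary.GetBoundingBox(True)
    sx = desired_x / (bb.Max.X - bb.Min.X)
    sy = desired_y / (bb.Max.Y - bb.Min.Y)
    xform = rg.Transform.Scale(rg.Plane.WorldXY, sx, sy, 1.0)
    scaled = dependent.Duplicate()
    scaled.Transform(xform)
    return scaled, desired_x, desired_y  # scaled dependent geometry + final dims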


A.6 Questionnaires



A.7 Presentation Fall 2015



A.8 Presentation Winter 2015



A.9 Presentation Spring 2016 Mid Term



A.10 Presentation Spring 2016 Final
