Does the sequence matter? Investigating the impact of the order of design decisions on the Life Cycle Performance
Master's thesis
Author: Julia Tschetwertak
Bauhaus University Weimar
Tutors: Vertr.-Prof. Dr.-Ing. Sven Schneider, Prof. Dr. Reinhard König
Submission date: 18.11.2016
Abstract

Design as a process is not a new topic in architecture, yet some theories remain widely unexplored, such as the multi-stage decision-making (MD) process. Rittel (1992) introduced this approach to solution finding as a design method that provides multiple design solutions for one design problem. The process is characterized by stages: in each stage, multiple new solutions are created from each existing design solution by adding components. After the last stage, the planner is left with numerous design solutions to choose from. The MD process enables design exploration, which can lead to finding satisfying design solutions. Such a process cannot be conducted manually because of the high number of calculations it requires; a computational method is therefore needed to support it. However, a computer-aided method that applies the MD process does not yet exist. If such a tool is to be developed, fundamental and detailed theoretical knowledge about the MD process becomes necessary. The presented thesis focuses on the impact of sequence, also referred to as stage order, on design solutions in MD processes. Research questions are posed in order to guide the investigation process. Multiple MD processes with different sequences are conducted within a case study and the results are visualized in corresponding MD trees. A generative design system was developed to undertake this study. As an example of architectural design problems, the Life Cycle Performance (LCP) of residential buildings is chosen. Life Cycle Assessment evaluates the environmental performance of buildings over their whole lifespan. Three main building components are included: the building volumes, the circulation cores, and the construction types. In the end, all generated results are compared for each part of the case study and possible findings are discussed. These findings should serve as basic theoretical knowledge of MD processes.
Table of contents

Abstract
1 Introduction
2 Background
  2.1 Design theory
  2.2 Computational design methods
  2.3 Life Cycle Assessment of buildings
3 Case study
  3.1 Parametric model generation
  3.2 Building model analysis
    3.2.1 Solar analysis
    3.2.2 Life Cycle Assessment and Energy analysis
  3.3 Multi-stage decision-making trees
  3.4 Design sequences and optimization processes
4 Results
  4.1 Combining stages
  4.2 Changing the stage order
    4.2.1 Maximizing the Life Cycle Performance
    4.2.2 Fixed Life Cycle Performance
  4.3 Single-stage optimization
5 Discussion
  5.1 Combining stages
  5.2 Changing the stage order
    5.2.1 Maximizing the Life Cycle Performance
    5.2.2 Fixed Life Cycle Performance
  5.3 Single-stage optimization
6 Conclusions
7 Outlook
References
Appendix
  A. MD trees (visualized with building envelopes)
  B. MD trees (visualized with solar radiation on site)
  C. Grasshopper Algorithms
Author's Declaration
1 Introduction

Several design maps have been developed in the past with the aim of structuring the architectural design process and thereby facilitating the search for design solutions to design problems. However, most of these maps contradict the characteristics of design problems, which can have a negative impact on the outcome. Rittel (1992) suggests a multi-stage decision-making (MD) process which delivers multiple design solutions for the planner to choose from. In order to support design exploration, a computer-aided application of the MD process is required. However, such software for architectural design does not yet exist. If a computational tool based on this process is to be developed, fundamental as well as more detailed theoretical knowledge about the MD process becomes necessary. In architectural theory, the topic of sequence, also referred to as stage order, in MD processes is widely unexplored. The presented thesis addresses this subject. In particular, it focuses on the impact the stage order can have on design solutions. A case study is used to investigate different sequences of MD processes in order to address the following research questions, which form the basis for this thesis:

(1) How does the fragmentation of a design problem into sub-problems (stages) affect the resulting design solutions? Is combining certain stages into one stage beneficial for finding better performing solutions?

(2) What impact does the stage order have on design solutions? Can swapping stages lead to better solutions?

(3) How does the performance goal impact the design solutions? Is there a difference in solutions when setting a fixed value as the performance goal in comparison to a performance value maximization/minimization approach?

(4) Are design stages necessary at all, or can results best be achieved by optimizing the design problem as a whole?
For the purpose of research on these questions, a basic generative design system was developed. It involves a generation mechanism which, according to Rittel (1992), is needed for the generation of variety (design variants), and an evaluation mechanism for the reduction of variety. Evolutionary Algorithms are used for the optimization processes within the MD processes. As an example of architectural design problems, this case study focuses on the Life Cycle Performance (LCP) of residential buildings. Life Cycle Assessment is becoming more and more important in architectural practice as it evaluates the environmental performance of buildings over their whole lifespan. Considering LCP at the very early stages of the design process can have a significant impact on the overall LCP of the building. The design problem is kept basic by including only three components: the building volumes, the circulation cores, and the construction types. This simplified approach should allow a clearer overview of how the sequence of MD processes influences the design solutions. With regard to the research questions, multiple MD trees are created, each with a different sequence. An MD tree visualizes the generated design solutions according to the stages in which they were generated. This facilitates the comparison of solutions, which is of high importance for this research.
2 Background
This chapter provides the theoretical background knowledge which is necessary for the case study conducted in this thesis. Design problems are defined and different design processes are discussed. Furthermore, an overview of computational design methods is included, which involves information about the creation of generative design systems. Another topic is the Life Cycle Assessment of buildings, which is used to evaluate the generated designs in the case study.
2.1 Design theory

In the following chapter, design problems are defined and their characteristics explained. Different design processes are discussed with regard to their operational logic, aims, advantages, and disadvantages.
Characteristics of design problems

Design can be defined as "a goal-directed problem solving activity" (Archer, 1964, p. 51). Matchett (1968, cited in Lawson, 2006, p. 32) understands it as "the optimum solution to the sum of the true needs of a particular set of circumstances." These descriptions imply the existence of a problem. A problem arises when a discrepancy between a present and the desired situation exists and, concurrently, the necessary actions to achieve the desired state are partially unknown (Dörner, 1976). In the field of architecture, for instance, a present condition could be an undeveloped site and a limited budget. The desired state for that scenario could be the construction of a residential building which is highly energy efficient, inter alia by implementing passive design strategies (daylighting, natural ventilation, etc.). The means to achieve the desired condition are not entirely identified since no planning exists for that site yet. Therefore, a design has to be proposed as a solution for this problem. In distinction to well-defined problems with a goal and an end, design problems are often labeled as ill-defined (Rowe, 1987) or as "wicked problems" (Rittel & Webber, 1973, p. 155). These problems are characterized as follows:
(1) Design problems are not fully determined from the beginning but rather are progressively discovered in the course of the design process (Rittel & Webber, 1973). Lawson (2006, p. 56) comments that "they are often not apparent but must be found." Especially when unexpected side effects occur during the process, the aims of the design problem can shift (Simon, 1973). Thus, framing the design problem goes hand in hand with the problem-solving (Rittel, 1992).

(2) A variety of criteria can serve the evaluation of one design solution (performance criteria). However, the evaluation of some performance criteria cannot be conducted objectively because they are classified as unquantifiable (e.g. privacy, openness of a space) (Schneider, 2016). Furthermore, performance criteria can contradict each other, which results in compromise solutions (Radford & Gero, 1985). Lawson (2006) describes this kind of design problem as multi-dimensional. In buildings, the window is an excellent example of a multi-dimensional component. Its purpose is to let in daylight and enable natural ventilation. Simultaneously, as an interruption in the external wall it causes heat loss and noise transmission. Enlarging the window area to increase the amount of daylight in the building would lead to higher heat loss as well as noise transmission. In other words, in multi-dimensional problems one performance criterion cannot be improved without having a negative impact on at least one other criterion.

(3) The number of variables (form variables) which can be varied to generate a solution is considerably high (Flemming, 1990). Goel and Pirolli (1992, p. 396) claim that "the kinds of knowledge that may enter into a design solution are practically limitless". The end and the means for the solution are mostly unknown, meaning that no clearly defined outline of the potential solution space exists (Rittel, 1992). In consequence, various alternative solutions may deliver the desired result for one design problem (Rowe, 1987). The solution space contains all possible solutions for a design problem (Tank, 1992).

(4) Performance criteria and form variables are in a Many-to-Many-Relation to each other (Flemming, 1990; Akin & Sen, 1996), meaning that one performance criterion can be influenced by various form variables and one form variable can influence multiple performance criteria (Schneider, 2016). "This makes it impossible to consider any (design or performance) variable in isolation." (Flemming, 1990, p. 209) In the design of a residential building, for example, the room sizes as performance criteria are influenced by multiple form variables like the floorplan layout, the building form, etc. Therefore, Lawson (2006, p. 58) describes design problems as "highly interactive. Very rarely does any part of a designed thing serve only one purpose."

Furthermore, the representation of a design problem is key when it comes to problem-solving. To a great extent, it determines whether a solution can be found (Strube et al., 2000). The way a problem is expressed reflects the assumptions the planner made about the possible solution. Hereby, the knowledge implemented by the planner can help to find solutions more efficiently. However, incorporating false or too restrictive assumptions can hinder the solution finding.

In theory, it is possible to generate all potential solutions for one design problem, yet practically it would take an extreme amount of time. That implies the need for design strategies which may not find the one optimum solution, but multiple solutions which can be regarded as good enough from the planner’s point of view (Rittel & Webber, 1973). In that context, Simon (1996, p. 119) introduced the term "satisficing" for solutions which are perceived as good enough. From these solutions, the designer can choose one design variant for further development. Through different search processes (design processes), which are discussed in the subsequent chapters of the presented thesis, satisfying design solutions can be found (Newell & Simon, 1972; Kalay, 1987; Eastman, 1969). Mostly, these processes are bounded by the planner’s limited cognitive capacities (Miller, 1956; Dörner, 1980) and by time constraints typically resulting from project deadlines. Lawson (2006, p. 55) claims that "there is no natural end to the design process" and the planner can never be completely sure when a design problem has been solved. However, the planner can stop pursuing the matter further when, in his judgment, it is of no benefit for the problem-solving. Since there is no natural end to the design process, deciding how much time should be put into the search process is a tough task. "In design, rather like art, one of the skills is in knowing when to stop." (Lawson, 2006, p. 55) Nevertheless, in most cases the planner ends the search process for reasons which do not result from the logic of the design problem but rather from a lack of time, money or endurance (Rittel, 1992). In consequence, the complexity of the design problem can get reduced and significant parts can remain disregarded. That can happen, for instance, through the prioritization of some performance criteria and form variables (Katz, 1994) or the application of other (restrictive) design principles.

Gero (1990, p. 28) provides a definition of the design process which summarizes the above: "Design activity can be now characterized as a goal-oriented, constrained, decision-making, exploration, and learning activity that operates within a context that depends on the designer’s perception of the context." According to this description, the four main characteristics plus the other properties of ill-defined design problems portray some of the difficulties that planners face during the search for design solutions. Whenever a problem-solving system (design system) needs to be applied to a design problem, these characteristics must be considered as the theoretical foundation. They are crucial for understanding the possibilities as well as the difficulties that come with design problems.
Maps of the design process

Lawson (2006) titles different structures of the design process (search process) as maps. "The common idea behind all these 'maps' of the design process is that it consists of a sequence of distinct and identifiable activities which occur in some predictable and identifiably logical order." (Lawson, 2006, p. 33) It seems rational that the planner can follow certain steps in order to proceed from the initial stage of defining a design problem to the final solution. Some of the major design maps which have appeared in the literature over the past fifty years are discussed in the following.

Markus (1969) and Maver (1970) developed a map of the architectural design process. A decision sequence and a design process are the two main components of that map (see Figure 1). With each step of the design process, the level of detail increases. For each of those steps, the planner needs to go through the decision sequence of analysis, synthesis, evaluation and decision (Lawson, 2006). First, a short definition of the parts of the decision sequence: Analysis is the understanding and framing of the design problem. It includes exploring relationships, recognizing patterns and structuring the problem. Furthermore, it sets the evaluation criteria for the design solutions as well as the performance goal. The design performance expresses how a design solution behaves (performs) in relation to the evaluation criteria. In the synthesis part, the designer responds to the problem in the form of design solutions. Evaluation contains the performance appraisal of the created solutions and compares the outcome to the performance goal defined in the analysis phase. Unlike the linear one-way connection from analysis to synthesis, the connection between synthesis and evaluation is a loop. That return loop enables the direct evaluation of design proposals, meaning that if the generated solution does not achieve the desired performance, the synthesis phase needs to be reconsidered. This repetitive process continues until the planner decides to move forward to the decision phase (see possible reasons in chapter 2.1). In that last step, the designer chooses which design solution to retain for further development.
Figure 1 Markus/Maver map of the design process (Lawson, 2006, p. 37)
However, that map neglects the possible necessity of multiple return loops. The creation of a design solution or its evaluation might suggest that more analysis is needed (Page, 1963). That leads to the conclusion that the Markus/Maver map should include a loop from each part of the decision sequence to all the other parts, as Figure 2 shows (Lawson, 2006). Loops are a crucial element of the design process in general. They support the design problem characteristic of framing the problem within the problem-solving process (see chapter 2.1(1)), as they allow the planner to return to any preceding part of the decision sequence and incorporate knowledge gained in subsequent phases.
Figure 2 Markus/Maver map with additional loops (Lawson, 2006, p. 38)
The Markus/Maver map poses another problem: the level of detail increases with each stage of the design process, yet in reality this progression is less clear. According to the stage order implemented in the Markus/Maver map, the planner starts with outline proposals, continues with scheme design and finishes with detail design (Lawson, 2006). That design sequence may seem logical from a superficial point of view. However, examples from architectural practice (e.g. Eva Jiricna starts problem-solving with detail design) prove that designers do not necessarily follow that particular stage order and still find reasonable design solutions. Venturi (quoted in Lawson, 2006, p. 39) claims that "you don’t necessarily go from the general to the particular, but rather often you do detailing at the beginning very much to inform." Therefore, Lawson introduces his own version of the graphical representation of the design process (see Figure 3). His map has neither a firm decision sequence nor predefined design stages; instead, he shows analysis, synthesis and evaluation being linked in an iterative cycle (Lawson, 2006). Unfortunately, this map is not of great help to planners since it does not provide any guidance through the design process.
Figure 3 Lawson’s representation of the design process (Lawson, 2006, p. 40)
Gero (1990) proposes another approach to structuring the design process, called the Function-Behavior-Structure Framework (FBS-Framework). Its basis is formed by three aspects describing a design solution: The first aspect is function; in order to adjust to the terminology used in the presented thesis, it is renamed design problem. It involves the definition of the problem, which determines the requirements and goals the planner identifies for the design solution. Second is the aspect of behavior, which in this thesis is titled performance. It consists of two parts: one sets the evaluation criteria for the design solutions as well as a performance goal. The other part is the design performance, which expresses how a design solution behaves (performs) in relation to the evaluation criteria. The third aspect is structure. In this thesis, it is named design variant. The design variant is a possible design solution determined by form variables and their relationships.

Through the interconnection of these three aspects, Gero (1990) formed a structure (map) for the design process. Accordingly, Figure 4 shows eight design steps which the planner should follow to produce a design solution. These steps are formulation (1), synthesis (2), analysis (3), evaluation (4), documentation (5) and reformulation (6-8). Formulation (1) transforms the requirements expressed in the design problem into a goal performance. In the synthesis step (2), a design variant is produced which is intended to exhibit the goal performance. Analysis (3) derives the actual performance from the design variant. The evaluation (4) is conducted by comparing the performance of the design variant to the goal performance. If the resulting difference is zero or tolerable, the design variant is accepted as a design solution and can proceed to documentation (5). Documentation produces the description of the design solution, e.g. for construction. However, if the difference in performances exceeds the tolerance threshold, the design variant is not accepted. This case holds three possible ways to progress: First, the design variant can be reformulated (6) by applying changes to it or by producing another design variant. The second option is to reformulate (7) the performance goal and the third to reformulate (8) the design problem. Reformulation (6-8) is enabled by return loops. The reformulation of the performance goal (7) and the design problem (8) supports the design problem characteristic of the concurrence between framing the design problem and solving it (see chapter 2.1(1)).

With regard to increasing the efficiency of the described design process, the steps 2-4 and 6 can be automated (Schneider, 2016). This is possible thanks to the planner’s upfront formulation (1) of the design problem, the performance criteria, the performance goal and the form variables with their relationships, altogether representing the ill-defined part of the design problem. The reformulation of the performance goal (7) and the design problem (8) also belongs to this part. It cannot be conducted by computational methods since a machine is not capable of making these kinds of subjective decisions. Therefore, the computer is only asked to process well-defined parts of the design problem. The steps 2-4 and 6 can be regarded as well-defined parts because the decisions made by the planner during the other steps provide the framework for the automated solution generation. Automation allows the continuous production and evaluation of design variants within an iterative cycle (loop) until a design solution is found.

The design map introduced by Gero (1990) does propose a structure for the overall design process. However, it does not provide more specific information about a possible substructure, e.g. a division into sub-problems. That means that it is up to the planner’s own judgment to either impose a structure on the design problem or to generate design solutions to the design problem as a whole.
Figure 4 FBS-Framework based on Gero & Kannengießer, 2004; Schneider, 2016, p. 26
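The automated loop over steps 2-4 and 6 described above can be illustrated with a minimal sketch. The following hypothetical Python fragment is only a sketch under invented assumptions (the form variables, the performance criterion and all numeric values are placeholders, not the case study’s model); it keeps producing variants until one lies within the tolerance of the goal performance:

```python
import random

GOAL = 50.0        # goal performance, fixed in the formulation step (1)
TOLERANCE = 2.0    # accepted difference between actual and goal performance

def synthesis():                    # step (2): produce a design variant
    return {"x": random.uniform(0.0, 10.0), "y": random.uniform(0.0, 10.0)}

def analysis(variant):              # step (3): derive the actual performance
    return variant["x"] * variant["y"]

def evaluation(performance):        # step (4): distance to the goal performance
    return abs(performance - GOAL)

for attempt in range(10_000):       # guard against endless reformulation
    variant = synthesis()
    if evaluation(analysis(variant)) <= TOLERANCE:
        print("accepted design solution:", variant)   # would proceed to documentation (5)
        break
else:
    print("no solution within tolerance; reformulate goal (7) or problem (8)")
```

Steps (7) and (8), the reformulation of the performance goal and of the design problem, remain with the planner, since they involve subjective decisions, as discussed above.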
Another example of a design map can be found in the work of Alexander (1966). He addresses the complexity of design problems and therefore developed a mathematical system of decomposing design problems hierarchically into manageable chunks that can be processed by the human mind. Alexander exemplifies his idea with the design of a village, where each feature required by the brief gets assigned an individual score in relation to all the other features. Through a mathematical cluster analysis, groups of sub-problems are defined. The design problems within these groups are closely related and have minimal connections to other sub-problems. The planner would then search for design solutions for each of the groups and lastly combine the sub-solutions into one overall solution.

In theory, that design approach may appear logical regarding the vast complexity of design problems. However, that design map has not been used in practice for several reasons. Lawson (2004, p. 11) argues that Alexander’s approach failed because he "attempted to impose a structure on the problem–solution relationship in design which is simply not there." Furthermore, the Alexander map contradicts three of the main characteristics of design problems: First, the framing of the design problem is a continuous process that goes hand in hand with the problem-solving (see chapter 2.1(1)). Every time the planner develops sub-solutions for each of the groups, he gains additional knowledge about the problem. That implies that the designer might experience the need to reconsider the features or their assigned scores which were set at the beginning of the process. But since the map does not involve any return loop, changing the features is not possible. That limitation can influence the overall solution in a negative way. Another characteristic of design problems is their multi-dimensionality, where one performance criterion cannot be improved without having a negative impact on another criterion (see chapter 2.1(2)). By implementing the Alexander map, where the design problem is divided into groups of sub-problems, some multi-dimensional relations between performance criteria can get lost. That can be caused by the different design goals each group has for the generation of sub-solutions. According to these goals, different performance criteria are used to evaluate the solution within each group. The performance criteria of the groups are isolated from each other, meaning they have no impact on other groups beyond their own. Thus, important relations between the criteria might get neglected and possibly better compromise solutions might not even be considered. That can lead to design sub-solutions which cannot be assembled properly in the final step because of their possibly contradicting properties. Another characteristic of architectural design problems is that they are highly interactive (see chapter 2.1(4)). Most components in a design solution serve multiple purposes. Dividing the design problem into sub-problems, as proposed in
the Alexander map, implies that components can only serve multiple purposes within their own group. The lack of connections to other groups can limit some components significantly and lead to a worse overall solution.

Regarding the problems that come with the Markus/Maver map and the Alexander map, it can be concluded that whenever a design map is to be constructed, the characteristics of ill-defined design problems must be considered. Otherwise, the quality of the design solutions might suffer from unnecessary limitations imposed by the map structure.

Further theories about the architectural design process can be found in the work of Rittel (1992). He defines the structure of the design process as an alternating sequence of two elementary processes: the generation of variety and the reduction of variety. In order to find a design solution, at least one solution needs to be proposed by the planner for each design stage (generate variety). If the designer produces more than one solution for the same design problem in one stage, reasons have to be found to eliminate certain solutions (reduce variety). Ideally, the planner is then left with one design solution. Rittel (1992) states that various strategies can be used to approach a design problem. In order to adjust to the terminology used in the presented thesis, the processes have been renamed.

(1) The linear process: In that strategy, the designer relies on his experience, makes use of individual heuristics and listens to his own intuition when it comes to design. That process is of linear nature since the designer provides only one single design solution for every design stage (see Figure 5a).

(2) The scanning process: The planner attempts to solve the design problem with the first solution that he produces for each design stage (see Figure 5b). If that solution does not deliver the desired performance, the designer has to continuously create more solutions until he finds one design solution that meets the performance goal. Then, he can move on to the next design stage where he continues developing that solution. There, he again creates design solutions until he finds one that features the desired performance, and so on for the following stages. However, that strategy can only be applied if the planner is familiar with the topic of the design problem. Hence, he uses knowledge based on previous work and his own experience to generate design solutions.
(3) The simplified multi-stage decision-making process: After defining and structuring the initial design problem according to the design stages, the planner generates multiple design solutions (generate variety) (see Figure 5c). An evaluation filter which involves the performance criteria and the goal performance serves the reduction of variety. Ideally, in each design stage only one design solution passes through the evaluation filter, being the best in relation to the other alternative solutions. In the next design stage, the planner develops new alternative solutions from the best solution of the previous stage. The planner repeats that process according to the number of predefined design stages. During the processes of generating and reducing variety, the following scenarios may occur:

- The planner cannot think of alternative solutions and returns to the initial stage of defining the design problem: At that point, he can reformulate the problem and try to find solutions based on the newly altered design task.

- Multiple alternative solutions pass through the evaluation filter: According to the filter, all these alternatives feature a similar performance. From there, the planner can decide to either randomly choose one solution or to increasingly tighten the evaluation filter by implementing stricter performance criteria. The latter method can help to reduce the alternative solutions down to the point where the designer is left with only one solution which, in comparison to the other generated solutions, counts as the best.

- None of the alternative solutions passes through the evaluation filter: In that case, the planner has three options to proceed. First, he can return to the initial stage of defining the design problem, reframe it and start over with the solution generation. Second, he can generate more alternative solutions in order to find one that manages to pass through the filter. His third option is to weaken the evaluation filter or to create another filter which allows at least one design solution to pass through it.

These scenarios show which difficulties the planner may encounter in the application of the simplified multi-stage decision-making process and how he can overcome them to find a design solution according to the design problem. However, the designer can never be sure whether an alternative solution eliminated in a prior design stage could have ended up being the best solution after passing through more
stages. Furthermore, in order to find an optimum solution for a design problem, it is of benefit to search through a vast number of design variants (Rittel, 1992). The fourth strategy to approach a design problem is called the 'multi-stage decision-making process' and it embodies these two remarks. In that context, the term optimum design solution means that the difference between the performance of the design solution and the goal performance is either zero or tolerable.

(4) The multi-stage decision-making (MD) process: In the initial stage of that strategy, the design problem is framed and structured according to the design stages. Rittel (1992) does not define a specific order or structure for the design stages, in contrast to the Markus/Maver map where the level of detail increases with each design stage. Referring to Rittel (1992), defining the design stages is already a part of the design solution since it cannot be done objectively. Therefore, it is the planner’s obligation to determine the structure and the order of the design stages. The second stage contains multiple alternative design solutions (see Figure 5d). From there, each of these solutions is developed further by producing various alternative solutions in the next stage. The number of repetitions of that process depends on the number of design stages defined in the beginning. Since the MD process includes generation and reduction of variety, the scenarios explained for the simplified MD process may occur, and the planner can proceed accordingly. After the last stage, the designer is left with a large number of design solutions. Viewing the design solutions in comparison to each other and observing how they evolve over the stages can improve the designer’s understanding of the design problem. Furthermore, he might gain knowledge about the influence certain design components can have on the design performance or on each other due to interconnections. All generated design solutions in the different design stages are diagrammed in an MD tree. In the end, the planner has the option to choose the design solution with the best overall performance or, if multiple solutions with a similar performance exist, to decide by his own preference. However, if none of the solutions achieve the desired performance, the planner can return to the initial stage and reframe the design problem by incorporating the knowledge gained from the failed MD process. Moreover, in the MD process he can use return loops to implement changes to any design stage. Nevertheless, the planner must keep in mind that the MD process is of evolving nature, meaning that changes applied to
one stage have consequences for consecutive stages. The probability of the need for return loops decreases with an increasing number of generated alternative solutions.
Figure 5 Linear process (a), scanning process (b), simplified multi-stage decision-making process (c) and multi-stage decision-making (MD) process (d) based on Rittel (1992, pp. 78-81); Hollberg (2016, p. 60)
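To make the multiplicative structure of the MD process tangible, the following minimal sketch (hypothetical Python; the stage names and component labels are placeholders, not the actual model of the case study) expands an MD tree stage by stage and counts the resulting design variants:

```python
from itertools import product

# Hypothetical design stages, each offering a few alternative components.
STAGES = {
    "building volume": ["V1", "V2", "V3"],
    "circulation core": ["C1", "C2"],
    "construction type": ["timber", "concrete", "brick"],
}

# Each leaf of the MD tree combines one choice per stage; no branch is dropped.
leaves = list(product(*STAGES.values()))
print(len(leaves), "design variants after", len(STAGES), "stages")  # 3 * 2 * 3 = 18
```

Because the number of leaves is the product of the alternatives per stage, a manually conducted MD process quickly becomes tedious, which motivates the computational support discussed below.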
One major benefit of the MD process lies in the variety created by the number of design solutions. That facilitates a more efficient exploration of the solution possibilities in comparison to the strategies discussed above. Additionally, it reduces the risk of missing out on good solutions because no produced alternative design solution is dropped in any of the stages. In other words, the MD process holds the potential of discovering design variants which perform comparatively poorly in the early stages but whose performance can improve significantly in combination with aspects from the following stages. Despite the numerous advantages the MD method offers, it is currently impractical for regular use in practice due to the limited amount of resources and time that designers have for problem-solving. Especially if conducted manually, the MD process can get enormously tedious. The reason for that is the multiplicative growth of the number of design variants in every stage. A computational tool for the complete MD process has not been developed yet.

Similar to the Markus/Maver and the Alexander map, the MD approach presented by Rittel contradicts two of the four main characteristics of ill-defined problems. As design problems are multi-dimensional and highly interactive (see chapter 2.1), structuring the design problem into design stages is more complicated than it may seem at first. According to his own subjective understanding of the design problem, the planner has to define which components to design and how to evaluate them in every single stage. As he works only on one stage at a time, multi-dimensional relations between performance criteria can get lost if they occur in different stages and therefore cannot influence each other. This can prevent the creation of compromise solutions and lead to worse design performances. Similarly, the structure and order of design stages can impact the Many-to-Many-Relation between form variables and performance criteria in a negative way. These relationships make it difficult to find appropriate design stages. Although the MD process poses challenges to the planner, the benefits that come with this design system make it reasonable to invest further scientific research into the topic.

In summary, numerous design maps have been developed in the course of time. Most of them impose a structure on the design process to help the planner find design solutions according to a performance goal. Thereby, conflicts with the characteristics of design problems occur. That can result in limitations to the produced design solutions and impact their performance negatively. The MD process by Rittel (1992) holds the most potential to discover optimal design solutions. If conducted with computational support, the MD process can produce multiple alternative design solutions in a reasonable time frame. That can be of great benefit to planners for exploring possible design solutions. However, a computer-aided application of the MD process does not exist at this point in time. Consequently, no scientific research on the impact of the stage order on design solutions in the MD process has been undertaken yet. The presented thesis addresses this topic.
2.2 Computational design methods

This chapter provides the theoretical knowledge for generative design systems. Advantages and disadvantages of computational design methods are mentioned. The three main components of generative design systems are defined, one of which is the optimization process. A few examples of studies which conducted optimization processes are included. Furthermore, parametric modeling is introduced. Lastly, the criteria a generative design system needs to fulfill are summarized.
Automation of the design process

As discussed in the previous chapter, design can be perceived as a problem-solving process which aims to find design solutions according to a design problem. Mitchell (1975, p. 127) states more precisely: "problem solving can be characterized as a process of searching through alternative states of the representation [design variants] in order to discover a state that meets certain specified criteria". Computational methods can support this process by generating and evaluating a vast number of design variants in a short amount of time. Since algorithms do not privilege certain solutions for subjective reasons, they can be regarded as unprejudiced. That can be useful for finding unusual design solutions which the planner may not have thought of in the first place. An algorithm is "a finite list of well-defined instructions" (McQuain, 2011, p. 2), e.g. for generating and evaluating design solutions.
Generation of design solutions

In order to enable the application of a computational system for solution generation, the design problem needs to be translated into a form which can be processed by the system (Beckstein, 2000). In particular, form variables and rules for the design variant generation, plus the evaluation criteria with the performance goal, have to be established using a programming language and algorithms. The transformation of an object or a process into another representation form is called modeling and the result is a model. A model does not contain all the attributes of the original object or process, only the ones which are relevant with regard to the problem (Stachowiak, 1973). The representation of a design problem determines to a high extent whether a solution can be found (Strube et al., 2000). Moreover, "representation in a particular way can give the designer special insights into a problem, and significantly aid the process of solution generation" (Mitchell, 1977, p. 38). Therefore, it is very important for the whole problem-solving process and needs to be considered carefully.

Generally, two kinds of models can be utilized in algorithmic design processes: one model for the search process and a second model for the solution generation. The first model describes the strategy for finding a solution (algorithms for problem-solving like search algorithms, evolutionary algorithms, etc.). Knowledge about the characteristics of design problems is crucial for the selection of a suitable strategy (see chapter 2.1). The other model represents form variables and rules for design variant generation, plus criteria and methods for performance evaluation (Schneider, 2016). "A system’s design representation formalism and its set of operators for constructing designs define this space [of possible solutions]; they define the aspects of the final product that are included in the design, and the level of abstraction at which they are described." (Eckert et al., 1999) Both models can be used in conjunction in order to facilitate efficient problem-solving. The development of models for the design process is primarily the subject of computer science. In contrast, the definition of models for solution generation should preferably be conducted by the respective specialist field, which in this thesis is architecture. This knowledge of the discipline contributes to an adequate description of potential design variants (Schneider, 2016). It helps to identify form variables and performance criteria as well as their relationships to each other.

The solution generation model can be structured into two parts: a generation mechanism (GM) and an evaluation mechanism (EM). The GM, also called a generative system in the literature, can generally be defined as a mechanism that contains a finite number of elements and rules for their interconnection in order to generate objects (Mitchell, 1977). Because it is a mechanism, the system itself can generate design variants according to prescribed rules, which qualifies it for automation (Radford & Gero, 1988). The GM sets the algorithmic framework for the variant generation by implementing rules for
connections and relationships between form variables. Furthermore, it assigns numeric ranges to the form variables. Within these ranges, values are selected to generate different design variants. This workflow implies that the planner who sets up the framework needs to think ahead about which kinds of design variants can and cannot be generated by the GM. Building a GM is therefore already a part of the design solution. It is based on the planner’s subjective perception of the problem. Rittel (1992) illustrates the creation of a GM as a story that one tells about a design problem. A story can be told in numerous ways. Similarly, a GM can be composed differently. Consequently, if a design solution cannot be found, one possible strategy to overcome that difficulty could be to reconsider the form variables, their numeric ranges and/or their relationships, i.e. to tell the story differently. Accordingly, Gero (1990, p. 28) highlights that "designing involves exploration, exploring what variables might be appropriate." Further, Mitchell (1977, p. 38) concludes: "since the system [GM] can exist in many different states, we can also say that construction of the system implicitly establishes a set of potential solutions." Thus, design solutions can be found only amongst design variants which the GM is able to generate. Other design solutions which may exist for a design problem but cannot be generated by the GM are not considered in the problem-solving process. That poses the risk of missing out on solutions with possibly even better performances. Therefore, it is important to keep in mind that the less restrictive the GM is formulated, the larger the number of possible design variants and the smaller the risk of missing out on good solutions.

By means of an EM, which is the second part of a solution generation model, the produced design variants can be assessed. An EM contains evaluation criteria and a performance goal for design solutions. Both have to be described numerically to be processed by a computational system. The knowledge used for defining an EM is called requirement knowledge and the knowledge used for creating a GM is called generative knowledge (Tank, 1992). These two kinds of knowledge are derived by the planner from the design problem.
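As a rough illustration of these two mechanisms, the following sketch (hypothetical Python; the form variables, their ranges and the single performance criterion are invented for this example and are not the model used in the case study) shows a GM that picks values within numeric ranges and an EM that measures the distance to a performance goal:

```python
import random

# Generative knowledge: form variables and their numeric ranges.
RANGES = {
    "width": (8.0, 20.0),    # building width in m
    "depth": (8.0, 18.0),    # building depth in m
    "storeys": (2, 6),       # number of storeys
}

def generation_mechanism():
    """GM: generate one design variant by selecting values within the ranges."""
    return {
        "width": random.uniform(*RANGES["width"]),
        "depth": random.uniform(*RANGES["depth"]),
        "storeys": random.randint(*RANGES["storeys"]),
    }

def evaluation_mechanism(variant, goal_floor_area=1200.0):
    """EM: distance of the variant's performance to the goal (0 = goal reached).
    The performance criterion here is simply the gross floor area."""
    floor_area = variant["width"] * variant["depth"] * variant["storeys"]
    return abs(floor_area - goal_floor_area)

variant = generation_mechanism()
print(variant, evaluation_mechanism(variant))
```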
Two strategies which include the GM and the EM can be applied for finding design solutions: The first strategy is to formulate a GM which produces design solutions only. Since all of them achieve the goal performance, the EM becomes useless. For the second strategy, numerous design variants are generated (GM) and evaluated (EM). The ones achieving the goal performance are considered design solutions. The advantages and disadvantages of both strategies are discussed in the following.
Variant space, solution space and potential solution space

According to Tank (1992), the distinction between a variant space, a solution space, and a potential solution space is important for demonstrating the influence of requirement knowledge and generative knowledge on the quantity of design solutions. The variant space outlines all design variants which can be produced by a GM. It is determined by the generative knowledge which is applied to build the GM. The potential solution space, on the other hand, represents all possible design solutions to the design problem which achieve the performance goal. It is defined by requirement knowledge, which is derived by the planner from the design problem. However, it is possible that no solution to all applied requirements exists, i.e. the potential solution space contains zero solutions. That can be the case when requirements contradict each other. In the presented thesis, the potential solution space holds compromise solutions for that scenario (Schneider, 2016). When variant space and potential solution space overlap, a solution space is created. It contains all design solutions that the variant space and the potential solution space have in common. That is the complete set of solutions which can be found by searching through all design variants that can be created by the GM. Hence, the solution space is limited to a finite number of solutions (Breuker & Van de Velde, 1994). It contains only solutions that reach the performance goal.

In the following, different scenarios for the relationships between the three spaces are described (Figure 6). First, the solution space can be congruent with the potential solution space. For this, the GM needs to be able to generate all solutions which are represented in the potential solution space (case b). In that case, the variant space is bigger than the potential solution space and the identification of solutions is conducted by the EM. Complete overlapping of the solution space and the potential solution space can also be achieved if the GM generates design solutions only (case c). That means all three spaces contain the same design solutions and the EM becomes useless. The second scenario is when the spaces partly overlap. In one case (a), the partial overlap between the variant space and the potential solution space forms the solution space. That occurs when the GM generates numerous design variants but not all of them achieve the goal performance. Furthermore, the GM is not able to create all potential design solutions which are included in the potential solution space. Another case (d) is when the GM produces design solutions only, yet these are not all of the existing solutions in the potential solution space. Since only design solutions with the goal performance are created, the EM is not needed in this case. The third scenario describes a variant space and a potential solution space which do not overlap at all (case e). That indicates that none of the design variants achieve the goal performance, i.e. the GM is not able to generate design solutions according to the requirements.
Figure 6 Variant space, solution space and potential solution space based on Mitchell, 1977; Schneider, 2016, p. 31
The cases (b), (c) and (d) can occur when the design problem is very simple. Developing a GM which generates all potential solutions (b) can only be achieved if the number of form variables used in the GM is very low. Case (c) presupposes that design solutions can be derived from the requirements of a design problem by following a logical sequence of steps. However, building a GM that works like that is impossible due to the characteristics of ill-defined design problems.
Case (e) can be avoided in two ways. First, the requirements (EM) can be changed to move the potential solution space towards the variant space. Second, the GM can be altered to move the variant space. Since the reasons for the failure of the design process are not clear, the second way should be preferred (Schneider, 2016). In order to find as many solutions as possible for a design problem, both the variant space and the potential solution space need to be big. The bigger a variant space is, the higher is the chance that it holds at least one design solution.

Mitchell (1990) points out that the intelligence of a solution generation system lies either in the GM or in the EM. He illustrates both strategies as follows: "you can get acceptable results by combining smart designers with dumb critics, or by teaming smart critics with dumb but energetic designers" (Mitchell, 1990, p. 180). The latter strategy has the potential of finding creative solutions. At the same time, it holds the risk of an enormously large number of possible design variants. Searching through all of them cannot be conducted within a reasonable time frame. Heuristics can help to reduce that problem.
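The relationship between the three spaces can be restated in set terms: the solution space is the intersection of what the GM can generate and what the requirements accept. A deliberately tiny, hypothetical sketch with enumerable toy variants and invented predicates:

```python
# Toy example: variants are plain integers so the spaces can be enumerated.
variant_space = {v for v in range(100) if v % 2 == 0}            # what the GM can generate
potential_solution_space = {v for v in range(100) if v >= 90}    # what meets the requirements

solution_space = variant_space & potential_solution_space        # their overlap, cf. case (a)
print(sorted(solution_space))    # an empty intersection would correspond to case (e)
```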
Heuristics

Methods for finding solutions with limited knowledge within a reasonable time frame are called heuristics (Strube et al., 2000). Applying them to the design process can significantly increase the time efficiency of the search. Two types of heuristics are discussed in the following: the exclusion procedure and the heuristic search.

In the context of this thesis, the exclusion procedure is a method for finding design solutions through the limitation of the variant space. That can be conducted by applying generation rules within the GM. These rules result from generative knowledge and aim to exclude design variants which do not meet the requirements of the design problem. They are determined by subjective decisions of the planner and reflect his expectations of the possible solution space (Rittel, 1992). Generation rules can define relationships between form variables and numeric ranges of form variables, and set further limitations. One kind of these limitations is explained by Rittel (1992), who names them constraints. They help to eliminate solutions which are impossible to build in practice. Logical constraints (A and not A cannot be the case at the same time), physical constraints (laws of physics) and technical constraints (state of the art in technology) are a few examples of the numerous types of existing constraints. Generation rules can be, for instance: The building needs to keep a distance of at least 3 m from all site boundaries, has a ceiling height of 2.7 m and contains 1 to 2 circulation cores. A second circulation core is created when the building size exceeds 20 m x 18 m. Another possibility for limiting the variant space is to change the concept for the GM (Schneider, 2016). For example, the circulation core can be generated as a rectangle instead of creating a circulation core in the floorplan by placing four lines according to rules like "the distance between the first two parallel lines should be 4 to 5 m". That would reduce the number of form variables as well as the variant space. Generation rules can be useful for finding design solutions within short periods of time. Nevertheless, wrong or too restrictive rules can hinder the search process and minimize the solution space. The creation of a GM should not be restricted by too many assumptions about the possible design solutions because it is precisely the lack of knowledge that makes a design problem a problem (Rechenberg, 1994). That means, the less restrictive the GM is formulated, the higher is the chance of finding creative solutions.

Another approach for time-efficient solution finding is the heuristic search. Especially if the variant space is enormously large, searching through all variants is not possible in practice. Heuristics can aid to control the search process. They are able to neglect search paths with a low chance of leading to solutions and to choose paths with a high chance. This method can be exemplified with a game called 'blind man’s bluff'. The blindfolded person is searching for the target but does not know where it is located. The player receives hints as to whether he comes closer to the target or moves further away (Schneider, 2016). Similarly, heuristic search requires a measure of the distance between the actual performance of the variant and the goal performance. In general, two strategies to conduct heuristic search processes can be distinguished: the planner can either vary parameters in a parametric model manually and improve design variants iteratively, or apply computational methods (Hollberg, 2016). However, the first strategy is less objective since the planner mostly includes his own
assumptions and experiences about the design solution in the search. On the one hand, his personal knowledge is beneficial to the time efficiency of this manual process because he produces fewer variants which are very far from the design solution. On the other hand, chances are very high that he might miss out on creative solutions which he would not think of on his own. Moreover, the part of the variant space which he explores is very small. This increases the risk of not finding any solution that meets the performance goal. In contrast, computational methods employ GMs in conjunction with computational solvers for the automated generation of design variants. That facilitates a more comprehensive exploration of the variant space and increases the chance of finding creative design solutions.

There are numerous heuristic search strategies, such as Hill-Climbing, Simulated Annealing, Tabu Search or Evolutionary Algorithms (Beckstein, 2000; Luger, 2001). The methods most frequently used to solve design problems are based on the concept of biological evolution. Since they do not depend on steady or unimodal variant spaces, these methods have proved to be very flexible and efficient search algorithms (Goldberg, 1989). In this context, steady means that performance values change continuously within a variant space; in other words, variants which can be regarded as similar should also have similar performance values. Unimodal means that exactly one best solution exists and the more similar the variants are to this solution, the better is their performance. In building design, variant spaces cannot be steady or unimodal because even small alterations to the building geometry can change the performance values significantly. Furthermore, two completely different design variants can have similar performances (Schneider, 2016). The two main methods based on the concept of biological evolution are Genetic Algorithms (Holland, 1973) and Evolution Strategy (Rechenberg, 1994). Both methods differ in their way of implementation, but not in their basic concepts. Therefore, in the presented thesis they are subsumed under Evolutionary Algorithms (EA). EA are also referred to as optimization strategies. They aim to find optimal solutions, which are design variants that meet the performance goal. Accordingly, Radford & Gero (1988, p. 20) comment that "in optimization, the computer is used to prescribe a set of decisions in order to achieve a specified goal as closely as possible." In the following, the presented thesis focuses on EA because these are used for optimization in the conducted case study (see chapter 3).
EA mimic biological evolution, namely the process of natural selection and the ´survival of the fittest` principle. As they require only little knowledge about the problem, they are well suited for complex problems. The main components of EA are the individual, the population, the fitness, the generation, the generation mechanism (GM), the evaluation mechanism (EM), variation mechanisms such as recombination and mutation, plus the selection mechanism (Pohlheim, 2000). As explained previously, a GM generates variants according to a set of rules, which defines a variant space. An individual describes a variant of the variant space. A population is a set of individuals. The evaluation of individuals is conducted by an EM and the result for each individual is called fitness. It is a measure of the distance to the goal performance (0 = optimal solution) (Schneider, 2016). The basic sequence of an optimization process by means of EA according to Bäck (2000) can be described as follows: (1) The optimization process starts by generating an initial population which contains a certain number of individuals randomly produced by the GM (see Figure 7). These individuals form the first generation. (2) The individuals are recombined by swapping components with one another. (3) Next, a mutation mechanism applies smaller-scale changes to the individuals. (4) The resulting individuals are evaluated by the EM and assigned a fitness value. (5) Individuals with a poorer fitness are eliminated by the selection mechanism. This process is repeated from step (2) until an optimum solution is found or a termination condition is reached, such as a maximum number of generations or a time limit.
Figure 7 Basic sequence of an EA optimization process (translated from Pohlheim, 2000, p. 9)
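To make the sequence above concrete, the following minimal sketch (plain Python, not the optimizer used in the case study) reduces a design variant to a single number between 0 and 1 and uses an invented goal performance; it only illustrates the loop of generation, recombination, mutation, evaluation and selection.

```python
import random

def generate_individual():
    # GM: a variant is reduced to a single parameter in [0, 1] for illustration
    return random.random()

def evaluate(x):
    # EM: fitness as distance to an assumed goal performance (0 = optimal)
    goal = 0.73
    return abs(x - goal)

def recombine(a, b):
    # swap/average components of two parent individuals
    return (a + b) / 2.0

def mutate(x, rate=0.1):
    # small random change, kept inside the variant space
    return min(1.0, max(0.0, x + random.uniform(-rate, rate)))

def evolve(pop_size=20, generations=100):
    # (1) initial population produced by the GM
    population = [generate_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # (2) recombination and (3) mutation produce offspring
        offspring = [mutate(recombine(random.choice(population),
                                      random.choice(population)))
                     for _ in range(pop_size)]
        # (4) evaluation by the EM
        scored = sorted(population + offspring, key=evaluate)
        # (5) selection: individuals with poorer fitness are eliminated
        population = scored[:pop_size]
        if evaluate(population[0]) < 1e-6:   # termination condition
            break
    return population[0]

print(evolve())
```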
EA are able to produce different solutions with each run of the optimization. However, if they keep delivering similar solutions over repeated runs, the planner needs to make assumptions about the possible reasons for that scenario. One explanation may be that for this particular problem only similar solutions exist, which is quite unlikely for design problems. Another reason can be a too restrictively formulated GM. In this case, the designer can alter it and restart the optimization. A third option is to change the parameters of the optimization process, such as the recombination rate, the mutation rate, the number of individuals in the population, etc. Most architectural problems are multi-dimensional optimization problems (Coello & Christiansen, 1999). Their numerous performance criteria can influence each other, making a separate optimization impractical. Optimization for more than one performance criterion can be conducted using different approaches. The simplest way is to form a sum of all criteria. Beforehand, the performance value of each criterion must be scaled to the same numeric range (e.g. 0 to 1) and weighted according to its significance to the design problem. Criteria that contradict the others need to be multiplied by -1 to express the contradiction. That reflects the multi-dimensional character of design problems, where one criterion cannot be improved without having a negative impact on at least one other criterion. The result is one fitness value which can be set up either to aim at reaching a specific fitness goal (performance goal), or to be maximized or minimized depending on the design problem. Examples of this kind of fitness function are demonstrated in the case study (see chapter 3). Another possibility for treating design problems with contradicting criteria is to apply evolutionary multi-criteria optimizers. When the solutions of an optimization with two criteria are plotted in a graph, a curve called the Pareto-front can be observed. This approach can also be employed for more than two criteria. Radford & Gero (1988, p. 11) state accordingly: “A feasible solution to a multi-criterial problem is Pareto-optimal if no other feasible solution exists that will yield an improvement in one criterion without causing a degradation in at least one other criterion.” More information on that approach can be found in Deb (2001) and Coello & Christiansen (1999).
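One possible reading of the weighted-sum approach described above is sketched below in plain Python. The criteria names, weights and value ranges are invented for illustration; each criterion is scaled to the range 0 to 1, weighted, and criteria whose improvement direction contradicts the others are multiplied by -1 before summation.

```python
def scale(value, lo, hi):
    # map a raw performance value to the range 0..1
    return (value - lo) / (hi - lo)

def weighted_sum_fitness(raw, ranges, weights, signs):
    """Combine several performance criteria into one fitness value.

    raw     : raw performance value per criterion (invented example values)
    ranges  : (min, max) used for scaling each criterion to 0..1
    weights : weights reflecting the significance of each criterion
    signs   : +1 / -1; -1 marks a criterion whose improvement direction
              contradicts the others
    """
    return sum(signs[c] * weights[c] * scale(raw[c], *ranges[c]) for c in raw)

# invented example: heating demand should be small, daylight availability large
raw     = {"heating_demand": 55.0, "daylight": 0.42}
ranges  = {"heating_demand": (20.0, 120.0), "daylight": (0.0, 1.0)}
weights = {"heating_demand": 0.6, "daylight": 0.4}
signs   = {"heating_demand": +1, "daylight": -1}   # daylight is to be maximized

print(weighted_sum_fitness(raw, ranges, weights, signs))  # value to be minimized
```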
When employing heuristic search processes such as EA, the optimization stops when a termination condition is reached, e.g. a runtime limit. However, the planner cannot be sure whether an optimum solution has been found. He can only trust that the EA had enough time to find a solution close to one optimum (goal performance). The application of EA cannot guarantee finding an optimal solution, yet it might find one or multiple solutions which can be regarded as good enough (satisfying) from the planner's point of view. Currently, EA are the only computational method to solve ill-defined design problems (Schneider, 2016). Furthermore, Radford & Gero (1988, p. 25f) note: “The disadvantage of optimization lies in the inherent difficulty of formulating meaningful quantifiable objectives [performance criteria] in a discipline characterized by multiple and ill-defined objectives.” It is the planner's obligation to define GMs and evaluation criteria that represent the design problem in an appropriate way in order to find optimal solutions. Numerous studies exist which demonstrate the efficiency of EA. Nonetheless, they tend to optimize only parts of the building design, such as fenestration. In the following, two examples of optimization processes in architecture are briefly presented.
´Floor shape optimization for green building design` Wang, Rivard & Zmeureanu (2006) Wang, Rivard & Zmeureanu (2006) conducted a case study where they optimized footprint and fenestration of an office building in order to improve the Life Cycle Performance (LCP) of the building. The LCP describes the environmental impact of a building during its lifespan (see chapter 2.3). The building footprint is represented by a multi-sided polygon where the angles and side lengths of the shape can be varied to generate different configurations while the building area remains the same. A building’s shape influences the heat loss, i.e. the bigger the envelope area, the bigger the heat loss. Furthermore, the glazing area can be manipulated separately for each side of the footprint. This facilitates the optimization of daylight availability in the building according to different orientation. As explained previously (see chapter 2.2), windows are multi-dimensional building components which enable daylight accessibility and cause heat loss at the same time. Both, fenestration and the building shape are of major importance to the energy demand of buildings which
is one main aspect included in the LCP. A multi-criterial optimization using Genetic Algorithms is conducted and a few solutions from the Pareto-front are shown in Figure 8. Solution (ID=1) demonstrates one extreme where the building shape is kept compact and the window area very small, causing minimal heat loss. In contrast, solution (ID=12) is less compact and has a large window area on the south façade, which results in high daylight availability in the interior but also in increased heat loss.
Figure 8 Building footprints of selected solutions from the Pareto-front (Wang et al., 2006, p. 375)
´Multi-objective Optimization of Building Geometry for Energy Consumption and View Quality` Conti, Shepherd & Richens (2015) Another example of multi-criterial optimization of building geometry and fenestration is presented by Conti, Shepherd & Richens (2015). The aim is to minimize the energy consumption for cooling an apartment building and to maximize the view quality visible through the windows by means of EA. This research focuses on improving both criteria purely through self-shading geometry. Since external shading devices can have a negative impact on the view, the effects of casting shadows on the windows below and of tilting the façade about its horizontal axis are explored. In order to determine the view quality, a scoring system was developed. The GM describes a terraced building with multiple floors where the tilts of the façade on two sides as well as the number and size of the windows can be varied. Two cases are investigated: one with the desirable view to the north and the other to the south. Figure 9 shows the results for both cases. It can be observed that increasing the angle of solar incidence by tilting the façade has less of an effect than casting shadows on the windows. However, the window size has the greatest impact on the optimization results. Displaying optimization results as a Pareto-front is especially useful for understanding trade-offs between conflicting optimization criteria and how these affect the design.
Figure 9 Building designs for the desirable view to the north and to the south (Conti et al., 2015, p. 293)
Further examples of window optimization can be found in the work of Caldas & Norford (2002) and Wright et al. (2013). One example of the optimization of building massing in relation to envelope area and daylight accessibility is the case study of Dillenburger et al. (2009). A study which combines massing and window optimization in relation to daylight analysis was conducted by Konis et al. (2016). In the following paragraph, the heuristic search strategy, the GM and the EM are assembled into one workflow in order to support problem-solving.
Schematic workflow of a generative design system
Figure 10 shows a scheme of a problem-solving process in relation to the planner. The three main components of this process (design system) are the generation mechanism (GM), the evaluation mechanism (EM) and the heuristic search strategy (e.g. Evolutionary Algorithms). The GM produces design variants according to generation rules, which are evaluated by the EM based on performance criteria. Decisions on how to produce design variants for finding a solution, and on when a performance goal or a termination condition is reached, are made by the heuristic search strategy. Automation of that process allows a directed search for design solutions (Mitchell, 1975). Such an approach is also called Performance-Based Design (Oxman, 2009). In the presented thesis, the automated problem-solving process is named generative design system (GDS) in reference to Eckert et al. (1999). As soon as a solution is found, it is displayed and the planner can decide whether this solution is satisfying. If it is not, he can either let the design system search further, or he can reconsider the performance criteria and/or the generation rules. Lawson (2006, p. 298) states in this context: “design characteristically also involves making judgments between alternatives [alternative design solutions] that cannot be reduced to a common metric, and therefore require the subjective evaluation of the designer.” Ideally, the process of solution generation would work without any interruptions, which would enable the planner to evaluate the solutions in real time. He could then see how the rules and criteria he defines influence the design solutions and gain knowledge about the problem with ongoing iterations (Donath et al., 2012). Rittel (1992, p. 82) describes this process as the creation of an ´image` of the design problem. The better the planner understands the problem, the clearer the path to finding an optimal solution becomes.
Figure 10 Schematic workflow of a generative design system in relation to the planner based on Schneider (2016, p. 37)
Evaluation criteria and generation rules are developed from an abstract problem statement, which is the subjective interpretation of the design problem. The abstract problem statement is not the same as the initial problem statement, but rather an abstraction of it. That enables the translation of the design problem into a form which can be processed by the computational system (programming language, algorithms) (Schneider, 2016). The potential solution space contains all design solutions according to the abstract problem statement. Consequently, there must also be a space which includes all possible solutions to the initial problem statement. In this thesis, it is named the complete solution space. This space is usually bigger than the potential solution space because it is not restricted by the numeric form of computational systems. Ideally, however, both spaces would have the same size. Therefore, Figure 6 can be extended by the complete solution space (Figure 11).
Figure 11 Complete solution space, potential solution space, solution space and variant space based on Schneider (2016, p. 54)
Parametric Design
In order to create a generative design system (GDS), a computational method is needed which can quickly produce design variants according to a set of generation rules (GM). Parametric design, which became popular in the last decade, is such a method. It is not a new idea, but automation and software availability made it easier to pursue (Johnson et al., 1984). The basic idea of parametric design is to describe a geometry using parametric equations, which are defined as a “set of equations that express a set of quantities as explicit functions of a number of independent variables, known as ´parameters`” (Stover & Weisstein, 2016). Visual programming, in particular, provides an interface where the designer is able to interact graphically with program components instead of typing lines of text code. Additionally, he can code his own components and include them in the interface. Grasshopper, which runs within Rhinoceros 3D, is a visual programming tool commonly used in the building industry (Konis et al., 2016). In this thesis, it is employed in the case study (see chapter 3). Grasshopper provides components which are virtually wired together into algorithms (parametric equations). The wires between components can be understood as the relationships between the form variables. Algorithms contain generation rules for
the variant production. In that way, geometry can be created and parametrically varied in real time. The same applies to non-geometric aspects, such as material properties. Parameters enable the manipulation of algorithms through ´sliding` the number sliders. With every alteration of a parameter, the algorithm that it applies to is recomputed and a newly generated design variant is provided (Woodbury, 2010). That generation mechanism (GM) can be called a parametric model. It is defined by an algorithm and can be connected to an evaluation mechanism (EM) to calculate the performance of design variants. An EM can involve simulation-based environmental analysis tools, such as Ladybug, Archsim and DIVA (McNeel, 2016). Furthermore, the designer can create custom evaluation algorithms. Grasshopper also contains optimization components that use Evolutionary Algorithms (EA), e.g. Galapagos. In conventional architectural design with 2D CAD programs, applying changes to the design can be very time-consuming, which may induce high costs. Parametric design allows changes to be applied to the design within a short time, e.g. by changing parameter values. The ability to adapt to changes can be called flexibility. It is of great benefit for keeping the costs of design changes very low, ideally as close to zero as possible (Teresko, 1993). Another main advantage of parametric design is the possibility to easily generate numerous different design variants. Simon (1996) states that the logic of design is concerned with finding alternative solutions and making a rational choice between them. Further, Lawson (1994) reports that many planners express the need to generate and evaluate alternative design solutions. Parametric design can serve both needs. Once a parametric model is defined, the creation of design variants is quick and nearly effortless. In order to also facilitate time-efficient optimization, the methods utilized for variant evaluation need to be quick as well (Hollberg, 2016). That, however, is not always the case with simulation-based environmental analysis tools since they tend to require more time for their calculations than the parametric model needs for variant generation.
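The relationship between parameters and a recomputed design variant can be illustrated with a small sketch. In Grasshopper this is done visually with components and sliders; the plain-Python stand-in below (a deliberately simple box-shaped building with invented dimensions) only shows the principle that changing one parameter immediately recomputes all dependent quantities.

```python
from dataclasses import dataclass

@dataclass
class BoxBuilding:
    """A deliberately simple 'parametric model': a box-shaped building."""
    width: float          # x-dimension in m
    depth: float          # y-dimension in m
    floors: int
    floor_height: float = 3.0

    @property
    def footprint_area(self):
        return self.width * self.depth

    @property
    def gross_floor_area(self):
        return self.footprint_area * self.floors

    @property
    def envelope_area(self):
        # four facades plus the roof; the ground slab is ignored for brevity
        height = self.floors * self.floor_height
        return 2 * (self.width + self.depth) * height + self.footprint_area

# 'sliding' a parameter recomputes the dependent quantities immediately
for width in (10.0, 12.0, 14.0):
    variant = BoxBuilding(width=width, depth=15.0, floors=3)
    print(width, variant.gross_floor_area, round(variant.envelope_area, 1))
```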
Variety, flexibility and relevance
In order to support the design process, a GDS needs to fulfill the criteria of variety, flexibility and relevance (Schneider, 2016). First, a GDS should be able to explore a vast variant space to increase the chance of finding design solutions that meet the goal performance (Radford & Gero, 1985) and to discover solutions the planner might not have thought of himself (Eckert et al., 1999). If a GDS keeps delivering only similar solutions for one design problem, although alternative solutions obviously exist, it can be classified as not very creative (Schneider, 2016). Consequently, the more different solutions with a similar performance a GDS can produce, the more creative it is. For achieving a high level of creativity, a GDS needs to be able to generate a vast number of design variants. However, the bigger a variant space is, the less likely it is that it can be explored in a reasonable time frame. Heuristics can help to reduce that problem, either by employing heuristic search strategies, such as EA, or by applying generation rules to the GM in order to eliminate unsuitable design variants. The latter strategy carries the risk of the rules being too restrictive and therefore of missing out on good solutions. In this case, the variant space does not contain all the solutions from the potential solution space. It is also possible that the variant space and the potential solution space do not overlap at all, which means that no solution can be found in the variant space. Radford & Gero (1985) illustrate the latter case with the analogy of a man who has lost his keys and searches for them at night under a lantern's cone of light: he searches there not because he lost the keys there, but because that is where the light is. Second, a GDS should offer the planner numerous possibilities to formulate the design problem according to the knowledge he has about it. This ability makes a GDS flexible (Schneider, 2016). Parametric design enables the designer to define the design problem in numerous ways as long as it can be expressed numerically. A design problem can be described at different levels of abstraction. For instance, at a high level of abstraction, a building can be described as a low-energy house. In contrast, describing a building by an insulation thickness of 20 cm represents a low level of abstraction. During the design process, the planner's understanding of the design problem can increase and he might reformulate some aspects to be more precise. Furthermore, different parts of a design problem can be described on
different scale levels. For example, the configuration of several buildings is situated at a higher scale level than a single floor within a building. Third, a GDS needs to provide numerous possibilities to evaluate the generated design variants. It should assess how relevant a variant is to solving the problem. The criteria of variety, flexibility and relevance explained above are important for a GDS since it has to support the design process. The case study presented in this thesis (see chapter 3) incorporates this knowledge as well as the other information explained previously.
2.3 Life Cycle Assessment of buildings
The case study uses Life Cycle Assessment (LCA) as an example for design variant performance evaluation. This chapter explains what LCA of buildings is and why considering it in the early stages of a design process is important. Furthermore, two methods for calculating the operational energy demand are presented and difficulties that architects face in the application of LCA are discussed.
The building sector has a significant impact on the environment, as it accounts for more than 40% of the world's primary energy demand and one third of greenhouse gas emissions (UNEP-SBCI, 2009). Approximately 50% of the world's processed raw materials are used for construction (Hegger et al., 2007). In order to decrease the energy demand of buildings, most industrial countries have introduced regulations on energy efficiency. Over the last 40 years, this has led to a successful reduction of the operational energy demand and of the resulting operational environmental impact of buildings (Hegger et al., 2012). Starting in 2021, the European Directive on the Energy Performance of Buildings requires that all new buildings be constructed as so-called Nearly Zero Energy Buildings with an operational energy demand close to zero (EU, 2010). Consequently, the embodied energy will gain even more importance. Embodied energy includes the energy used for production,
refurbishment and disposal of buildings as well as the environmental impact resulting from that (Hollberg, 2016). Life Cycle Assessment (LCA) takes both the operational and the embodied energy over the building's whole life cycle into consideration. It can be defined as a “compilation and evaluation of the inputs, outputs and the potential environmental impacts of a product system throughout its life cycle” (ISO 14040, 2009, p. 11). Resources, energy, pre-products or auxiliary materials can be regarded as inputs. Typical outputs are waste, by-products or emissions to the air, water or soil (Hollberg, 2016). Especially in the scientific context, LCA is widely accepted and becoming more and more important for the evaluation of buildings (Weißenberger et al., 2014). In architectural practice, however, it is rarely applied because most building regulations require only the evaluation of the operational energy demand (Szalay & Zöld, 2007). That makes the evaluation of the embodied energy voluntary. Only a few building certification systems include an LCA. Nevertheless, LCA is currently the most suitable method for utilization in architectural design, mainly because it is the only internationally standardized method for evaluating environmental sustainability (Klöpffer & Grahl, 2009). The European regulations for the LCA of buildings (EN 15978:2012 and EN 15804:2012) divide a building's life cycle into five main modules: production, construction, use, end of life, and benefits beyond the system boundaries. The use phase of buildings can range from 30 to over 100 years, which makes the calculation of the operational energy demand significant. There are two strategies for determining a building's operational energy demand: dynamic building performance simulation (DBPS) and quasi-steady state methods (QSSM). Both strategies require a boundary for the energy balance which corresponds to the thermal envelope of the building. In the heating demand calculation, the heat sinks and usable heat sources are balanced within the boundary and a defined time step. For the cooling demand, excess heat sources are balanced. Building services need to be able to cover the resulting heating and cooling demand. Although DBPS and QSSM follow a similar calculation principle, their main difference lies in the time steps in which the energy balance is established. DBPS usually uses time steps between 1 minute and 1 hour, which enables detailed heating and cooling simulation. Consequently, the computation time is relatively long, ranging between 20 seconds and 5 minutes on a standard computer. In
contrast, QSSM is much quicker, with a computation time of only 0.1 to 5 seconds. The reasons for this are the monthly time steps and the consideration of the heat stored in the building via a global factor. This simplification implies that some aspects are neglected and complex interactions cannot be represented. Due to its time-efficiency, QSSM allows for the optimization of simple systems, such as residential buildings (Hollberg, 2016). According to van Dijk et al. (2006), the results from QSSM are accurate enough for the application to residential buildings in warm, moderate and cold climates. When LCA is conducted in practice, usually three participants are involved in a cascaded workflow. The architect produces plans of the building he designed and passes them on to the energy consultant, who calculates the operational energy demand. Next, the LCA practitioner carries out the LCA based on the operational energy demand and the bill of quantities he received from the architect (see Figure 12). Since the energy consultant and the LCA practitioner focus solely on their individual work, the interrelation between the operational and the embodied impact is lost. Sub-optimal solutions can be the result if trade-off effects are not considered.
Figure 12 Conventional process for LCA in architectural practice (Hollberg, 2016, p. 3)
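The idea of a quasi-steady-state balance can be hinted at with a strongly simplified sketch. The monthly outdoor temperatures, the overall heat transfer coefficient and the constant gain utilization factor below are invented placeholder values, and solar gains are not resolved by orientation; the sketch is not the calculation procedure used in the case study.

```python
# Simplified monthly quasi-steady-state heating balance (illustrative sketch).
# Transmission and ventilation losses and internal plus solar gains are
# balanced per month; a constant utilization factor stands in for the heat
# stored in the building (assumed value, not calibrated).

HOURS = [744, 672, 744, 720, 744, 720, 744, 744, 720, 744, 720, 744]
T_EXT = [0.5, 1.5, 5.0, 9.5, 14.5, 17.5, 19.0, 18.5, 14.5, 9.5, 4.5, 1.5]  # °C, invented

def annual_heating_demand(h_transfer_w_per_k, gains_w, t_int=20.0, eta_gain=0.95):
    """Return the annual heating demand in kWh.

    h_transfer_w_per_k : overall transmission + ventilation coefficient [W/K]
    gains_w            : average internal + solar gains [W]
    """
    demand = 0.0
    for hours, t_ext in zip(HOURS, T_EXT):
        losses = h_transfer_w_per_k * (t_int - t_ext) * hours / 1000.0  # kWh per month
        gains = gains_w * hours / 1000.0                                # kWh per month
        demand += max(0.0, losses - eta_gain * gains)
    return demand

print(round(annual_heating_demand(h_transfer_w_per_k=180.0, gains_w=900.0)))
```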
One reason why LCA is barely used in practice is the manual and time-consuming input of the data which is needed for the calculations. In regard to the short deadlines that architects are facing, time-efficiency is a crucial factor when involving
LCA into the design process. Typically, due to the cascaded workflow, the architect has to wait a few days for the LCA results to arrive (Hollberg, 2016). In consequence, only one or very few design variants are assessed, at a late stage of the design and primarily to verify decisions already made. However, the evaluation of the design through LCA is not sufficient on its own if it does not improve the design (Wittstock et al., 2009). Rather, the calculation results should be used as feedback about the design variant in order to support decision-making during the design process (Hopfe & Hensen, 2009). A measure of the resulting environmental performance of variants during their whole life cycle is the Life Cycle Performance (LCP). By means of the LCP, different design variants can be compared, which is vital for optimization processes. Due to the previously mentioned disadvantages of the cascaded workflow, early design decisions regarding energy efficiency are often based solely on reference projects and the designer's own experience (Altavilla et al., 2004). Especially when the planner is not very familiar with the local climate conditions, it is likely that the resulting design does not exploit the potential of the climate. Additionally, adjacent buildings can create complex overshadowing conditions which, in connection with the local climate, often require more than the architect's intuition to find an optimal solution. Interactions between design components can become very complex, which makes it difficult for the architect to take all of them into consideration. In those cases, the planner's intuition and experience can lead to sub-optimal or even counter-productive designs (Konis et al., 2016). Optimization can support the decision-making process where rapid design variant exploration is essential (Gerber & Lin, 2013). In general, optimization can best be achieved in the early design stages (Steinmann, 1997), where the decisions made have the biggest influence on the building's operational energy demand (Hegger et al., 2007) and the resulting operational and embodied environmental impacts (Schneider, 2011). Approximately 20% of design decisions are made in the early design stages, namely stages 1 and 2 (see Figure 13). These decisions account for an estimated 80% of the total impact on the overall building energy performance (Bogenstätter, 2000), which highlights their importance. Furthermore, changes to the design induce the smallest costs in the early design stages compared to subsequent stages (Paulson Jr., 1976).
Figure 13 ´Paulson curve` (Paulson Jr., 1976, p. 588)
The dilemma with conducting LCA in the early stages is that the exact bill of quantities and the product-specific information required for the calculations are only available after stage 4, where the technical details are defined. At that stage, the LCA results are less useful because the implementation of significant changes to the design would be too costly (Baitz et al., 2012). Consequently, the results are not used for design optimization (Hollberg, 2016). Nevertheless, due to the growing global concern for sustainability in the building industry, planners are increasingly interested in obtaining rapid and iterative performance feedback on decisions in the early design stages. In order to achieve better building performance, architects should go beyond the incremental improvement of mechanical systems, such as HVAC and electrical lighting, which mostly happens in the late stages of the design process. Making optimal use of the environmental services provided by natural systems is a lot more effective, e.g. shaping the building and determining the window layout according to the solar conditions on site. Therefore, computational methods are needed that enable planners to examine and optimize the application of passive environmental strategies (Konis et al., 2016). However, there is a lack of such methods in practice because most software packages are specialized in one or only a few aspects, e.g. daylight simulation. Usually, multiple programs are applied in one design process, which is not very time-efficient because of the time that goes into the
export and import of data. This fragmented approach hinders the exploration of connections and relationships between different aspects of buildings. Ideally, one software environment would provide the possibility to generate and optimize design variants with regard to their overall environmental impact in the early stages of the design process. Grasshopper, as a parametric program, allows the application of different add-ons for simulation purposes and offers components for optimization. Paired with an algorithm that conducts the LCA, this approach can support the decision-making process. Such a parametric method is developed in the case study.
3 Case study
In this chapter, the outline of the case study as well as the investigated design sequences are introduced. The creation of the generative design system used for the case study is explained. This includes the generation and the evaluation mechanism in conjunction with the optimization process. Furthermore, the visualization method for the generated design solutions is described.
3.1 Parametric model generation
The design problem is framed in this section. Furthermore, the structure and the different components of the GM are addressed. Relationships between the components are described and generation rules are defined. This case study does not focus on the architectural design of the generated solutions but rather on the theoretical knowledge that can be derived from them according to the questions posed in the introduction. The design problem of this case study is outlined as follows: On a rectangular site, a configuration of two to four residential buildings with an optimal LCP (see chapter 2.3) is to be designed. This configuration should provide a gross floor area of 2500 m². Circulation cores, preferably located in shaded areas of the buildings, are to be included. The design problem involves making optimal use of the available daylight by taking into account the shading conditions on site as well as the overshadowing that the buildings themselves create. Furthermore, the shape, volume and position of the buildings need to be considered in order to achieve the best LCP results. A set of potential construction types is provided.
Site
A GM can be created in different ways with regard to the same design problem. In this case study, the GM is a parametric model visually programmed using the software Grasshopper for Rhinoceros 3D. First, the site needs to be defined. The floor area ratio (FAR), which is the ratio of a building's gross floor area to the site
area, is set to 0.6. That is comparatively low compared to urban environments like city centers, where the FAR can go far beyond 1.5. Typically, a FAR of 0.6 can be found in residential areas with landscape character (Heicke et al., 2007). However, in this case study, the decision of which FAR to choose is not driven by the urban context but rather by the aims of the research. For generating a variety of different design variants, it is important that the buildings can be arranged as freely as possible on the site. Variety facilitates the identification of tendencies, such as building distances and locations. A FAR higher than 0.6 would lead to a more restricted positioning caused by a bigger gross floor area or by a smaller site area. As a consequence, less variety of design variants would be generated. The rectangular site is 75.00 m long and 55.56 m wide, which results in a site area of 4167 m². Applying the FAR of 0.6 to the site delivers a gross floor area of 2500 m² for all buildings. Every design variant (configuration of buildings) which is produced by the GM should have this gross floor area, i.e. it is a constant value. In order to incorporate possible site-specific regulations, a set-back of 3 m from all site boundaries is added. No building can be positioned beyond the required set-back lines. In practice, such regulations can serve fire protection between buildings or the aesthetic appearance in the context of urban planning.
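The few numbers defining the site can be restated as a short calculation (plain Python; the values are taken from the text, the variable names are chosen only for illustration):

```python
# Site definition of the case study, expressed as a small calculation.
site_length, site_width = 75.00, 55.56      # m
setback = 3.0                               # m on all sides
far = 0.6                                   # floor area ratio

site_area = site_length * site_width                      # approx. 4167 m²
gross_floor_area = far * site_area                        # approx. 2500 m² target
buildable_length = site_length - 2 * setback              # 69.00 m
buildable_width = site_width - 2 * setback                # 49.56 m

print(round(site_area), round(gross_floor_area), buildable_length, buildable_width)
```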
Surrounding buildings
Next, the surrounding buildings are created. These frame the plot on the east, north and west side (see Figure 14). All of them differ in height, which induces dissimilar shading conditions on three sides of the site. The same set-back of 3 m is applied to the plots of the surrounding buildings. Their dimensions match the buildable area defined by the set-back in order to achieve fairly even shading on each side. That should facilitate the analysis of the design solutions with regard to preferred and avoided shading conditions produced by surrounding buildings. For instance, if one side offers extensive shading from a tall building, the buildings on the plot might be placed at a greater distance from this side in order to avoid or at least minimize the shaded area. The opposite effect applies if no buildings or other obstacles like trees cause shading. In this case, the buildings might preferably be positioned near those areas. Such an area is the south side of the plot. On this side,
a street is located. The shading conditions are also affected by the orientation of the plot. That is the reason why the surrounding buildings in the east and in the north have the same height but cast different shadows on the site, as the solar radiation analysis in Figure 14 illustrates.
Figure 14 Shading conditions on the site caused by surrounding buildings
Individual building areas
The next step is the division of the gross floor area (also referred to as building area in this thesis) of 2500 m² for the complete building configuration into individual building areas. For the creation of different design variants by the optimization component later on, this division needs to be parametrically controlled by sliders. Therefore, it is important to determine the minimum and the maximum area a building can have. In this case study, the minimum apartment size is set to 66 m² and one floor has to accommodate at least two apartments. Additionally, circulation space of at least 16 m² is required. Every building can have two to four floors, which leads to a sum of 296 m² for two floors. Areas which are not located on each floor, such as service room area, entrance area, etc., are taken into consideration with a minimum of 8 m². For a building with two stories, this results in a minimum building area of 304 m². The biggest building should accommodate 20 apartments, three circulation cores of 16 m² each and additional areas, such as service area, hallways, etc. All these areas add up to a maximum building area of 1400 m². It is
important to mention that these numbers are rough estimations of the areas. Since the divisions of the building areas are controlled parametrically, the individual building areas vary between the lower threshold of 304 m² and the upper threshold of 1400 m². In order to facilitate the optimization process, the numeric range of the parameters needs to be scaled. When dealing with large areas like 2500 m², the level of detail in the numeric ranges of the parameters (sliders) should be adjusted. That means a step size of 1, corresponding to 1 m², is too detailed because such a small change would barely affect the geometry. Furthermore, the number of possible areas that this step size would create is too high: 1096 variants within the range of 304 m² to 1400 m². Given that the same range applies to each of the four possible buildings (4 x 1096), the variant space would become unnecessarily large, which reduces the chance of finding design solutions in a reasonable amount of time. Therefore, the initial range is scaled down to a range of 0.00 to 1.00, which holds 100 steps. One step induces a change in area of approximately 11 m². The following example demonstrates this range scaling process: a parameter with the selected value of 0.35 corresponds to a building area of 687.6 m². When the gross floor area for each individual building is identified, as shown in the example above, all of these areas need to be added up. If the sum is bigger or smaller than 2500 m², another scaling process applies. The area of 2500 m² is divided by the sum, and the resulting value is the multiplication factor for the previously determined individual gross floor areas of the buildings. All gross floor areas are multiplied by this factor and the resulting new sum is 2500 m² (see Table 1).
Table 1 Example for a value scaling process for a configuration of 3 buildings

Parameter value    Individual building area    Multiplied building area
0.12               435.52 m²                   453.18 m²
0.34               676.64 m²                   704.08 m²
0.90               1290.40 m²                  1342.73 m²

Sum of the building areas: 2402.56 m²
Division factor: 2500 m² / 2402.56 m² = 1.04
New sum of the building areas: 2500 m²
However, it is possible that through this scaling process some values fall outside the thresholds of 304 m² and 1400 m². If that is the case, the value which is too small or too big is replaced by the corresponding threshold value. The difference between the resulting new sum of the gross floor areas and 2500 m² is divided by the number of buildings which remained within the thresholds and is added to or subtracted from their areas, ensuring that all new values stay within the thresholds.
Building floors
Once the individual building areas are defined, the next step is to assign a number of floors to each building. The number of floors is another form variable which is controlled parametrically (sliders). At this point, it is not enough to determine a threshold of two to four floors. One scenario could be that a building gets assigned four floors, but when its gross floor area is divided by four, the area of each floor may be smaller than the minimum floor area defined previously. In order to avoid this scenario, the maximum floor number of each building needs to be identified. Therefore, the individual building area is divided by the minimum floor area, and the result is rounded down. Consequently, if a parameter delivers a higher floor number than the maximum floor number of the building, the smaller value of the two is taken as the final floor number. One example of this scenario is as follows: a building area of 453.18 m² is divided by the minimum floor area of 148 m². The result of 3.06 is rounded down to three floors, which is the maximum floor number for this building. If the parameter sets a floor number of four, the maximum floor number of three will be applied because it is the smaller of the two.
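The floor-number rule can be condensed into a few lines (plain Python; the minimum floor area of 148 m² follows the worked example above):

```python
import math

MIN_FLOOR_AREA = 148.0   # m² per floor (two 66 m² apartments plus a 16 m² core)

def floor_count(building_area, requested_floors):
    """Limit the parametrically chosen floor number (2 to 4) by the maximum
    floor number that still satisfies the minimum floor area."""
    max_floors = math.floor(building_area / MIN_FLOOR_AREA)
    return min(requested_floors, max_floors)

# example from the text: 453.18 / 148 = 3.06 -> at most three floors
print(floor_count(453.18, 4))   # -> 3
```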
Building dimensions
After the floor numbers, the building dimensions can be determined. The area of each building is divided by its number of floors, which delivers the area per floor. Since the area of a rectangular footprint is the product of the x-dimension and the y-dimension, it is sufficient to define only one dimension if the floor area is known; the other dimension is then automatically defined. Following this logic, parameters are needed for only one dimension to manipulate the footprint and generate design variants. Therefore, the x-dimension of each building is set up as a parameter. Thresholds for the building dimensions are important for generating buildings which fit into the buildable area on site. To ensure this, the maximum x-dimension is set to 69.00 m (site length of 75.00 m minus the set-back of 3.00 m on each side). The same calculation applies to the y-dimension, which is set to a maximum of 49.56 m. The lower threshold is 8.00 m, which is necessary to avoid extremely narrow building footprints. To make best use of passive design strategies, such as daylighting, one of the two building dimensions should not exceed the threshold of 15 m. The second dimension can be bigger or smaller but must remain within the threshold of 69.00 m/49.56 m. This generation rule ensures that very long and wide buildings are not generated. Without this rule, buildings would be created where a large part of the floor area is not accessible by daylight. That would require more artificial lighting, which increases the energy demand of the building. A scaling process for the parameter's numeric range defined by the thresholds is applied. It is the same range of 0.00 to 1.00 that was used to scale the individual building areas. Accordingly, the range of 8.00 m to 69.00 m is scaled to 0.00 to 1.00 in order to achieve an appropriate step size of 0.61 m, because a step size of 1 m is slightly too big for the scale of building footprints. As soon as a value from the new range is parametrically selected and the corresponding x-dimension in meters identified, the floor area (also referred to as footprint area) can be divided by it. Three main scenarios have to be differentiated in relation to the resulting dimensions; the examples in Table 2 demonstrate them. (1) If the resulting y-dimension is smaller than 8.00 m, it is replaced by the threshold of 8.00 m. Consequently, the x-dimension has to be reconsidered in order to keep the footprint area constant. Therefore, the footprint area is divided by the new y-dimension, and the result is a new x-dimension. (2) Another scenario occurs when the resulting y-dimension is bigger than the threshold of 49.56 m. Again, this value is replaced by 49.56 m and the footprint area is divided by it, which delivers a new x-dimension. (3) The third main scenario contains an x- and a y-dimension which are both within the thresholds of 8.00 m to 69.00 m and 8.00 m to 49.56 m. However, if both exceed the threshold of 8.00 to 15.00 m, one of the two dimensions is changed. The choice of which dimension needs to be brought within the threshold depends on the difference between each dimension value and the threshold value closest to it. The dimension with the least difference to the threshold is replaced, and the footprint area is divided by it. In this way, a new x- or y-dimension is created.
Table 2 Three main scenarios for the definition of building dimensions

Scenario   Parameter x-dimension   Footprint area   Resulting y-dimension   Division footprint area / dimension   Resulting dimensions
(1)        68.35 m                 152 m²           2.22 m                  152 m² / 8.00 m = 19.00 m             x = 19.00 m, y = 8.00 m
(2)        8.70 m                  700 m²           80.46 m                 700 m² / 49.56 m = 14.12 m            x = 14.12 m, y = 49.56 m
(3)        25.40 m                 700 m²           27.56 m                 700 m² / 15.00 m = 46.67 m            x = 15.00 m, y = 46.67 m
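The three scenarios can be summarized in a short sketch (plain Python; the parameter values in the example calls are chosen to approximately reproduce the rows of Table 2 and are not part of the original text):

```python
X_MIN, X_MAX = 8.00, 69.00    # m, thresholds for the x-dimension
Y_MIN, Y_MAX = 8.00, 49.56    # m, thresholds for the y-dimension
DAYLIGHT_MAX = 15.00          # m, at least one dimension must stay below this

def footprint_dimensions(x_param, footprint_area):
    """Derive x and y from one parametric dimension and the footprint area,
    applying the three correction scenarios described above."""
    x = X_MIN + x_param * (X_MAX - X_MIN)        # scaled slider value 0..1
    y = footprint_area / x
    if y < Y_MIN:                                # scenario (1): y too small
        y = Y_MIN
        x = footprint_area / y
    elif y > Y_MAX:                              # scenario (2): y too large
        y = Y_MAX
        x = footprint_area / y
    elif x > DAYLIGHT_MAX and y > DAYLIGHT_MAX:  # scenario (3): both too deep
        # replace the dimension that is closer to the 15 m threshold
        if x - DAYLIGHT_MAX <= y - DAYLIGHT_MAX:
            x = DAYLIGHT_MAX
            y = footprint_area / x
        else:
            y = DAYLIGHT_MAX
            x = footprint_area / y
    return round(x, 2), round(y, 2)

print(footprint_dimensions(0.99, 152.0))    # scenario (1)
print(footprint_dimensions(0.01, 700.0))    # scenario (2)
print(footprint_dimensions(0.285, 700.0))   # scenario (3)
```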
Positioning building footprints on the site
For optimization, the positions of the buildings need to be determined parametrically to ensure that they can be arranged differently. The position of each footprint is defined by the x- and y-position of its bottom left corner point. As the site is of rectangular shape and oriented to the north, this setup is comparable to a coordinate system; the position of each footprint can be regarded as a point defined by an x- and a y-position. In order to manipulate the position of the footprint parametrically, the x- and y-position are set up as parameters. To position the footprints within the set-back boundary, thresholds for the x- and y-position values are required (see Figure 15). The numeric range for the x-position is defined as follows: the lower threshold depends on the set-back width, which is 3 m. Since the set-back is the same on all sides, it affects the upper threshold as well. To calculate the upper threshold value, the set-back of 3 m plus the x-dimension of the footprint are subtracted from the site length of 75 m. The same procedure applies to the y-position, yet with different values. The lower threshold is the same as for the x-position. For the upper threshold, the set-back and the y-dimension of the footprint are subtracted from the site width of 55.56 m. The two resulting ranges determine the area where the bottom left corner point of the footprint can be located on the site so that the building remains within the buildable area. This area changes depending on the footprint dimensions. In order to maintain the same numeric range for the x- and y-position parameters, a scaling process similar to the ones explained in the previous paragraphs is applied. The constantly changing ranges of the x- and y-position are both scaled to the same range of 0.00 to 1.00. Parameter (slider) ranges need to be defined before starting an optimization since they are fixed and cannot be altered within the process. Without scaling, the ranges of the x- and
y-position would be determined by the minimum lower and the maximum upper threshold with regard to all possible footprint dimensions. This overall range would not change according to the building dimensions. Consequently, that approach would neglect the direct relationship between the x-/y-position ranges and the footprint dimensions. Depending on the different ranges and dimensions, numerous design variants would be generated which cross the set-back boundary. In this case, they would either have to be called invalid or another process would be required to move them into the buildable area. By applying the scaling process as explained above, such problems can easily be avoided. When parameter values from the range of 0.00 to 1.00 are selected, the corresponding x- and y-positions on the site are calculated, and the footprints are placed according to these position values. However, there is a high chance that the generated footprint configuration contains footprints which overlap. It is not possible to solve this problem by assigning thresholds; therefore, a different approach is needed. Such an intersection-solving approach is explained in the following paragraphs.
Figure 15 Positioning building footprints on the site
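A sketch of the position scaling (plain Python; the helper name and the example values are invented for illustration):

```python
SITE_LENGTH, SITE_WIDTH, SETBACK = 75.00, 55.56, 3.00   # m

def footprint_position(px, py, x_dim, y_dim):
    """Scale position parameters (0..1) to the lower-left corner coordinates
    of a footprint so that the building stays inside the buildable area."""
    x_lo, x_hi = SETBACK, SITE_LENGTH - SETBACK - x_dim
    y_lo, y_hi = SETBACK, SITE_WIDTH - SETBACK - y_dim
    x = x_lo + px * (x_hi - x_lo)
    y = y_lo + py * (y_hi - y_lo)
    return round(x, 2), round(y, 2)

# the admissible range shrinks automatically with larger footprints
print(footprint_position(0.5, 0.5, x_dim=15.0, y_dim=27.56))
print(footprint_position(1.0, 1.0, x_dim=15.0, y_dim=27.56))  # touches the set-back line
```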
Resolving intersections between footprints
When footprints are placed on the site according to parameter values, it is likely that they overlap. In order to resolve intersections, an algorithm is required which moves the footprints by the least possible distance from their initial position. This is important for
keeping the relationship between the new position and the initial parameter value as close as possible. In that way, the EA can better recognize which positions are beneficial to the performance. By selecting values close to a position which seems to increase the performance, the optimization process aims to find even better positions. A resolving algorithm which randomly moves the footprints to find positions where they do not intersect would hinder the optimization process for the reasons explained. Therefore, a loop algorithm driven by intersection vectors was developed. A loop is able to repeat processes in an iterative cycle by taking the output data of the previous iteration as the input data for the next iteration. Such a method is beneficial for resolving intersections because moving footprints according to vectors retrieved from the intersections can lead to new intersections. Especially when one footprint is affected by multiple intersections, the vectors coming from each of the intersections can have different directions or can even be of contrary orientation. Since one movement of a footprint can be determined by only one vector, all vectors that affect this footprint need to be combined into a single vector. If the vectors within this addition are of contrary orientation, it is likely that the resulting vector cannot resolve all related intersections in one iteration. Therefore, multiple iterations are required. Furthermore, when footprints are moved, new intersections can occur because the vector used for the previous intersection does not consider further possible intersections that can result from the movement. With regard to architectural intentions and/or building regulations, a minimum building distance can be applied. It is added to each footprint as an outline with an offset distance of half the building distance (see Figure 16). In this case study, the minimum building distance is 4 m, which corresponds to an offset distance of 2 m for each footprint.
Figure 16 Minimum building distance represented by footprint offsets (red)
An intersection-resolving algorithm can be programmed as follows (see Figure 17): First, it needs to be tested whether any intersections occur (1). Since minimum building distances are added to the footprints, the offset outlines count as the new outlines of the footprints for the next steps; consequently, the intersections are identified for these outlines. If intersections are detected, each building gets assigned a number according to its position in a list which contains all footprints of the configuration (2). In this case study, a maximum of four buildings can be positioned on the site, which means that all possible intersections between pairs of footprints can be listed. A simple way of identifying these potential intersection pairs without missing any is to make a table and insert all building numbers (see Table 3). Crossing out table fields prevents the repetition of pairs. This method is especially useful for a higher number of buildings. The six potential intersection pairs retrieved from Table 3 are: 1-2; 1-3; 1-4; 2-3; 2-4; 3-4. These pairs are required for several sorting processes in the algorithm.
Table 3 Potential intersection pairs of four buildings (white fields)

Building number    1      2      3      4
1                  x      1-2    1-3    1-4
2                         x      2-3    2-4
3                                x      3-4
4                                       x
Occurring intersections (3a, b, c) can be regarded as two-dimensional closed curves which can be decomposed into two pairs of parallel lines (sides). Furthermore, a center point of the curve can be determined (4). This is helpful for the creation of vectors for the footprint movement. Vectors can be understood as directed line segments with a specified length. If two vectors are collinear, i.e. parallel, they can be either of equal or of contrary orientation. Starting from the center point, two lines to the midpoints of two non-parallel sides of the intersection are drawn. These are converted into vectors which are perpendicular to each other (5). Of the two vectors, the shorter (6[1]) and the longer (6[2]) are identified and arranged in two separate lists. If both have the same length, they are still divided between the two lists.
The reason behind this separation is to allow the loop to alternate between two approaches after a certain number of iterations (e.g. 100). First, the shorter vectors of all intersections in one configuration are taken to calculate the footprint movements. This approach relates to the previously explained strategy of keeping the relationship between the new position and the initial parameter value as close as possible. However, in some cases this approach alone might lead to stagnation, which occurs when footprints cannot move because of vectors of contrary orientation. A change of strategy, applied after a defined number of iterations (e.g. 100), might then help to resolve the stagnation. This change is embodied in the second approach of taking the longer vectors instead of the shorter ones for a certain number of iterations (e.g. 100). If the intersections are not yet eliminated after these iterations, the shorter vectors are taken again. This process repeats either until the intersections are resolved or until the termination condition, the maximum number of iterations (e.g. 1500), is reached. After determining which vector to take for the further steps depending on the iteration number, the contrary vector is generated (7). The following steps are the same for both approaches; for explanation purposes, the shorter vector is selected. At this point, each intersection of the footprint configuration has a selected vector and its contrary vector, i.e. a vector with positive and a vector with negative orientation. These vectors can be either horizontally (8a) or vertically (8b) oriented. As each intersection occurs between two footprints (9a, b), the vectors can be assigned to one footprint each. This happens by sorting the footprints by their x- or y-position, depending on the direction of the vectors. For the horizontal vectors the x-position is relevant (10a, 11a), and for the vertical vectors the y-position (10b, 11b). Consequently, the vector with positive orientation is assigned to the footprint with the higher x/y-position value (11a, 11b) and the vector with negative orientation to the footprint with the lower x/y-position value (10a, 10b). This logic can be compared to a bounce-off effect applied to the footprints. By using these specific vectors, the footprints do not end up with additional space between them, as might be the case if a real physical bounce-off effect took place. Instead, they are attached to each other, taking into account the minimal building distance, which minimizes the difference between the new and the initial position.
The next step is to consider all the intersections that affect one footprint and to create one movement vector from them. A footprint can have intersections with multiple footprints, and every intersection delivers a vector for it. This implies that the vectors which get assigned to one footprint (12) need to be combined into one vector using mathematical operations (13). First, it is necessary to check whether vectors with the same direction and orientation exist. If they do, the shorter of these vectors has to be identified and eliminated. This prevents vectors with the same direction and orientation from adding up to an even longer vector. In particular, an addition of these vectors is not necessary because the movement defined by the shorter vector is already implied by moving the footprint according to the longer one. After eliminating the shorter vector, the remaining vectors are added, paying special attention to their orientation. If two vectors are of contrary orientation, the shorter is subtracted from the longer. The resulting vector for each individual footprint determines the movement of that footprint. When the buildings are moved to their new positions, it is possible that they move beyond the set-back boundary (14). In this case, they need to be moved back into the buildable area. Therefore, an approach very similar to the positioning of the footprints on the site is applied (see Figure 15). For each footprint, two numeric ranges are defined: one for the x-position and one for the y-position. The thresholds of the ranges are determined by the set-back of the site and the footprint dimensions, as explained previously. In this process, the building distances are not considered because the building can be positioned directly at the set-back boundary (15). Next, the x-position value and the y-position value are tested for inclusion in the corresponding range (16). If a position value is not within the range, the threshold with the least difference to it is identified and the position value is replaced by this threshold value. This process delivers the new position to which the footprint is moved (17). After moving all footprints which were positioned beyond the buildable area, it is necessary to test whether any intersections occur between them. The intersections again refer to the minimal building distance outlines (18). If all intersections are resolved, the loop ends. However, if any intersections are detected, a new iteration starts, taking the newly positioned footprints as its input. If the intersections are not resolved after 100 iterations, the longer vectors are selected at the beginning of the process, as explained
previously. After 100 iterations with the longer vectors, the shorter ones are selected again, and so on. The loop ends either when all intersections are resolved or when the maximum iteration number of 1500 is reached.
Figure 17 Resolving intersections workflow
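The per-footprint combination of vectors and the clamping back into the buildable area can be sketched as follows; this is an equivalent formulation under the assumption of axis-aligned vectors given as (dx, dy) tuples, with illustrative names and example values:

```python
# Sketch of combining the intersection vectors assigned to one footprint and
# clamping its new position back into the buildable area (assumptions: vectors
# are axis-aligned, represented as (dx, dy) with exactly one non-zero component).

def combine_vectors(vectors):
    """Keep only the longest vector per axis and orientation, then add the
    survivors; opposing orientations thereby subtract from each other."""
    longest = {}
    for dx, dy in vectors:
        # Key: (axis, orientation), e.g. ('x', +1) for a positive horizontal vector.
        key = ('x', 1 if dx > 0 else -1) if dx != 0 else ('y', 1 if dy > 0 else -1)
        length = abs(dx) + abs(dy)
        if key not in longest or length > abs(longest[key][0]) + abs(longest[key][1]):
            longest[key] = (dx, dy)
    move_x = sum(v[0] for v in longest.values())
    move_y = sum(v[1] for v in longest.values())
    return move_x, move_y

def clamp_to_setback(value, lower, upper):
    """Replace an out-of-range position value by the nearest range threshold."""
    if value < lower:
        return lower
    if value > upper:
        return upper
    return value

# Example: two opposing horizontal vectors and one vertical vector.
mx, my = combine_vectors([(2.0, 0.0), (-0.5, 0.0), (0.0, 1.0)])  # -> (1.5, 1.0)
new_x = clamp_to_setback(10.0 + mx, 3.0, 12.0)                   # -> 11.5
```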
Building cores
The cores contain the main vertical circulation (staircases) and parts of the horizontal circulation (hallways) of the buildings. One example of a possible core layout is shown in Figure 18. For this case study, a simplified representation of the cores is accurate enough for optimization purposes and facilitates their positioning. Therefore, in the floor plans they are shaped as squares with dimensions of 4 m x 4 m. In order to ensure natural lighting and ventilation within the cores, they can only be positioned at the building façade, preferably in moderately shaded areas. Locating the cores in shaded areas of the building benefits the optimal use of passive environmental systems such as daylight. It is particularly important that living spaces receive daylight, as this can reduce the energy demand for artificial lighting and increase solar gains, which decreases the heating energy demand. Therefore, it is useful to position the cores in shaded areas and leave the well-lit areas to the living spaces.
Figure 18 Possible layouts of the circulation cores
The positioning of the cores can be determined as follows. The number of circulation cores increases with the footprint size: if a footprint's dimensions exceed 15 m x 20 m, it is divided into two parts and a core is placed in each of them. The same logic applies to bigger buildings, where more cores are positioned. The positioning is not controlled by x- and y-positions but by potential position points. This approach is more efficient because it reduces the number of parameters. Since the cores can only be positioned along the façade, it is useful to define points for all potential core positions. Each core's position is controlled by one parameter that selects from a list of potential position points. The potential positions indicate the locations of the core center points (see Figure 19).
Position points can be differentiated into corner points and edge points. In order to identify the corner points, the footprint's vertices are moved towards the center of the footprint by half the x- and y-dimension of the cores (2 m). Edge points are defined by moving the footprint edges towards the center of the footprint by half the core dimension (2 m). However, the lengths of the moved footprint edges have to be reduced, e.g. by 4 m. This ensures that the space between a core and the two façade sides perpendicular to the one the core is attached to is big enough to accommodate reasonably sized apartment rooms. Next, edge points are arranged on the trimmed edges with a constant spacing of 1 m. Consequently, the number of edge points depends on the dimensions of the footprint. The list of potential positions for each core determines the number of potential parameter values. After positioning the cores in each footprint, the core edges which are attached to the footprint outlines are extruded vertically by the individual building heights. For visualization purposes, the core footprints are displayed on the roof levels.
Figure 19 Core positioning by points
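A hedged sketch of generating such position points for an axis-aligned rectangular footprint follows; the 4 m trim per edge end and the function names are assumptions made for illustration, not values confirmed beyond the example given above:

```python
# Sketch of generating potential core-centre points for an axis-aligned
# rectangular footprint: corner points are the vertices moved inwards by half
# the core dimension (2 m), edge points lie on the edges moved inwards by 2 m,
# trimmed at the ends (here by 4 m per end, an assumption) and sampled every 1 m.

CORE_HALF = 2.0   # half of the 4 m x 4 m core
TRIM = 4.0        # shortening of the moved edges at each end (assumption)
STEP = 1.0        # spacing of the edge points

def core_position_points(x0, y0, x1, y1):
    points = []
    # Corner points: vertices moved towards the footprint centre by 2 m in x and y.
    ix0, iy0, ix1, iy1 = x0 + CORE_HALF, y0 + CORE_HALF, x1 - CORE_HALF, y1 - CORE_HALF
    points += [(ix0, iy0), (ix1, iy0), (ix1, iy1), (ix0, iy1)]
    # Edge points on the two horizontal edges (moved inwards in y, trimmed in x).
    x = x0 + CORE_HALF + TRIM
    while x <= x1 - CORE_HALF - TRIM:
        points += [(x, iy0), (x, iy1)]
        x += STEP
    # Edge points on the two vertical edges (moved inwards in x, trimmed in y).
    y = y0 + CORE_HALF + TRIM
    while y <= y1 - CORE_HALF - TRIM:
        points += [(ix0, y), (ix1, y)]
        y += STEP
    return points  # the parameter for one core selects an index in this list

# Example: a 20 m x 15 m footprint.
candidates = core_position_points(0.0, 0.0, 20.0, 15.0)
```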
Building volumes and windows
For visualization purposes, a simplified 3D model of the building configuration is generated. In order to create the building façades, the footprint outlines are extruded by the floor height (3 m) multiplied by the number of floors. The parts of the footprint outline where cores attach are not included in the extrusion process.
The roof is produced by elevating a copy of the footprint to the roof level. To display the floors, copies of the footprint outline are elevated by the floor height according to the number of floors.
Figure 20 Building volumes
Since the windows are not a focus of the case study, basic ribbon windows are used. To generate these, copies of the footprint edges are created and trimmed at both ends by 0.5 m, which represents a potential wall thickness. The trimmed edges are elevated by a parapet height of 1.1 m. After that, they are extruded by the window height, which depends on the window-to-wall ratio. In this case study, the window-to-wall ratio is 30 percent. For the longest and shortest wall possible in this setting, the window height ranges between 0.92 m and 1.0 m. Copies of the windows on the first floor are arrayed vertically by the floor height according to the number of floors in each building. Lastly, the windows are subtracted from the building façades (see Figure 21).
Figure 21 Ribbon windows
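One consistent reading of this window-height rule can be sketched as follows; it assumes the window area equals 30 % of the wall face area (wall length times floor height) while the window strip is shortened by 0.5 m at each end, which reproduces the stated height range:

```python
# Sketch of deriving the ribbon-window height from the window-to-wall ratio
# (an interpretation of the description above, not a confirmed formula).

FLOOR_HEIGHT = 3.0   # m
WALL_TRIM = 0.5      # m, assumed wall thickness at each end
WWR = 0.30           # window-to-wall ratio

def ribbon_window_height(wall_length):
    window_length = wall_length - 2 * WALL_TRIM
    return WWR * wall_length * FLOOR_HEIGHT / window_length

# For walls between roughly 10 m and 46 m this yields heights between about
# 1.0 m and 0.92 m, matching the range stated in the text.
print(round(ribbon_window_height(10.0), 2))  # 1.0
print(round(ribbon_window_height(46.0), 2))  # 0.92
```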
The described parametric GM makes it possible to generate a vast number of design variants, which form the variant space for a design problem. This space can be expanded or restricted by changing parameters, generation rules, form variables, etc. In order to explore the topic of sequence in the MD process, the GM focuses on a few building components, such as the building volumes with their positions on the site and the positions of the cores within the buildings. After a design variant is generated, it needs to be analyzed to enable comparison between the different design variants.
3.2 Building model analysis
Analysis methods provide the foundation for design evaluation as they analyze different aspects of design variants. The analysis result of a design variant can either directly serve the evaluation as the performance value, e.g. the LCP, or be part of an equation which calculates the performance value, e.g. a solar radiation analysis. In this chapter, the building analysis methods are explained. They form the EM for the case study and are the basis for formulating performance goals. The setup necessary to conduct a solar radiation analysis is described, and a newly developed tool for energy and LCP analysis is briefly introduced.
3.2.1 Solar analysis
This chapter explains how a solar radiation analysis can be set up in Grasshopper, what its advantages are and how it can be used for design evaluation. In terms of occupant thermal comfort and energy demand of buildings, solar radiation is an important factor to consider. Solar radiation analysis is a fast way to evaluate the solar radiation on geometries, e.g. building façades or sites. It takes into account shading from surrounding buildings and calculates solar radiation values for a specified time frame. These values show to what extent the geometry is exposed to sunlight and indicate the potential amount of daylight in the interior. The analysis is conducted with a plug-in for Grasshopper named Ladybug, which uses weather data in the EnergyPlus weather (.EPW) format. In this case study, this method is preferred over other daylight simulations (e.g. the DIVA plug-in for Grasshopper) since it is faster. Ladybug simplifies the analysis process and automates and expedites the calculations (Roudsari et al., 2013). It uses GenCumulativeSky (Robinson & Stone, 2004) to calculate a sky dome for solar radiation. The results are saved on the computer and can be used for numerous analyses. Since the sky calculation needs to be conducted only once for a location and a specified period of time, an enormous amount of time can be saved during the optimization process: the sky calculation, which can take up to a few minutes, does not need to be repeated for every design variant. This operation logic makes Ladybug's solar radiation analysis a reasonable choice for the case study. The level of accuracy it provides is sufficient for the purpose of comparing design variants. The analysis process within the case study is set up to calculate the solar radiation for a time period of one year. As input, it requires weather data for a specific location (Berlin, Germany), which is freely available online (EnergyPlus, 2016), the geometries (building façades) which are to be analyzed, and shadow-casting objects (surrounding buildings). The building model should be oriented towards the y-direction of the coordinate system in which it is generated because, by default, this is regarded as the north direction in Ladybug. Next, an analysis grid with the corresponding analysis points is defined (see Figure 22). By inputting the geometry into Ladybug, it automatically generates an analysis grid which can be manipulated by setting the cell size (1 m). It is worth mentioning that the calculation time increases
with the number of analysis points. After setting up all required input, the analysis process can be started. It takes only a few seconds to deliver results; in this case study, the analysis takes less than 5 seconds, although the calculation time depends on the computer. The results are color-coded and visualized on the analyzed geometry, i.e. on the building façades. Ladybug uses the grid cells for color-coding. A legend translates the colors into solar radiation values: red means that a high solar radiation is identified, whereas blue indicates a low solar radiation value. A visual representation of the results is useful for providing a fast overview of the solar conditions at a specific location. Furthermore, it demonstrates what influence the shadow-casting objects, such as surrounding buildings, have on the analyzed geometry. In addition to the visual representation, a list of the solar radiation values for every analysis point is delivered. This data can be used to compare design variants by calculating evaluation values, such as the average solar radiation. A major advantage of the analysis-point approach is that, by identifying specific analysis points (testing for inclusion), certain parts of the geometry can be evaluated separately, e.g. windows or building cores. Within the case study, the solar radiation analysis serves, among other things, the LCA by providing the shading factors for the windows.
Figure 22 Solar radiation on the façades (a), Solar radiation on the façades and on the site (b), Solar radiation analysis points on the core façades (c), Solar radiation analysis points on the façades (d)
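The separate evaluation of geometry parts from the per-point results can be sketched generically; `point_in_region` is a placeholder for whatever geometric inclusion test is used and is not part of the Ladybug plug-in:

```python
# Sketch of evaluating parts of the geometry separately from the per-point
# results of the solar radiation analysis: points falling inside a region of
# interest (e.g. the core facades) are selected and averaged.

def average_radiation(points, values, region, point_in_region):
    selected = [v for p, v in zip(points, values) if point_in_region(p, region)]
    return sum(selected) / len(selected) if selected else 0.0
```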
3.2.2 Life Cycle Assessment and Energy analysis
In the case study, a previously developed LCA method is used. Its application and operation are briefly explained in this chapter.
In this thesis, a previously developed parametric LCA method is used for the calculation of the LCP (Hollberg & Ruth, 2016). This method is fully integrated into Grasshopper, allowing the calculation of the operational energy demand and the embodied environmental impact of design variants. Since both the LCA method and the GM are programmed in Grasshopper, exporting and re-importing of data is unnecessary, which enables the application of optimization processes. Furthermore, the parametric LCA tool is able to provide results in real time (< 0.1 s). That facilitates finding optimal design solutions in a reasonable amount of time. The LCA method divides all necessary input into three categories, namely geometric information, non-geometric information and surrounding conditions. All of these are defined parametrically to permit quick adaptation to changing parameters. Additionally, variation can be created by altering the input data. As geometric information, a bill of quantities needed for the calculation of embodied energy and environmental impacts is established. Therefore, surface areas are automatically extracted from a design variant model generated by the GM. These areas are sorted by zones, where one zone corresponds to one building floor. All areas included in one zone are arranged in a list. This list is subdivided into the different zone component areas, such as exterior walls, ceilings, windows, etc., by taking into account the orientation of every surface (north, west, south, east, horizontal). The building cores are not taken into consideration as they are not part of the heated area of the building. Hence, they are subtracted from the building volume (see Figure 23). The core walls directly adjacent to the interior of the building are treated as walls to unheated rooms in the list.
Figure 23 Subtracting core volumes from the heated building volumes
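One way the orientation-based sorting of the zone surfaces could be implemented is sketched below; the 45° sector assignment and the treatment of tilted surfaces are assumptions, and the y-axis is taken as north in line with the solar analysis setup:

```python
# Sketch of sorting zone surfaces by orientation for the bill of quantities:
# a surface whose unit normal points mostly up or down is treated as
# horizontal, otherwise the closest compass direction is chosen (+y = north,
# as assumed in the solar analysis setup).
import math

def classify_orientation(nx, ny, nz):
    if abs(nz) > math.hypot(nx, ny):
        return "horizontal"
    angle = math.degrees(math.atan2(nx, ny)) % 360  # 0 deg = +y = north
    if angle < 45 or angle >= 315:
        return "north"
    if angle < 135:
        return "east"
    if angle < 225:
        return "south"
    return "west"

print(classify_orientation(0.0, 1.0, 0.0))  # north
print(classify_orientation(1.0, 0.0, 0.0))  # east
```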
Non-geometric parameters, such as material properties, the thickness of building components (e.g. exterior walls, insulation), HVAC systems, etc., are defined numerically. For the materials, three kinds of data are necessary: environmental data, reference service life data, and physical properties. The environmental indicators for the individual materials are taken from ökobau.dat (BBSR, 2011), which complies with EN 15804 (CEN/TC 350, 2012). In order to reduce the effort of tedious data input, a catalog of the most common building components is established. In this case study, six common construction types are chosen for the LCP calculation: External Thermal Insulation Composite System (ETICS), brick construction, concrete construction, wood construction, ventilated façade system and double shell masonry system. The impacts resulting from replacement are calculated by taking into account the reference service life of each building component. Surrounding conditions, such as climate or user data, are taken from standards. The shading factor, which indicates to what extent the windows of one zone are shaded by surrounding buildings, is derived from the solar radiation analysis (see chapter 3.2.1). Therefore, points with a spacing of 1 m are arranged on the middle line of each ribbon window (see Figure 24). For each of these middle points, the closest solar analysis point is identified. By using this method instead of testing the analysis points for inclusion in the window surfaces, the solar radiation can be determined at specified points on the windows where most daylight enters the interior. From the selected analysis point values, the average solar radiation for
all windows of one zone is calculated. The shading factor is computed by scaling the average solar radiation value within the range between the minimum (completely shaded) and the maximum (no shading) possible solar radiation values. Included in the calculation of the operational energy demand, the resulting shading factor assigns individual shading conditions to each zone.
Figure 24 Determining the shading factor for each zone by identifying analysis points on the windows
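A minimal sketch of this per-zone shading factor follows; `closest_value` stands in for the nearest-analysis-point lookup, and whether the factor runs from 0 (fully shaded) to 1 (unshaded) or the reverse is an assumption made here:

```python
# Sketch of deriving a per-zone shading factor from the solar analysis results:
# each ribbon window contributes sample points on its middle line, the value of
# the nearest analysis point is taken for each, and the zone average is scaled
# between a completely shaded (min_rad) and an unshaded (max_rad) facade.

def zone_shading_factor(window_midline_points, closest_value, min_rad, max_rad):
    values = [closest_value(p) for p in window_midline_points]
    average = sum(values) / len(values)
    return (average - min_rad) / (max_rad - min_rad)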
The operational energy demand of a building can be calculated using either dynamic building performance simulation (DBPS) or quasi-steady-state methods (QSSM) (see chapter 2.3). DBPS delivers accurate results but requires longer computation time. QSSM are easier to apply and much quicker; however, they neglect certain effects. Nevertheless, for the estimation of the energy demand in early design stages, QSSM are accurate enough and are therefore used in the case study. To apply a QSSM within Grasshopper, the parts of DIN V 18599-2:2011 (DIN, 2011) which are relevant to residential buildings are implemented and the results verified (Lichtenheld et al., 2015). Environmental impacts resulting from energy consumption during the use phase of a building are calculated by multiplying the energy demand with environmental impact factors depending on the HVAC systems employed. Finally, the results of the individual life cycle modules (production, construction, use, end of life, benefits) are aggregated and show the total environmental impact of the building design (LCP).
For optimization purposes, a single-score indicator for the environmental impact is preferable in order to avoid optimizing towards multiple environmental criteria. Thus, a weighting of these criteria is required which, according to ISO 14040 (ISO, 2009), is based on value choices and therefore not scientifically based. The LCA tool allows advanced users to define and adapt their own weighting factors. In that way, individual goals of the study can be considered. In addition, it is possible to employ different predefined weighting factors, e.g. those of building certification systems (Hollberg et al., 2016). Kägi et al. (2015) report that decision makers always have to aggregate the LCA results in order to make a decision based on them. Furthermore, they discuss that it might be of benefit to provide a single score for decision makers instead of letting them do the weighting on their own. For the LCP calculation in the case study, the weighting process included in the DGNB certification (DGNB, 2016) is utilized. Therefore, evaluation points (Bewertungspunkte, BP) are calculated for both criteria related to LCA, namely ENV 1.1 and ENV 2.1. The BPs are weighted according to the DGNB system for residential buildings and combined into one value which is called the weighted BP (WBP). In this thesis, the WBP is the measure for the LCP. The maximum WBP value which can be achieved is 1.35 (Hollberg, 2016). Since one building configuration in the case study contains two to four buildings, an LCP value is calculated for each of the buildings. However, the buildings need to be assessed as one complete configuration for optimization. Hence, a weighted average is calculated from the individual LCPs, where the weighting is determined by the gross area of each building. In the following, the weighted LCP of a design variant is simply referred to as the LCP. Figure 25 illustrates the schematic structure of a parametric LCA model as well as the workflow of the LCA process. More specific information on this parametric LCA tool can be found in Hollberg (2016).
Figure 25 Schematic structure of the workflow of a parametric LCA (Hollberg, 2016, p. 101)
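The gross-area weighted average used to combine the individual building LCPs into one configuration value can be sketched directly from the description above (function name and example values are illustrative):

```python
# Sketch of the gross-area weighted average of the individual building LCPs
# of one configuration, yielding the single LCP value used for optimization.

def configuration_lcp(building_lcps, gross_areas):
    total_area = sum(gross_areas)
    return sum(lcp * area for lcp, area in zip(building_lcps, gross_areas)) / total_area

# Example: three buildings of different size.
print(configuration_lcp([0.80, 0.72, 0.68], [1200.0, 800.0, 600.0]))
```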
3.3 Multi-stage decision-making trees
In this chapter, MD trees are defined and their major properties described. It is explained how data needs to be stored in order to be automatically loaded into MD trees and what types of data are included. Furthermore, the creation of MD trees is described. Another topic of this chapter is the comparison and assessment of data across multiple MD trees. Rittel's (1992) theory of the MD process (see chapter 2.1) implies the need to visualize the produced design solutions according to the design stages used for the process. MD trees serve this need as they provide a clear overview of the generated data by arranging it in a tree-like structure. This makes comparing results fairly easy for the planner. The design phases are represented by stages, and the alternative design solutions in these stages are organized by branches. As MD trees are controlled parametrically, a vast amount of data can be visualized with minimal effort, which makes this method extremely time-efficient. Saving the data of the design solutions according to a defined file name structure, and including the resulting file paths in the algorithm, enables automated data input. Moreover, the MD trees are
updated in real time as soon as new results are saved or changes are applied to already existing ones. The structure of the file names is derived from the number of the MD tree and the branch taken in each stage. A typical file name is T1_3_1, where T1 stands for the number of the MD tree, _3 for the third branch in the first stage and _1 for the first branch in the second stage. This structure encodes the exact position of a design solution as a path in the MD tree: in this example, starting at the initial point of MD tree 1, then taking the third branch in the first stage and, from there, following the first branch in the second stage. According to this logic, the file names get longer the more stages an MD tree has. However, the length of the file name depends on the position in the tree, meaning that a solution in the first stage of a tree has a shorter file name than one in the last stage. When applying this file name structure to images, a further differentiation is needed for one design solution in one stage in order to show various aspects of a design solution, e.g. perspective view, plan view, etc. Therefore, a letter is added to the file name, such as _A for the perspective view and _B for the plan view. Due to this differentiation, all images in one stage of an MD tree can be changed by altering only one letter in the algorithm, which saves a lot of time compared to manual image input. Right after the optimization process, the numeric data of a design solution is saved as an Excel file using the explained file name structure. This data contains parameter values for geometry generation, analysis results, and performance values. It can be used as starting information (parameter values) for subsequent stages, e.g. defined volumes for core positioning. However, it is not necessary to input these Excel files in each stage of the MD tree to display the numeric data. In order to reduce the computation time for loading numerous Excel files, only the files of the last stage where optimization was involved are loaded. Since these accumulate all data from the previous stages, the required numeric data for each stage can be retrieved from them. A parametric MD tree can be constructed as follows (see Figure 26): First, an initial point is defined by x- and y-position parameters in the coordinate system which Grasshopper operates on. From this point, a copy is moved in the positive x-direction. Copies of this new point are arrayed in the negative and positive y-direction
using a parametrically defined distance. The number of these copies is determined by the number of design solutions in the first stage. By connecting the initial point with the arrayed points, the branches of the first stage are created. If the MD tree has more than one stage, the following steps are applied. Starting from the new points, copies of these are moved in the positive x-direction to leave space for the design solution images of the first stage. Then the same creation of branches begins from the moved points. The number of branches starting from each of the moved points corresponds to the number of design solutions generated from each design solution of the previous stage. This process is repeated until the final number of design stages is reached. In the last stage, copies of the points are moved in the positive x-direction and connected to the previous points by lines. At the end of these lines, evaluation results, such as the LCP, are displayed. In that way, all performance values are visualized at the end of the trees, enabling a fast overview. The positioning of the images and numeric data is determined by reference points. These points depend on the parametrically defined points which form the branches of the MD tree. The parametric dependencies enable the adjustment of the trees in size, distances, etc. with minimal effort by manipulating parameter values. For instance, if the initial point is moved, the complete MD tree moves while all other parameters are kept constant.
Figure 26 Structure of an MD tree (design solutions represented by rectangles except the initial rectangle; branches represented by lines)
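The file name convention described above can be sketched as a small helper (names illustrative):

```python
# Sketch of the file-name convention: tree number, then one branch index per
# stage along the path, plus an optional view letter for images.

def solution_file_name(tree_number, branch_path, view=None):
    name = "T{}".format(tree_number) + "".join("_{}".format(b) for b in branch_path)
    if view is not None:
        name += "_" + view          # e.g. "_A" perspective view, "_B" plan view
    return name

print(solution_file_name(1, [3, 1]))        # T1_3_1
print(solution_file_name(1, [3, 1], "A"))   # T1_3_1_A
```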
Since all MD process results are loaded into the MD trees, the design solution data can be assessed and compared in real time. The average, minimum and maximum values, etc. can be computed automatically, providing fast numeric evaluation methods for the planner. This approach makes the time-consuming manual creation of data sheets unnecessary. In the case study, the LCP values are color-coded: the smallest LCP is marked red and the largest green. In order to visualize the LCP values of all design solutions across all MD trees in comparison to each other, a color gradient from red to green is applied to the lines in front of the LCP values. Therefore, the minimum and maximum LCP across all trees are identified and set as the thresholds of the numeric range for the color gradient. Each LCP is assigned a color depending on its value. This method provides a quick visual overview of the results and is more time-efficient than comparing the numbers directly. Moreover, it shows tendencies, such as which design solutions mostly lead to the best or worst LCP, e.g. configurations of two buildings tend to deliver better LCP values than configurations of four buildings. The numeric and the color-coding methods for assessing and comparing design solutions across multiple MD trees are used in the case study for exploring the influence of sequence on design solutions in the MD process. These methods are especially useful for the case study since it involves multiple MD trees, each representing a different sequence.
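A minimal sketch of this color coding, assuming a simple linear interpolation in RGB (the concrete color model is not stated in the thesis) and using the LCP range of sequence 1 as example thresholds:

```python
# Sketch of the red-to-green colour coding of LCP values across all MD trees:
# the global minimum maps to red, the global maximum to green, values in
# between are interpolated linearly.

def lcp_color(lcp, lcp_min, lcp_max):
    t = (lcp - lcp_min) / (lcp_max - lcp_min)  # 0 = worst, 1 = best
    red = (255, 0, 0)
    green = (0, 255, 0)
    return tuple(round(r + t * (g - r)) for r, g in zip(red, green))

print(lcp_color(0.580, 0.580, 0.986))  # (255, 0, 0)  -> red
print(lcp_color(0.986, 0.580, 0.986))  # (0, 255, 0)  -> green
```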
3.4 Design sequences and optimization processes
This chapter introduces the different sequences used in the case study to address the questions posed in the introduction. It explains how the optimization processes are set up and which further evaluation factors besides the LCP are applied to compare the design solutions of different MD trees. The case study aims to explore the impact of stage order on design solutions in the MD process. Therefore, multiple research questions were posed in the introduction which structure the overall topic into subtopics. Each subtopic includes two MD trees which involve the same three components of the design, yet in a different order. The components chosen represent increasing levels of detail in the design, which corresponds to the design map of Markus (1969) and Maver (1970) (see chapter 2.1).
The building volumes and their positions on the site embody the least detailed level of the three, followed by the cores and the construction types. The latter have the highest level of detail because they contain information about the materials, e.g. layer thicknesses and physical properties, and how these are assembled. However, is it beneficial for the performance of the design solutions to use this increasing level of detail to determine the design stages of the MD process, or does another sequence deliver better results? This is one of the main questions which form the basis for the case study.
Optimization process in Grasshopper
A sequence is defined by the number of stages and their order. Each stage can contain optimizations or preset components, such as construction types. Depending on the design sequence, optimizations need to be set up individually for each stage in which they are to be conducted. In the case study, EA are used for these optimization processes (see chapter 2.2). Grasshopper contains an inbuilt optimization component named Galapagos, which requires genomes and a fitness with an assigned goal in order to start operating. Genomes are all parameters which can be varied by Galapagos during the optimization to generate design variants. These and the fitness are connected to the component. The fitness is a numeric mathematical function which can include multiple performance criteria, such as the average solar radiation and the Surface Area to Volume Ratio (S/V). These criteria are combined by addition. Depending on the optimization goal, a positive or negative prefix is assigned to each criterion. The optimization goal can either aim to maximize or minimize the performance result calculated by the fitness function, or to achieve a fixed performance value. Consequently, if the optimization goal of the complete function is set to maximization, each criterion which is to be maximized has a positive prefix, whereas each criterion that should be minimized gets a negative prefix. The opposite is the case if the optimization goal aims for minimization. Furthermore, all criteria need to be scaled to the same numeric range to prevent some criteria from dominating others. In the case study, a range from 0 to 1 is applied. Additionally, the criteria can be weighted according to their significance for the performance. However, the difficulty of the weighting process is to identify how these criteria influence the
performance. For example, it is known that the S/V impacts the LCP, but it is not clear how it should be weighted against other criteria, like solar radiation, in the fitness function in order to represent its significance for the LCP. This problem can be traced back to the characteristics of design problems, in particular multidimensionality and interactivity (see chapter 2.1). The fitness function delivers a performance value for each generated design variant, which is crucial for optimization processes. Further adjustments of the optimization, such as termination conditions, the number of individuals per generation, etc., can be made within Galapagos (see Figure 27).
Figure 27 Galapagos optimization setup
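To make the assembly of such a fitness function concrete, a hedged sketch follows; the scaling ranges are placeholders and not values taken from the thesis:

```python
# Sketch of assembling a fitness value from several criteria: each criterion is
# scaled to 0..1 and then added with a positive or negative prefix depending on
# whether it should be minimized or maximized (overall goal here: minimization).

def scale(value, lower, upper):
    return (value - lower) / (upper - lower)

def fitness_minimize(sv, solar, shading, ranges):
    # S/V and the shading factor should be minimized (positive prefix),
    # the average solar radiation should be maximized (negative prefix).
    return (scale(sv, *ranges["sv"])
            - scale(solar, *ranges["solar"])
            + scale(shading, *ranges["shading"]))

ranges = {"sv": (0.3, 0.4), "solar": (300.0, 450.0), "shading": (0.0, 1.0)}
print(fitness_minimize(0.352, 370.0, 0.2, ranges))
```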
If an MD tree has more than one stage where optimization is involved, multiple optimization processes need to be set up as described above. Each design stage requires a new determination of the genomes, the fitness and the optimization goal since, with each stage, another part of the design is optimized. This can be exemplified by an optimization of the building volumes and their positions on the site, which requires a specific set of parameters (genomes) in order to vary these two aspects. In contrast, the positioning of the building cores, which is based on position points, is controlled by a different set of parameters. It is worth mentioning that with each parameter the variant space expands. This implies that the number of parameters as well as their numeric ranges should be kept as small as possible to decrease the
time for finding design solutions. However, this holds the risk of missing out on good solutions. Therefore, the parameters and their thresholds need to be chosen carefully. In this thesis, the optimization results of each stage are called design solutions. The design solutions in the last stage, which are the product of all stages, are called overall design solutions. All solutions provided within one stage are regarded as alternative design solutions. Figure 28 shows an optimization process conducted by Galapagos. The visualization is reduced to a minimum, which saves computation time. The yellow and red graphs display the fitness values of each generation. Marked with a blue rectangle, generations 26 to 54 show a steady improvement of the fitness values, whereas generations 18 to 24 display minimal improvement. This graph implies that the number of generations in an optimization process matters significantly. If, for example, this optimization had ended at generation 23, the design solutions with the better performance values would not have been generated. However, depending on the algorithm, only a certain number of generations can be processed in a reasonable amount of time. The planner chooses the time frame for the optimization process, which is highly dependent on the number of genomes and their numeric ranges. Even after extremely long optimization processes, which may take a few days to terminate, it cannot be guaranteed that the optimal solutions have been found and that no better ones exist in the variant space. Nevertheless, EA might find design solutions which can be regarded as good enough (satisfying) for the design problem. The green bars show the fitness values of the generation currently in process and, after termination, they display the design variants of a selected generation. This makes it possible to choose different design solutions from one optimization process if multiple designs delivered the same fitness value. By regarding design variants from this list and their fitness values, the planner may gain knowledge about the design problem.
Figure 28 Optimization process conducted by Galapagos
After a design solution is produced, the parameter values, analysis results, and further important information about the design are saved as Excel files. These files can be used as the starting data for subsequent stages and as MD tree input data for visualizing and evaluating design solutions in the context of all generated solutions.
Design sequences applied in the case study
According to the questions posed in the introduction, the case study is used to explore the impact of sequence on design solutions in an MD process. For each of the four questions, possible sequences are developed. Three design components, which is only a fraction of the number of components involved in practice-oriented design, are included in the case study. A major reason for this minimalistic approach is to enable changing the stage order easily. Furthermore, incorporating a low number of performance-influencing components facilitates discovering the reasons for possible differences between the design solutions of the MD trees. The sequences applied in the case study are the following: (1) The first topic addresses the fragmentation of a design problem into subproblems by assigning design stages for MD processes. It focuses on the influence of defining
separate stages for the design components, in comparison to combining certain components in one stage, on the resulting design solutions. Therefore, two sequences are compared, each visualized in one MD tree. In sequence 1 (MD tree 1), all three components are separated into three stages: the building volumes and their locations on the site in stage 1, the positioning of the building cores in stage 2, and the assignment of construction types in stage 3. This order corresponds to the level of detail of the components, as described previously in this chapter. However, the stage order is not solely determined by the level of detail. Logical constraints need to be considered in this example, since cores can only be placed within building volumes, which makes it impossible to optimize the core positioning without the building volumes. Because of this strong dependency of the cores on the building volumes, the latter must be defined first in this sequence. In sequence 2 (MD tree 2), the building volumes and the cores are combined in one stage. This sequence is based on a hypothesis of the author and represents typical assumptions that planners may make about design solutions. The hypothesis refers to the solar radiation on the building façades. Generally, it can be assumed that, in order to achieve acceptable solar radiation on all façades, the buildings tend to maintain large distances to all shadow-casting objects, such as the other buildings of the configuration and the surrounding buildings (see Figure 29a). This would be the case for sequence 1, where the volumes and their positions are defined in the first stage. After the volumes are placed on the site, the cores would be positioned in the areas with the least solar radiation in order to leave the spaces with good daylight accessibility to the apartment areas. In contrast, the hypothesis states that purposely created shaded areas on the building façades for the core placement can improve the solar radiation, and a higher solar radiation may lead to better LCP values. The idea behind this assumption is to decrease the distance to some buildings on the site in order to create highly shaded, yet small areas in which the cores can be positioned (see Figure 29b). By decreasing the distances between the buildings on the site, the distances to the surrounding buildings increase. This can reduce the shading on the building façades caused by the surrounding buildings, which simultaneously improves their solar radiation values. This hypothesis can only be considered if the building volumes, their positions, and the cores are optimized in
the same stage. If they are divided into separate stages, the influence of the cores cannot be taken into account during the formation and positioning of the building volumes.
Figure 29 Author's hypothesis
Resulting from this hypothesis, a fitness function is developed. Sequence 2 requires only one fitness function since both components which are to be optimized are combined in a single stage. The six construction types in stage 2 are already predefined and do not need to be optimized in this case study. The aim of the optimization is to generate design solutions with high LCPs; however, it is not possible to set the LCP as the fitness and its maximization as the fitness goal at this point. The reason for this is the second stage of sequence 2, which contains multiple construction types: the combination of one design solution with six construction types delivers six different LCP results, whereas an optimization can take into consideration only one performance value per design variant. Therefore, the planner needs to assign a fitness function which does not contain the LCP but aims for optimization oriented towards the LCP. The first step in creating such a fitness function is to define important performance criteria which influence the LCP. Moreover, it has to be possible to analyze the design variants of the regarded stage according to these performance criteria. One major performance criterion is the Surface Area to Volume Ratio (S/V), which is an indicator of the compactness of a building volume. It is calculated by dividing the area of the building envelope by the volume of the building. The S/V is an important criterion for determining the heat gain and heat loss of a building: a bigger surface area induces more heat gain in warm weather conditions and increased heat loss in cold conditions. Consequently, a minimized S/V is of great benefit to the LCP as it reduces the energy demand of the
building. In order to provide one S/V value for the fitness function, the average of all S/V values in a building configuration is calculated. Another LCP-influencing criterion is the solar radiation, as mentioned earlier. It shows to what extent the building façades are exposed to sunlight and indicates the potential amount of daylight in the interior. Daylight accessibility of the building interior is significant as it impacts the solar heat gains in cold weather conditions and the energy consumption for artificial lighting. Thus, a reasonable amount of daylight in the buildings improves the LCP. As a solar radiation value is calculated for each analysis point on the façades, an average solar radiation value needs to be identified for the complete configuration of buildings in order to create one value which can be included in the fitness function. However, the areas where the cores attach to the building façades are excluded from the average solar radiation value since they are not part of the apartment areas. This implies that, by placing the cores in shaded areas, the average solar radiation of the remaining façade areas increases. With regard to the hypothesis of the planner, a shading factor for the cores is added. This factor is included to facilitate the creation of the previously explained shaded areas in which the cores should be positioned to improve the average solar radiation of the buildings. A goal value for the average solar radiation of the façade areas where the cores are attached is set to 200 kWh/m², which corresponds to the definition of shaded areas. The factor is calculated by subtracting the average solar radiation of the core façade areas from the 200 kWh/m²; then, the sign of the result is removed and the value is scaled to the same numeric range of 0 to 1 as the other criteria. The aim of this factor is to decrease the distances between the buildings. The goal value is not set to 0 kWh/m² so that the cores do not necessarily have to be positioned at the most shaded areas of the façades. The reason for this is the author's assumption that the increased distance to the surrounding buildings can have a higher impact on the building's average solar radiation, and therefore on the LCP, than the improvement of the average solar radiation caused by the core positions. This assumption includes weighting, by determining which factor is more important than the other. Weighting is always included in a fitness function, even when the planner only scales the values to the same numeric range. All criteria involved in a fitness function have a certain influence on the LCP depending on their values, i.e. a higher value has a higher impact on the performance. Additional weighting can be applied
if the planner assumes that some criteria are more important than others. In this case study, the three criteria included in the fitness function are not additionally weighted; the weighting resulting from the scaling process seems appropriate from the author's point of view. Adding all criteria delivers the following fitness function:
Fitness = S/V + average solar radiation + shading factor

Next, the fitness goal needs to be assigned, which determines the prefixes of the criteria in the fitness function. In this case, it is set to minimization. The adjusted fitness function for sequence 2 is the following:
Fitness = S/V - average solar radiation + shading factor

This function expresses that the S/V and the shading factor are to be minimized, whereas the average solar radiation should be increased. In order to enable a comparison of the design solutions resulting from sequence 1 and sequence 2, the fitness functions partially involve the same performance criteria. Since sequence 1 includes two optimization stages, two fitness functions are required accordingly. The first stage, where the building volumes and their positions on the site are optimized, aims for minimization and has the following fitness function:
Fitness = S/V - average solar radiation of the building volumes

The average solar radiation in this function is calculated from all solar radiation values on the façades because the cores are not included in this stage. For the second optimization in sequence 1, which aims for maximization, the following fitness function is used:
Fitness = average solar radiation

At this point, the volumes and their positions are already optimized and, therefore, the S/V is not included in the fitness. The average solar radiation of the façades can be improved by positioning the cores in shaded areas. The shading factor is not relevant at this stage of the sequence because the building volumes and their positions are already determined and their distances cannot be changed. Therefore, the shading factor is not included in sequence 1. The third stage in this sequence does not require optimization because the six construction types are predefined. This stage is
the same as stage 2 in sequence 2, which makes using the LCP as the fitness function for the optimizations in the first two stages of sequence 1 impossible.
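A minimal sketch of the core shading factor used in the fitness function of sequence 2, assuming a scaling range of 0 to 400 kWh/m² (the concrete range used for scaling is not stated in the thesis):

```python
# Sketch of the core shading factor: the absolute difference between the average
# solar radiation on the core facade areas and the 200 kWh/m2 goal value, scaled
# to 0..1 like the other criteria (scaling range is a placeholder).

SHADING_GOAL = 200.0  # kWh/m2, target radiation for the core facade areas

def core_shading_factor(avg_core_radiation, scale_range=(0.0, 400.0)):
    deviation = abs(SHADING_GOAL - avg_core_radiation)
    lower, upper = scale_range
    return (deviation - lower) / (upper - lower)

print(core_shading_factor(199.06))  # close to the goal -> factor near 0
```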
(2) The second topic serves the investigation of the impact of stage order on design solutions in an MD process. The initial sequence for this approach is sequence 2 from the first topic. Choosing this sequence facilitates the change of stage order as this MD process has only two stages. In stage 1, the building volumes, their positions on the site, and the cores are optimized. Stage 2 contains six different construction types which do not require optimization. By swapping these two stages, sequence 3 (MD tree 3) is created. Its first stage holds the six construction types, whereas the optimization of the building volumes, their positions on the site, and the cores is conducted in the second stage. As the optimization in this sequence is part of the last stage, it is possible to take the LCP as the fitness function and its maximization as the fitness goal. Consequently, the main difference in the solution generation of these two sequences lies in their fitness functions: the fitness function of sequence 2 is determined by the author's assumptions about the design solutions, whereas the fitness function of sequence 3 is fully determined by the LCP calculation.
(3) The subject of the third topic is the impact of a fixed fitness goal value on design solutions in comparison to a maximization/minimization approach. In particular, the possible change of geometry is observed. This topic is based on an assumption of the author which states that a fixed LCP value (LCP(x)) can lead to different building volumes and building distances than maximizing the LCP (LCP(max)). During an optimization process towards LCP(max), the EA aims for the most compact building volumes in order to minimize the S/V. Furthermore, high average solar radiation values on the building façades are desired. This leads to large distances of each building to all shadow-casting objects, such as the other buildings on the site and the surrounding buildings, in order to minimize shading (see Figure 30b). Consequently, the optimal building form which can be generated by the GM is a cube with maximum distances to all shadow-casting objects. However,
if, for example, the goal value for the LCP is set to 0.8, some design solutions would have less compact building volumes and smaller distances to each other to achieve this value (see Figure 30a).
Figure 30 Fixed LCP (a), maximize LCP (b)
Sequence 3 serves as the basis for this investigation, since its fitness function is the LCP with LCP(max) as the fitness goal. Sequence 4 is identical to sequence 3 in all stages and in the fitness function. The only difference is the fitness goal, which is set to LCP(0.8).
(4) The last topic questions the necessity of multiple stages in the MD process. Therefore, a single-stage approach, which combines all design components in one stage, is explored within the case study. This approach is named sequence 5 (MD tree 5) and has the LCP as its fitness function and LCP(max) as the fitness goal. As explained in the first topic, an optimization can consider only one fitness value per design variant. Optimizing all building geometry components, such as the building volumes, their positions on the site and the cores, and combining them with six different construction types provides six LCP values for each design variant. In order to identify one LCP for the optimization process, the highest value is selected and the corresponding construction type determined. This sequence is compared to sequence 3 in order to explore whether multiple design stages are really needed for creating well-performing design solutions.
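A minimal sketch of this selection, with illustrative names for the variant, the construction types and the LCP calculation:

```python
# Sketch of the fitness used in sequence 5: each design variant is paired with
# all six construction types and the highest resulting LCP is taken as the
# single fitness value for the optimization.

def single_stage_fitness(variant, construction_types, calculate_lcp):
    lcps = {c: calculate_lcp(variant, c) for c in construction_types}
    best_type = max(lcps, key=lcps.get)
    return lcps[best_type], best_type
```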
4 Results
This chapter contains the generated design solutions for all sequences, visualized in the corresponding MD trees. Different sequences are compared with regard to their analysis data and geometries. In order to ensure a systematic comparison between the sequences, several evaluation factors are established. They sum up the design solution data of each individual MD tree by means of average values. A more detailed level of evaluation is provided by calculating average evaluation values for each configuration of two, three and four buildings for each sequence. This gives an insight into how the number of buildings on the site influences the design solutions. The first and most important evaluation factor in this case study is the average LCP of a complete sequence. It shows how the design solutions perform with regard to the fitness goal; a higher LCP indicates a better environmental performance. Additionally, the maximum and minimum LCP values are identified for each sequence. As mentioned above, the average LCP is also provided for the configurations of two, three and four buildings. The averages of the S/V, the solar radiation on the façades and the solar radiation on the cores are further evaluation factors for the design solutions. Another factor is the average building distance, which is defined by measuring the distance from each building to all other buildings of a design solution. In the next step, the shortest distance for each building is identified (see Figure 31). Based on these shortest distance values, the average building distance is calculated for each design solution. The average of these values is the average building distance for a complete sequence.
Figure 31 Shortest building distance (visualized as red lines)
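The average-building-distance factor described above can be sketched as follows; `distance` is a placeholder for the geometric distance measure between two buildings:

```python
# Sketch of the average-building-distance evaluation factor: for every building
# the shortest distance to any other building is identified, these shortest
# distances are averaged per design solution, and the sequence value is the
# average over all design solutions.

def average_building_distance(buildings, distance):
    shortest = []
    for i, b in enumerate(buildings):
        others = [distance(b, o) for j, o in enumerate(buildings) if j != i]
        shortest.append(min(others))
    return sum(shortest) / len(shortest)

def sequence_average_distance(design_solutions, distance):
    per_solution = [average_building_distance(b, distance) for b in design_solutions]
    return sum(per_solution) / len(per_solution)
```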
Another important point to keep in mind when evaluating design solutions is that slight deviations in the results are common for optimization processes based on EA. These deviations have to be considered before drawing conclusions about the different sequences. A standard computer was used to conduct the case study, and all stated calculation times refer to this setting. Depending on the computer's capability, these processes might take a longer or shorter amount of time to terminate.
4.1 Combining stages
In this chapter, sequence 1 and sequence 2 are compared in order to explore the impact which combining two design stages into one can have on the LCP of the design solutions. For sequence 1, nine design solutions for the building volumes and their positioning on the site are generated in the first stage, which corresponds to three solutions for each configuration of two, three and four buildings. The next stage contains three design solutions for the core positioning for each of the nine designs from the previous stage. In the last stage, each of the solutions from stage two is paired with six different construction types. This makes a total of 162 design solutions in this sequence (see the MD tree in Appendix A.1). Even when starting with a low number of items in the first stage, the multiplication with the items of the following stages results in a high number of design solutions. On the one hand, this can be regarded as extremely beneficial, as it gives the planner a variety of design options to choose from. On the other hand, the process can easily become very time-consuming if a high number of optimizations has to be conducted within an MD process. This should be kept in mind when determining the number of desired design solutions. The number of design solutions in sequence 2 is significantly lower than in sequence 1 due to the combination of two optimization stages into one. In the first stage of sequence 2, nine design solutions are produced, three for each
configuration of two, three and four buildings. They involve the building volumes, their positioning on the site and the core positioning. In combination with the six construction types in the subsequent stage, a total of 54 design solutions is produced in this MD process (see the MD tree in Appendix A.2). It might seem logical that the MD process with the higher number of design solutions takes much longer to terminate than the one with a lower number. However, both MD processes took approximately the same amount of computation time. During the process, the author observed that the number of parameters in an optimization process has a significant impact on the time needed to find optimized solutions. The reason for this is the variant space, which highly depends on the number of parameters used for optimization (see chapter 2.2). In sequence 1, the parameters for the generation of the complete geometry are divided into two sets according to the two stages where optimization is involved. This approach creates two smaller variant spaces, which benefits the optimization process. Consequently, one optimization process in sequence 1 took less time than one in sequence 2. However, the higher number of optimizations in sequence 1 led to a similar computation time for the two MD processes, which was about 3 days. The intersection-resolving loop and the solar analysis required the longest calculation times. Generating one design variant took around 3 to 50 seconds, mostly depending on the iterations of the loop. The optimization processes terminated after 40 generations without an improvement in the best LCP value. The evaluation values of both sequences are summarized in Table 4, whereas Table 5 provides a more detailed overview of the values by differentiating between the configurations of two, three and four buildings. It can be observed that the average LCP of 0.736 resulting from sequence 1 is higher than the average LCP of 0.724 from sequence 2. A large difference can be noted in the average building distances, where the value of sequence 1 is nearly double the value of sequence 2. In contrast, the average S/V values of both sequences are nearly the same, as Table 5 shows. The average solar radiation on the façades as well as the solar radiation on the cores is higher in sequence 1. Sequence 2, with an average solar radiation on the cores of 199.06 kWh/m², is closer to the solar radiation goal of 200 kWh/m². By regarding the values per building configuration of both sequences, a tendency becomes apparent: the configurations of two buildings have a better average LCP,
S/V and solar radiation values than the configurations of four buildings. All configurations have the same volume due to the fixed gross building area and the floor heights. Consequently, the bigger envelope area is the reason for the increasing S/V. Solar radiation is highly affected by shading, which is induced by the building distances. The fewer buildings are located on the site, the bigger the distances between them can become, which improves the average solar radiation. In summary, a higher S/V and a lower solar radiation lead to a worse LCP. The MD trees of both sequences (see Appendix A.1, Appendix A.2) show a pattern in the individual LCP values which is caused by the different construction types. If, for each individual design solution, all LCPs resulting from the six construction types are sorted from best to worst, a specific order can be observed: wood construction, double shell masonry, brick construction, ventilated façade, ETICS and concrete. This order is constantly the same for both sequences, which indicates that it is not dependent on the sequences. Instead, each construction type has a specific influence on the LCP.
Table 4 Evaluation values of sequence 1 and sequence 2

                             Sequence 1    Sequence 2
Average LCP [WBP]            0.736         0.724
Min LCP [WBP]                0.580         0.580
Max LCP [WBP]                0.986         0.956
Distance [m]                 8.50          4.52
S/V [m-1]                    0.352         0.352
Sol. rad. volumes [kWh/m²]   378.840       -
Sol. rad. [kWh/m²]           403.027       370.477
Sol. rad. cores [kWh/m²]     213.776       199.060
Table 5 Evaluation values per building configuration (2, 3, 4 buildings) of sequence 1 and sequence 2

Average per building configuration   Buildings   Sequence 1                    Sequence 2
LCP [WBP]                             2 / 3 / 4   0.800 / 0.727 / 0.679         0.785 / 0.723 / 0.663
Distance [m]                          2 / 3 / 4   13.43 / 6.48 / 5.56           4.00 / 4.57 / 4.99
S/V [m-1]                             2 / 3 / 4   0.313 / 0.353 / 0.390         0.315 / 0.356 / 0.384
Sol. rad. volumes [kWh/m²]            2 / 3 / 4   392.056 / 375.403 / 369.070   -
Sol. rad. [kWh/m²]                    2 / 3 / 4   423.325 / 395.886 / 389.870   393.315 / 368.330 / 349.785
Sol. rad. cores [kWh/m²]              2 / 3 / 4   208.036 / 207.617 / 225.674   191.983 / 203.094 / 202.112
4.2 Changing the stage order
This chapter focuses on the change of the stage order and the influence this has on the design solutions. Furthermore, the impact of a fixed LCP value as the fitness goal on the outcomes is explored based on sequence 3.
4.2.1 Maximizing the Life Cycle Performance
In this chapter, the influence of the stage order on the performance of design solutions in an MD process is investigated. Therefore, the results of sequences 2 and 3 are compared. As described in chapter 4.1, in the first stage of sequence 2 nine design solutions are generated, three for each configuration of two, three and four buildings. The second stage of this sequence holds six construction types, which makes a total of 54 design solutions. The results of this MD process are visualized in MD tree 2 (see Appendix A.2). Changing the stage order leads to a different number of design solutions. In sequence 3, the six construction types are the content of
the first design stage. The second stage of this sequence provides three design solutions for each of the construction types from the previous stage, which corresponds to one design solution per building configuration. In sum, 18 design solutions are generated for sequence 3 in this case study. The outcomes of sequence 3 can be viewed in MD tree 3 (see Appendix A.3). Both sequences differ significantly in the computation time required to generate all design solutions. This can be explained by the number of optimizations they include: sequence 3 has double the number of optimizations in comparison to sequence 2. However, one optimization process takes approximately the same amount of time in both sequences, about 5 to 6 hours. The deviation in the optimization times per design solution is caused by the optimization method (EA), not by the sequences. Table 6 displays the evaluation values of sequence 2 and sequence 3, whereas Table 7 gives a more detailed overview of the results with regard to the different building configurations. Comparing the average LCP values of both sequences, it can be noted that sequence 3 delivers a much higher value than sequence 2. Furthermore, the average building distance in sequence 3 is more than twice as long as in sequence 2. This can be regarded in relation to the average solar radiation on the façades and on the cores, which, accordingly, are higher in sequence 3 than in sequence 2. The building volumes in both sequences are of approximately the same compactness, which is expressed by the S/V.
Table 6 Evaluation values of sequence 2 and sequence 3

                           Sequence 2   Sequence 3
Average LCP [WBP]          0.724        0.750
Min LCP [WBP]              0.580        0.629
Max LCP [WBP]              0.956        0.972
Distance [m]               4.52         10.27
S/V [m⁻¹]                  0.352        0.352
Sol. rad. [kWh/m²]         370.477      399.170
Sol. rad. cores [kWh/m²]   199.060      213.92
Table 7 Evaluation values per building configuration (2, 3, 4 buildings) of sequence 2 and sequence 3

Average per building configuration   Buildings   Sequence 2   Sequence 3
LCP [WBP]                            2           0.785        0.819
                                     3           0.723        0.746
                                     4           0.663        0.686
Distance [m]                         2           4.00         15.66
                                     3           4.57         8.33
                                     4           4.99         6.78
S/V [m⁻¹]                            2           0.315        0.314
                                     3           0.356        0.354
                                     4           0.384        0.385
Sol. rad. [kWh/m²]                   2           393.315      423.691
                                     3           368.330      393.026
                                     4           349.785      380.803
Sol. rad. cores [kWh/m²]             2           191.983      246.041
                                     3           203.094      207.241
                                     4           202.112      188.479
4.2.2 Fixed Life Cycle Performance
In this chapter, the difference between the design solutions of sequence 3 and sequence 4 is described. Based on sequence 3, the influence of a fixed value as the performance goal, in comparison to the maximization of the LCP, is explored. Therefore, sequence 4 is created. It has the same stage number, stage order and fitness function as sequence 3, but a different performance goal (see chapter 4.2.1). Its performance goal, which due to the stage order is simultaneously the fitness goal for the optimizations, is set to LCP = 0.8. The computation time of 5 to 6 hours per optimization was approximately the same as for sequence 3. The average LCP of 0.737 in sequence 4 is smaller than the average LCP of 0.750 of sequence 3 (see Table 8; Table 9). Furthermore, the average values of the building distances, the solar radiation on the façades and the solar radiation on the cores are higher in sequence 3 than in sequence 4. However, the compactness, which is represented by the S/V, is lower in sequence 4. In this particular case, it is important to compare individual design solutions in both MD trees (see Appendix A.3; Appendix A.4). For the construction types which constantly tend to have LCP values lower than 0.8, the fixed fitness goal does not make a difference in the design solutions. This can be observed for ETICS, brick construction, concrete construction and ventilated façade. In contrast, the building volumes with the double shell masonry construction and the wood construction in sequence 4 are less compact and have smaller distances to each other. That induces lower solar radiation on the façades in sequence 4.
Table 8 Evaluation values of sequence 3 and sequence 4

                           Sequence 3   Sequence 4
Average LCP [WBP]          0.750        0.737
Min LCP [WBP]              0.629        0.608
Max LCP [WBP]              0.972        0.865
Distance [m]               10.27        6.03
S/V [m⁻¹]                  0.352        0.354
Sol. rad. [kWh/m²]         399.170      371.345
Sol. rad. cores [kWh/m²]   213.92       206.164
Table 9 Evaluation values per building configuration (2, 3, 4 buildings) of sequence 3 and sequence 4

Average per building configuration   Buildings   Sequence 3   Sequence 4
LCP [WBP]                            2           0.819        0.800
                                     3           0.746        0.728
                                     4           0.686        0.682
Distance [m]                         2           15.66        6.01
                                     3           8.33         6.28
                                     4           6.78         5.78
S/V [m⁻¹]                            2           0.314        0.319
                                     3           0.354        0.356
                                     4           0.385        0.386
Sol. rad. [kWh/m²]                   2           423.691      378.082
                                     3           393.026      361.641
                                     4           380.803      374.311
Sol. rad. cores [kWh/m²]             2           246.041      230.101
                                     3           207.241      191.252
                                     4           188.479      197.137
4.3 Single-stage optimization
In this chapter, the necessity of multiple design stages in an MD process is explored. The evaluation values from all design sequences are displayed to give an overview of how these perform in comparison to each other. That should facilitate formulating a conclusion from the various findings of the case study in regards to the research questions. For sequence 3, which contains two stages, 18 design solutions are produced (see chapter 4.2.1). As explained in chapter 3.4, sequence 5 has only one stage. This stage contains the optimization of the building volumes and the cores as well as the pairing of the outcomes with multiple construction types. In the case study, nine design solutions are generated for this sequence, which provides three design solutions per building configuration. The computation time per design solution was approximately the same as for one optimization in sequence 3. The fitness value for a design variant in the optimization process is identified by selecting only the highest LCP of all six LCP values provided by the different construction types. In that way, the LCP can be taken as the fitness function in sequence 5. The highest LCP is constantly provided by the wood construction. Therefore, all LCPs of the design solutions are calculated using this construction type. All design solutions of this sequence are visualized in MD tree 5 (see Appendix A.5). The average LCP of sequence 5 is 0.907, which is much higher than the average LCP of 0.750 of sequence 3. However, after improving sequence 5 (sequence 5*) in chapter 5.3, its average LCP, which includes all resulting LCPs from the different construction types, changed to 0.749. After calculating the new average LCP of sequence 5*, its value is almost identical to that of sequence 3. In regards to the average building distance, the values of sequence 5* are lower than those of sequence 3 (see Table 10; Table 11). The average solar radiation on the façades of both sequences is nearly the same, whereas the average solar radiation on the core façades is slightly higher in sequence 5*. However, the S/V values are identical in both sequences, as Table 11 shows. By regarding these tables, it can be noted that the average S/V, building distance, solar radiation on the façades and solar radiation on the cores of sequence 1 and sequence 5* are quite similar. The values of these two sequences differ most in regards to the average LCP. Although the LCP of sequence 5* is higher than the LCP of sequence 1, the highest LCP value of all sequences is delivered by sequence 1. The reasons for these results are discussed in chapter 5.3.
Table 10 Evaluation values of all sequences

                           Sequence 1   Sequence 2   Sequence 3   Sequence 4   Sequence 5   Sequence 5*
Average LCP [WBP]          0.736        0.724        0.750        0.737        0.907        0.749
Min LCP [WBP]              0.580        0.580        0.629        0.608        0.845        0.616
Max LCP [WBP]              0.986        0.956        0.972        0.865        0.984        0.984
Distance [m]               8.50         4.52         10.27        6.03         8.53         8.53
S/V [m⁻¹]                  0.352        0.352        0.352        0.354        0.352        0.352
Sol. rad. [kWh/m²]         403.027      370.477      399.170      371.345      396.830      396.830
Sol. rad. cores [kWh/m²]   213.776      199.060      213.92       206.164      225.299      225.299
Table 11 Evaluation values per building configuration of all sequences

Average per building config.   Build.   Sequence 1   Sequence 2   Sequence 3   Sequence 4   Sequence 5   Sequence 5*
LCP [WBP]                      2        0.800        0.785        0.819        0.800        0.969        0.819
                               3        0.727        0.723        0.746        0.728        0.900        0.742
                               4        0.679        0.663        0.686        0.682        0.851        0.686
Distance [m]                   2        13.43        4.00         15.66        6.01         15.29        15.29
                               3        6.48         4.57         8.33         6.28         5.14         5.14
                               4        5.56         4.99         6.78         5.78         5.16         5.16
S/V [m⁻¹]                      2        0.313        0.315        0.314        0.319        0.314        0.314
                               3        0.353        0.356        0.354        0.356        0.354        0.354
                               4        0.390        0.384        0.385        0.386        0.385        0.385
Sol. rad. [kWh/m²]             2        423.325      393.315      423.691      378.082      421.843      421.843
                               3        395.886      368.330      393.026      361.641      390.938      390.938
                               4        389.870      349.785      380.803      374.311      377.719      377.719
Sol. rad. cores [kWh/m²]       2        208.036      191.983      246.041      230.101      260.628      260.628
                               3        207.617      203.094      207.241      191.252      220.960      220.960
                               4        225.674      202.112      188.479      197.137      194.310      194.310
5 Discussion
In this chapter, the results of the MD processes are interpreted in regards to the research questions (see introduction). Furthermore, the reasons for the different outcomes of the sequences are discussed.
5.1 Combining stages
The comparison of sequence 1 and sequence 2 in chapter 4.1 shows that both deliver different performances and evaluation values. As the LCP is the main focus of the comparison, it is crucial to explore the reasons for the difference in the results. Sequence 1 provides a better average LCP than sequence 2, which does not match the author's expectation of creating better performing design solutions by combining two design stages in one. In order to investigate the reasons for this discrepancy, it is necessary to take a closer look at the evaluation values. As the average S/V of both sequences is nearly identical, it can be assumed that the solar radiation on the façades is the reason for the difference. The average solar radiation of sequence 1 is significantly higher than that of sequence 2. Regarding these values in conjunction with the building distances, a direct numeric relationship becomes apparent, i.e. the larger the building distance, the higher the solar radiation. By maintaining a large distance, the shading on the façades is reduced, which leads to higher solar radiation values. The reason for such a difference in the building distances is the fitness functions used in both sequences for optimization. In sequence 1, the building volumes and their positions on the site are optimized in the first stage by applying a fitness function which contains the S/V and the solar radiation. Both of these performance criteria are directly related to the aspects which are to be optimized. The S/V influences the formation of building shapes and volumes, whereas the solar radiation aims for a placement of buildings at large distances to minimize shading. This fitness function is based on fundamental knowledge of building forms. The same applies to the second optimization stage of sequence 1, where the average solar radiation is to be maximized by placing building cores in shaded areas. These two fitness functions of sequence 1 are established without further assumptions from the planner's side. In contrast, the fitness function
of sequence 2 additionally incorporates a hypothesis stated by the author (see chapter 3.4). According to this hypothesis, a shading factor is added to the fitness function, which also involves the S/V and the solar radiation. Since the shading factor is the only difference between the first fitness function in sequence 1 and the fitness function in sequence 2, it must be the reason for the different average building distances. As the goal value for the average solar radiation on the core façades (shading factor) is 200 kWh/m², the EA aims to find core positions with this particular value. It therefore places buildings at smaller distances to each other to increase shading in some areas and then positions the cores in these shaded zones. However, the idea behind the hypothesis does not match the outcome because the created shaded areas are too big in relation to the area which can be covered by the cores. This causes even more shading than in sequence 1 and consequently lowers the LCP values. By adding the performance criteria into one fitness function after scaling them to the same numeric range, all of them have a similar influence since no further weighting is involved. This means that the author's assumption is of similar significance for the LCP as the S/V and the solar radiation. After conducting the case study, it can be concluded that this weighting is not appropriate in regards to the overall goal of generating design solutions with maximal LCP. This outcome represents the difficulty that comes with fragmenting a design problem into subproblems. If the assessment method which is used for the final evaluation of the complete design (LCA) cannot be taken for optimization purposes in all stages of the MD process, other fitness functions need to be established. The design solutions are highly dependent on these, as the case study has shown. Consequently, it is the planner's task to define appropriate fitness functions. An ideal fitness function should include all performance criteria relevant to the LCP which can be applied to the design variants in the selected design stage. Additionally, appropriate weighting that matches the weighting in the final design assessment method should be assigned to the performance criteria. In regards to the example of the case study, this implies that the weighting of the three performance criteria in the fitness function of sequence 2 should be adjusted according to the impact each criterion has on the LCP. Since the shading factor was introduced by the author but is not included in the LCP, it should be removed from the fitness function.
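To make the difference between the two fitness functions tangible, the following Python sketch illustrates the principle described above. The normalization bounds and function names are hypothetical assumptions chosen for illustration only; the actual fitness functions were defined within the Grasshopper model of the case study.

```python
def normalize(value, worst, best):
    # Scale a criterion to the range 0..1, where 1 is the best value
    # (the bounds used here are hypothetical, not taken from the case study).
    return (value - worst) / (best - worst)

def fitness_sequence_1(sv, solar_facades):
    # Sequence 1, stage 1: compactness (S/V, lower is better) and solar radiation
    # on the facades (higher is better), added without further weighting.
    f_sv = normalize(sv, worst=0.45, best=0.30)               # hypothetical bounds [1/m]
    f_solar = normalize(solar_facades, worst=300, best=450)   # hypothetical bounds [kWh/m²]
    return f_sv + f_solar

def fitness_sequence_2(sv, solar_facades, solar_cores, target_cores=200.0):
    # Sequence 2 additionally contains the shading factor: the average solar
    # radiation on the core facades should approach the goal value of 200 kWh/m².
    f_shading = 1.0 - min(abs(solar_cores - target_cores) / target_cores, 1.0)
    return fitness_sequence_1(sv, solar_facades) + f_shading
```

Because all criteria end up in the same numeric range and are simply summed, the shading factor carries the same weight as the S/V and the solar radiation, which is exactly the implicit weighting identified above as inappropriate with regard to the LCP.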
Despite the higher average LCP values of sequence 1, at this point of the case study it cannot be determined whether the weighting applied in its first fitness function is appropriate in regards to the LCP. Another example of a problem that can occur when establishing fitness functions is the following: performance criteria which are relevant to the LCP and can be applied to design variants at the selected stage are not included simply because the planner is either completely unaware of them or does not know how they might influence the LCP. Such a case is exemplified by the second fitness function of sequence 1, where the solar radiation is taken as the sole performance criterion for the optimization of the core positions. A second criterion which is relevant but not included at this stage is the S/V. Since the building volumes are already formed in the first stage of sequence 1, it may seem that the S/V does not change anymore by positioning the building cores. However, in regard to the LCP calculation it is of relevance. As explained in chapter 3.2.2, in the energy demand calculation the volumes of the cores are excluded from the building volumes as they are not part of the heated building volume. Consequently, their position is relevant for the S/V. Two positions can be differentiated, the sides and the corners (see Figure 32). By positioning a core at a corner, just the roof area which is attached to the core is subtracted from the envelope area. If the core is placed at a side, besides subtracting the roof area of the core, two sides of the core are added to the envelope area. That implies a bigger envelope area in comparison to the core positioned at the corner. This difference influences the energy demand, especially when a higher number of cores is involved. If the planner is not highly familiar with the way the LCP is calculated, he might miss certain criteria in the fitness function, as this example demonstrates.
Figure 32 Cores positioned on the sides and in the corners
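The effect of the core position on the envelope area can be illustrated with a small calculation. The dimensions below are purely hypothetical; the sketch only follows the rule stated above and is not the LCA implementation used in the case study.

```python
# Hypothetical dimensions, chosen only to illustrate the rule described above.
core_width = 3.0        # [m] edge length of the core footprint
core_depth = 3.0        # [m]
building_height = 12.0  # [m]

roof_cut = core_width * core_depth  # roof area above the core, subtracted in both cases

# Core in a corner: only the roof area above the core is removed from the envelope.
delta_envelope_corner = -roof_cut                     # -9.0 m²

# Core at a side: the roof area is removed, but two faces of the core are
# added to the envelope area of the heated volume.
added_core_faces = 2 * core_width * building_height   # 72.0 m²
delta_envelope_side = -roof_cut + added_core_faces    # +63.0 m²
```

Under these assumptions, the side position results in a larger envelope area and therefore a higher energy demand, and the effect multiplies with the number of cores.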
Regarding the described difficulties of establishing fitness functions, three main problems are shown in this first topic of the case study: (1) The shading factor, which is based on the author's hypothesis, represents ideas planners may have in regards to design solutions without knowing exactly how these might affect the LCP. By trying to incorporate such ideas, it is likely that the created fitness function is much more oriented towards the idea than towards the LCP. That can lead to design solutions with worse LCP values, as the case study shows. (2) Even when the planner relies on fundamental knowledge to determine performance criteria, the problem of the weighting occurs. In order to assign weighting values, specific knowledge about the influence of the different performance criteria on the LCP is required. This knowledge is highly dependent on the design problem and on the set of performance criteria involved. Therefore, high effort is necessary to determine a weighting tailored to the LCP. The more components are involved in one design stage, the more complicated it becomes to define performance criteria and, especially, their weighting in the fitness function. (3) Furthermore, the planner should have in-depth knowledge of the LCA method in order to incorporate relevant performance criteria and identify appropriate weighting factors. Therefore, the task of formulating fitness functions according to the LCP is difficult for planners to undertake in practice, mostly due to very limited time for design exploration. Moreover, it can require scientific research to gain such specific knowledge. In regards to the research questions of the first topic (see introduction), it can be concluded that the fragmentation of a design problem into design stages affects the design solutions in multiple ways. Assigning design stages can lead to sequences which require fitness functions formulated by the planner. The problems the planner is most likely to experience with this task are described above. These problems can lead to worse LCP values, as the case study shows. Furthermore, it can be noted that it is important to consider possible relationships between the design components before separating them into different stages. The reasons for that are the characteristics of design problems, as explained in chapter 2.1. However, combining multiple design components in one stage where the LCP cannot be taken as the fitness function can make establishing a fitness function very complicated. Therefore, the approach of
separate design stages can be beneficial in terms of formulating fitness functions oriented towards the LCP.
5.2 Changing the stage order
In this chapter, the impact of changing the stage order of MD processes on the resulting design solutions is discussed. Additionally, the influence of a fixed LCP value as the fitness goal on the outcomes is described.
5.2.1 Maximizing the Life Cycle Performance
The differences between the results of sequence 2 and sequence 3 are discussed in this chapter in order to investigate the impact a change of the stage order in an MD process can have on design solutions. Both sequences differ significantly in their LCP values. The reason for that lies in their fitness functions, as these determine the optimization processes. Due to the stage order of sequence 2, the optimization processes are conducted in stage one. In stage two, their outcomes are combined with different construction types. The LCP can only be calculated after the last design stage of the MD process. Consequently, in order to calculate the LCP of a design variant from stage one, it needs to be paired with a construction type. However, as there are multiple construction types in stage two, multiple LCP values are delivered for one design variant. In optimization, only one performance value per design variant can be considered. Therefore, the planner needs to assign a fitness function which does not contain the LCP but is highly oriented towards the LCP, as explained in chapter 5.1. Formulating such a fitness function is a task of high complexity if multiple performance criteria have to be involved in the function. The two main difficulties in this process are identifying performance criteria which are relevant to the LCP and can be applied at the selected design stage, and assigning their weighting according to the impact they have on the LCP. Establishing appropriate fitness functions requires specific knowledge of the design assessment method (LCA), the design components and the
performance criteria. As the results of sequence 2 show, formulating fitness functions, especially if assumptions of the planner are involved, does not necessarily lead to satisfying design solutions in regards to the performance goal. By changing the design stages, the problem of formulating additional fitness functions can be avoided, as demonstrated in sequence 3. Stage one of sequence 3 contains the six construction types. For each of them, three designs are optimized. In contrast to sequence 2, here the LCP can be used directly as the sole criterion in the fitness function. That is possible because the optimizations are conducted in the last stage of the sequence. Assigning one construction type at the beginning of the MD process results in one LCP value per design variant, which can be used for optimization. The most efficient way of finding design solutions according to a performance goal is to use the same design assessment method for optimization and for the final evaluation of the design solution. That is the reason why sequence 3 delivers better performing design solutions. It can be concluded that the stage order in MD processes can significantly affect design solutions, as the case study shows. As fitness functions determine the direction of the optimization process, their formulation is of major importance for the outcome. The establishment of fitness functions is highly dependent on the position of the optimization stage in an MD process. If the optimization stage is followed by at least one further stage with multiple items per branch, a fitness function which does not contain the LCP has to be established. Since that is a very complex task, especially if several performance criteria have to be involved, it is possible that this fitness function does not lead to satisfying design solutions in regards to the overall performance goal of the design. However, if no further stages with multiple items per branch follow after the optimization stage, the LCP, which is the assessment criterion for the final design solutions, can also be used as the fitness function for optimization. This is the most effective way of generating design solutions according to the performance goal because no further fitness function has to be formulated by the planner. That ensures that the optimization is directly oriented towards the performance goal and is not manipulated by other fitness functions. Therefore, the author recommends conducting optimization in the last stage of an MD process.
5.2.2 Fixed Life Cycle Performance
In the previous chapter 5.2.1, different fitness functions caused by the stage order were considered. Their impact on design solutions was exemplified by the comparison of sequence 2 and sequence 3. Based on that, the influence of the performance goal is explored in this chapter. After investigating the impact of fitness functions on design solutions, it is necessary to understand how solutions are influenced by the performance goal. Therefore, sequence 4, which is identical to sequence 3 in its stage number, stage order and fitness function, is created. Both sequences differ solely in their performance goal. In sequence 3, the performance goal is the maximization of the LCP, whereas in sequence 4, it is a fixed value of LCP = 0.8. As the results described in chapter 4.2.2 show, some of the design solutions in sequence 4 have significantly worse evaluation values than the corresponding solutions in sequence 3. In contrast, some design solutions do not indicate any change. The reason for this behavior is the fixed LCP value. Each construction type has a specific influence on the LCP (see chapter 4.1). That means that pairing one building geometry with multiple construction types delivers different LCP values. Since the construction types are predefined in this case study, their influence on the LCP cannot be altered. Consequently, in order to achieve an LCP of 0.8, only the building geometries can be varied. Because some combinations of geometries with certain construction types can provide LCPs higher than 0.8, the geometries are adjusted within the optimization processes of sequence 4 to achieve an LCP of 0.8. That causes the observed difference in some of the design solutions between sequence 3 and sequence 4. In particular, the building geometries become less compact, and their distances to each other and to the surrounding buildings decrease, leading to less solar radiation on the building façades. Additionally, the core positions are adjusted. In contrast, those combinations of geometries with certain construction types that deliver LCPs lower than 0.8 aim to achieve the LCP of 0.8 by maximization. That equals the maximization approach of sequence 3 until the value of 0.8 is reached. Because of that, no major difference between some of the design solutions of both sequences can be identified.
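One plausible way to express the two performance goals for an evolutionary algorithm is sketched below. The exact formulation used by the EA in the case study may differ; the fixed-target variant shown here is an assumption for illustration.

```python
TARGET_LCP = 0.8

def fitness_maximize(lcp):
    # Sequence 3: the LCP itself is the fitness value, higher is better.
    return lcp

def fitness_fixed_target(lcp, target=TARGET_LCP):
    # Sequence 4: variants are rewarded for approaching the target value.
    # Geometries that could exceed 0.8 are pushed back towards it, while
    # geometries below 0.8 are still driven upwards, as described above.
    return -abs(lcp - target)

# A variant with LCP 0.86 now ranks worse than one with LCP 0.80:
print(fitness_fixed_target(0.86) < fitness_fixed_target(0.80))  # True
```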
These findings indicate that the performance goal has a significant impact on design solutions. For that reason, it is important to consider the possible design solutions resulting from different performance goals in order to identify a performance goal that serves the design problem best.
5.3 Single-stage optimization
In this chapter, all design stages are combined in sequence 5, which has only one stage. The design solutions are compared to the solutions of multiple sequences in order to investigate the necessity of design stages. As stated in chapter 5.1, optimization can only be conducted if one LCP is provided as the fitness value for each design variant. However, as there are six construction types which are to be combined with the generated geometries, multiple LCPs are provided for one design variant. In order to avoid this problem, the highest of all six LCP values is selected as the fitness value for each design variant. That corresponds to the design goal of finding the best performing design solutions. In this way, the LCP can simultaneously serve as the fitness function, which is the most effective approach for finding design solutions according to a performance goal (see chapter 5.2.1). As each construction type has a specific influence on the LCP, an order from the construction type that provides the best LCP to the one with the worst LCP is identified in chapter 4.1. Wood construction consistently delivers the highest LCP and is therefore automatically selected as the fitness value for each design variant in the optimization processes of sequence 5. Consequently, the LCP of all design solutions is calculated for wood construction. Therefore, the average LCP of sequence 5 is much higher in comparison to sequence 3, which involves all construction types in its average LCP. However, since the LCA algorithm calculates the LCP values simultaneously for all construction types, the other LCPs can be displayed after the optimization as additional design solutions.
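A minimal sketch of this selection rule and of the later expansion to sequence 5* is given below. The function and variable names are hypothetical; in the case study, the selection happens inside the Grasshopper definition that wraps the LCA tool.

```python
def fitness_sequence_5(lcp_per_construction):
    # Sequence 5: of the six LCPs computed for one geometry variant, only the
    # highest one (in this case study always the wood construction) is the fitness.
    return max(lcp_per_construction.values())

def expand_to_sequence_5_star(optimized_variants):
    # Sequence 5*: after the optimization has terminated, the LCPs of all
    # construction types are kept as additional design solutions.
    solutions = []
    for variant in optimized_variants:
        for construction, lcp in variant["lcp_per_construction"].items():
            solutions.append({"geometry": variant["geometry"],
                              "construction": construction,
                              "lcp": lcp})
    return solutions

# Hypothetical LCP values for a single geometry variant:
example = {"wood": 0.92, "double shell masonry": 0.85, "brick": 0.81,
           "ventilated facade": 0.78, "ETICS": 0.74, "concrete": 0.70}
print(fitness_sequence_5(example))  # 0.92 (wood construction)

variants = [{"geometry": "variant_01", "lcp_per_construction": example}]
print(len(expand_to_sequence_5_star(variants)))  # 6 design solutions per geometry
```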
In consideration of the specific influence of each construction type on the LCP, it can be stated that it does not make any difference for the optimization which one of the provided LCPs is selected. In other words, the generated geometry would be the same regardless of the construction type used for the optimization. This can be explained as follows: the optimization process aims for maximization of the LCP. Therefore, the building geometries are varied in order to find optimized design solutions. The fitness value for a design variant serves the purpose of evaluating how the generated design variant performs in comparison to the other design variants in order to select the best variants for the next generation. That means that the actual numeric value of the fitness is not important, but rather the comparison to other fitness values and to the fitness goal. In the MD processes of the case study, a constant order of the construction types according to the LCP they provide was observed. This can be regarded as an indicator of homogeneous behavior. In this context, it means that none of the construction types are contradicting; rather, they improve or get worse simultaneously. The homogeneous behavior of the construction types leads to the finding that any of the LCP values provided by the different construction types can be selected to serve as the fitness value. The choice of the LCP does not affect the optimization process. After an optimization process has terminated, all LCPs resulting from the different construction types can be displayed and serve as additional design solutions. Due to these findings, all the other LCP values can be added in sequence 5*. The corresponding MD tree 5* (see Appendix A.6) visualizes the design solutions for all construction types. It can be observed that the resulting MD tree 5* has the same structure as MD tree 2 from sequence 2. That means it is the same sequence, and the LCP could have been used in sequence 2 for optimization. Instead, another fitness function was applied, which delivered unsatisfactory design solutions. Consequently, it can be stated that, had this knowledge of the influence of the construction types on the optimization processes been available before conducting the MD process of sequence 2, the struggle of defining an appropriate fitness function for it could have been avoided. This can be regarded as an explicit example of how detailed knowledge of MD processes and assessment methods (LCA) can improve the solution finding. By regarding the average LCP of sequence 5* and sequence 3, it becomes apparent that both have almost the same value. Moreover, the other evaluation values in both sequences are very similar. In conjunction with the explained findings, it can be noted that applying two stages in sequence 3, where first the construction type is
selected and from there the optimization of the building geometries is conducted, is not necessary. Since the choice of a construction type does not affect the optimization process, the first stage in sequence 3 can be eliminated. That would result in a sequence identical to sequence 5*. Furthermore, this approach is able to produce many more design solutions in a shorter period of time. Regarding the number of design solutions produced in sequence 3 (18 solutions) in comparison to sequence 5* (54 solutions), it becomes obvious that the latter sequence is much more beneficial for design exploration purposes. Applying the new findings to sequence 1 means that the LCP could have been used as the fitness function in stage 2. It can be assumed that the results would have been better because the fitness function that was used for optimization in stage 2 does not include the S/V in addition to the solar radiation. Using the LCP for optimization in this stage would have included this performance criterion with the exact weighting. The comparison between sequence 1 and sequence 5* shows that both deliver almost identical evaluation values in regards to the average S/V, building distance and solar radiation on the façades. They differ most in the average solar radiation on the cores and in the average LCP. These values support the assumption that the missing performance criterion S/V, which would influence the position of the building cores, is the reason for the difference in the average LCP of the sequences. That implies that the fitness function in the first stage of sequence 1 is appropriate for an optimization oriented towards the LCP. In order to support this assumption, a new sequence 1 with the applied findings should be created. This can be subject to further research.
In regards to the findings discussed in this chapter, it can be concluded that having specific knowledge of MD processes and assessment methods is crucial for efficient design solution generation. Questioning certain factors can save computation time and deliver better performing design solutions. The previous assumption that the LCP cannot be used as the fitness function if multiple construction types are contained in the subsequent stage was proven to be false. This leads to a reconsideration of most previously discussed sequences. The last topic of the research poses the question whether design stages are necessary for solution generation. In an ideal scenario, the complete design would be optimized in one process. However, that is not possible due to the vast variant space which expands exponentially when parameters are added. According to the current state of the art, finding optimized solutions in such spaces is not possible within a reasonable time frame. The more parameters are added, the bigger the variant space becomes, which results in a lower chance of finding optimized design solutions. In sequence 5, only a few building components were optimized at the same time. This already took a very long computation time, and by taking a closer look at the design solutions, it can be assumed that there is much more room for improvement. This assumption can be supported by the LCP values of sequence 4, where the performance goal is a fixed LCP of 0.8. In the corresponding MD tree 4 (see Appendix A.4), all LCP values are displayed and it is clear that most of them do not reach the desired LCP of 0.8, although for some construction types it would be possible. It can be assumed that the optimization time was too short for the exploration of such a vast variant space. In regards to design problems that involve more components, it is likely that the optimization would take extremely long, and even then the planner cannot be sure that optimized solutions have been found. This leads to the conclusion that design stages are helpful for fragmenting the complete variant space into smaller variant spaces to facilitate the search process.
6 Conclusions
To investigate the impact of the stage order on design solutions in MD processes, a case study was conducted by the author. In this case study, multiple different sequences were applied for solution generation. The results were displayed in corresponding MD trees to enable a visual comparison of the design outcomes. Additional evaluation values were used to provide a numeric overview of the design solutions in each individual sequence. This workflow enabled the identification of research findings. It is worth mentioning that all statements concerning the research questions are made solely based on this case study and do not claim general validity. It is possible that other examples of architectural design problems deliver different results in regards to the posed research questions. In regards to all findings of the case study, it can be concluded that defining an appropriate sequence for the MD process is highly dependent on the individual design problem. The sequence needs to be tailored towards the performance goal while taking into account all components and their relationships. It is important that the characteristics of design problems, e.g. multi-dimensionality and Many-to-Many relations, are considered to avoid the neglect of compromise solutions.
Consequently, in an ideal scenario, all components would be optimized in one stage. That may be an effective sequence for small design problems which involve a very small number of parameters. However, it does not apply to design problems with a higher number of parameters. The reason for that is the variant space, which expands with each added parameter and thus decreases the chance of finding satisfying design solutions in a reasonable timeframe. Therefore, it can be stated that compromises have to be made between the number of stages in a sequence and the size of the variant space of each stage. Conducting the optimization in the last stage of the sequence is beneficial because at this point the performance goal can be selected as the fitness function. In this way, the optimization is directly oriented towards the performance goal, which is most efficient for finding design solutions according to this goal. Additionally, the stage order may induce the need to formulate fitness functions which do not contain the performance goal as a performance criterion but aim for an optimization oriented towards it. Establishing those fitness functions becomes more complex with each criterion since a weighting factor has to be provided for each of them. In order to assign appropriate weighting factors, the
planner needs to have detailed knowledge about MD processes and the assessment method used for evaluating design solutions. Gaining such knowledge can take much time because each design problem requires specific knowledge of the components involved. These can differ significantly from task to task. If the design problem is very complex, case studies may be needed in order to investigate how the design components affect each other and what impact they have on the performance goal. The lack of such knowledge can lead to much worse design solutions. However, in practice, deadlines are short and designers cannot afford to undertake research for each design problem. This makes the investigation of fundamental aspects of MD processes, such as the stage order, an important task in order to provide basic guidelines which, ideally, are applicable to many design problems.
7 Outlook
In the course of the case study, findings were identified for each subtopic. These have an impact on the assumptions stated earlier in the process, which induced a reconsideration of the results from previous sequences. In reference to the theoretical background of design processes (Lawson, 2006), this workflow can be perceived as a return loop. By reevaluating outcomes, findings were identified which benefit the MD process. Following this iterative process, much more knowledge can be gained about MD processes. It can be assumed that choosing other examples for the design problems, components, assessment methods, etc. may lead to different findings. Another approach for further studies can be a higher level of detail in the design, which comes with a higher number of components. Furthermore, studies exploring how the LCP can be influenced by certain building components may be helpful for developing a better understanding of the weighting factors. Further research on MD processes is highly recommended by the author in order to provide the necessary knowledge for a computational application of the MD process. Especially for the implementation in practice, recommendations for assigning design stages and formulating fitness functions would be beneficial to planners who use the MD process for design exploration. It is important to mention that computational tools which help to conduct those studies need to be developed. Ideally, such a tool would run all optimizations without intervention by the planner, e.g. to start new optimizations. Otherwise, completing an MD process can easily become tedious, even with computational aid. Faster algorithms can shorten the calculation time significantly. One such algorithm is the LCA tool used in the case study of this thesis. Loops within optimization processes can slow down the whole MD process considerably, depending on the number of iterations for each design variant. Therefore, they should be avoided if possible.
References
Akin, O. & Sen, R., 1996. Navigation within a structured search space in layout problems. Environment and Planning B: Planning and Design, 23(4), pp. 421-442. Altavilla, F., Vicari, B., Hensen, J. L. & Filippi, M., 2004. Simulation tools for building energy design. Proceedings Ph.D. symposium Modeling and Simulation for Environmental Engineering, pp. 39-46. Archer, L. B., 1964. Systematic Method for Designers. London: Council for Industrial Design. Bäck, T., Fogel, D. B. & Michalewicz, T., 2000. Evolutionary Computation 1: Basic Algorithms and Operators. New York: Taylor & Francis. Baitz, M. et al., 2012. LCA's theory and practice: like ebony and ivory living in perfect harmony?. The International Journal of Life Cycle Assessment, 18(1), pp. 5-13. BBSR, 2011. ökobau.dat, Bundesministerium für Umwelt, Naturschutz, Bau und Reaktorsicherheit. Beckstein, C., 2000. Suche. In: G. R. C. &. S. J. Görz, ed. Handbuch der künstlichen Intelligenz. Munich-Vienna: Oldenburg Verlag, pp. 125-151. Bogenstätter, U., 2000. Prediction and Optimization of Life-cycle Costs in Early Design. Building Research & Information, 28(5-6), pp. 376-386. Breuker, J. & Van de Velde, W., 1994. CommonKADS Library for Expertise Modeling. Reuseable Problem Solving Components. Amsterdam: IOS Press. Caldas, L. G. & Norford, L. K., 2002. A design optimization tool based on a genetic algorithm, Cambridge: Elsevier. CEN/TC 350, 2012b. EN 15804 Sustainability of
construction works. - Environmental product declaration - Core rules for the product category of construction products. Coello, C. A. & Christiansen, A. D., 1999. Moses: a Multiobjective Optimization Tool for Engineering Design Optimization Tool. Engineering Optimization, 31(3), pp. 337-368. Conti, Z. X., Sheperd, P. & Richens, P., 2015. Multi-objective Optimization of Building Geometry for Energy Consumption and View Quality, Vienna: eCAADe. Deb, K., 2001. Multi-Objective Optimization using Evolutionary Algorithms. New York: John Wiley & Sons. DGNB, 2016. DGNB system. [Online] Available at: http://www.dgnb-system.de/en/ [Accessed 7 October 2016].
Dillenburger, B., Braach, M. & Hovestadt, L., 2009. Building design as an
individual compromise between qualities and costs. A general approach for automated building generation under permanent cost and quality control, Zurich: CAADFutures. DIN, 2011. DIN V 18599-2:2011 Energetische Bewertung von Gebäuden -
Berechnung des Nutz-, End- und Primärenergiebedarfs für Heizung, Kühlung, Lüftung, Trinkwasser und Beleuchtung - Teil 2: Nutzenergiebedarf für Heizen und Kühlen von Gebäudezonen. Donath, D., König, R. & Petzold, F., 2012. KREMLAS - Entwicklung einer kreativen
evolutionären Entwurfsmethode für Layoutprobleme in Architektur und Städtebau, Weimar: Verlag der Bauhaus-Universität Weimar. Dörner, D., 1976. Problemlösen als Informationsverarbeitung. Stuttgart: Kohlhammer. Dörner, D., 1980. On the Difficulties People Have in Dealing With Complexity. Simulation & Gaming, 11(1), pp. 87-106. Eastman, C., 1969. Cognitive processes and ill-defined problems: a case study from design. In: D. &. N. L. Walker, ed. Proceedings of the Joint International Conference on Artificial Intelligence. Bedford: The Mitre Corporation, pp. 669-690. Eckert, C., Kelly, I. & Stacey, M., 1999. Interactive Generative Systems for Conceptual Design: An Empirical Perspective. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 13(4), pp. 303-320. EnergyPlus, 2016. EnergyPlus. [Online] Available at: https://energyplus.net/weather [Accessed 6 October 2016]. EU, 2010. DIRECTIVE 2010/31/EU on the energy performance of buildings. [Online] Available at: http://eur-lex.europa.eu/legalcontent/EN/TXT/PDF/?uri=CELEX:32010L0031&from=EN [Accessed 31 October 2016]. Flemming, U., 1990. Knowledge Representation and Acquisition in the LOOS System. Building and Environment, 25(3), pp. 209-219. Gerber, D. J. & Lin, S.-H. E., 2013. Designing in complexity: Simulation,
integration, and multidisciplinary design optimization for architecture. SAGE. Gero, J. S., 1990. Design Prototypes: A Knowledge Representation Schema for Design. AI Magazine, 11(4), pp. 26-36. Gero, J. S. & Kannengießer, U., 2004. The situated function-behaviour-structure framework. Design Studies, 25(4), pp. 373-391.
Goel, V. & Pirolli, P., 1992. The structure of Design Problem Spaces. Cognitive Science, 16(3), pp. 395-429. Goldberg, D. E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Boston: Addison-Wesley. Hegger, M., Fuchs, M., Stark, T. & Zeumer, M., 2007. Energie Atlas: Nachhaltige Architektur. Darmstadt: Birkhäuser. Hegger, M. et al., 2012. EcoEasy-Abschlussbericht. Entwicklung einer Methode zur
Bewertung der potentiellen Umweltwirkungen von Gebäuden in frühen Planungspasen, Darmstadt: Technische Universtät Darmstadt. Hegger, M., Söffker, G., Thrift, P. & Seidel, P., 2007. Energie Atlas: Nachhaltige Architektur. Munich: Birkhäuser. Heicke, R., Roesrath, B. & Sauer, H., 2007. Stadtentwicklung Berlin. [Online] Available at: http://www.stadtentwicklung.berlin.de/planen/fnp/pix/erlaeuterungen_fnp/ Contents_and_Conventional_Signs_col.pdf [Accessed 3 October 2016]. Holland, J., 1973. Genetic Algorithm and the Optimal Allocations of Trials. SIAM Journal of Computing, Volume 2(2), pp. 88-105. Hollberg, A., 2016. A parametric method for building design optimization based on Life Cycle Assessment. Dissertation, Weimar: Bauhaus University Weimar. Hollberg, A. et al., 2016. A method for evaluating the environmental life cycle potential of building geometry, Zurich: Sustainable Built Environment (SBE) regional conference. Hollberg, A. & Ruth, J., 2016. LCA in architectural design - a parametric approach. The International Journal of Life Cycle Assessment, 21(7), pp. 943-960. Hopfe, C. J. & Hensen, J., 2009. Experiences testing enhanced building performance simulation prototypes in potential user group. Glasgow, Proceedings of the 11th IBPSA Conference. ISO 14040, 2009. DIN EN ISO 14040:2009 Umweltmanagement – Ökobilanz – Grundsätze und Rahmenbedingungen, Berlin: DIN Deutsches Institut für Normung E.V. ISO, 2009. DIN EN ISO 14040:2009 Umweltmanagement - Ökobilanz Grundsätze und Ramenbedingungen. Johnson, R. et al., 1984. Glazing energy performance and design optimization with daylighting. Energy and Buildings, 6(4), pp. 305-317. Kägi, T. et al., 2015. Session ´Midpoint, endpoint or single score for decisionmaking?` - SETAC Europe 25th Annual Meeting. International Journal Life Cycle Assessment.
Kalay, Y., 1987. Computability of Designs. New York: John Wiley & Sons. Katz, I., 1994. Coping with the Complexity of Design: Avoiding Conflicts and Prioritizing Constraints. In: A. &. E. K. Ram, ed. Proceedings of the 16th annual conference of the Cognitive Science Society. Hillsdale, New Jersey: Lawrence Erlbaum Associates, pp. 485-489. Klöpffer, W. & Grahl, B., 2009. Ökobilanz (LCA): Ein Leitfaden für Ausbildung und Beruf. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA. Konis, K., Gamas, A. & Kensek, K., 2016. Passive performance and building form: An optimization framework for early-stage design support. Solar Energy, Volume 125, pp. 161-179. Lawson, B., 1994. Design in mind. Oxford: Butterworth Architecture. Lawson, B., 2004. What designers know. Oxford: Elsevier Architectural Press. Lawson, B., 2006. How Designers Think. The Design Process Demystified. Fourth ed. Oxford: Architectural Press. Lichtenheld, T., Hollberg, A. & Klüber, N., 2015. Echtzeitenergieanalyse für den parametrischen Gebäudeentwurf. Kaiserslautern, Technische Universität Kaiserslautern. Luger, G. F., 2001. Künstliche Intelligenz – Strategien zur Lösung komplexer Probleme. Munich: Pearson Studium. Markus, T. A., 1969. Design methods in Architecture. London: Lund Humphries. Matchett, E., 1968. Control of thought in creative work, 14(4). Maver, T. W., 1970. Emerging Methods in Environmental Design and Planning. Cambridge: MIT Press. McNeel, 2016. Food For Rhino. Add-ons for Grasshopper. [Online] Available at: http://www.food4rhino.com/grasshopper-addons [Accessed 30 October 2016]. McQuain, W. D., 2011. Computer Science at Virginia Tech: Introduction Problem Solving in Computer Science. [Online] Available at: http://courses.cs.vt.edu/cs2104/Fall12/notes/T16_Algorithms.pdf [Accessed 25 October 2016]. Miller, G., 1956. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. The Psychological Review, 63(2), pp. 81-97. Mitchell, W. J., 1975. The theoretical foundation of computer-aided architectural design. Environment and Planning B: Planning and Design. 2(2) ed. Mitchell, W. J., 1977. Computer-Aided Architectural Design. New York: Van Nostrand Reinhold.
Mitchell, W. J., 1990. The logic of architecture – Design, Computation and Cognition. Cambridge: The MIT Press. Newell, A. & Simon, H., 1972. Human Information Processing. New Jersey: Prentice-Hall. Oxman, R., 2009. Performative design: a performance-based model of digital architectural design. Environment and Planning B: Planning and Design, 36(6), pp. 1026-1037. Page, J. K., 1963. Conference on Design Methods. Oxford, Pergamon. Paulson Jr., B., 1976. Designing to reduce Construction Costs. Journal of the Construction Division, 102(4), pp. 587-592. Pohlheim, H., 2000. Evolutionäre Algorithmen. Berlin: Springer-Verlag. Radford, A. D. & Gero, J. S., 1985. Multicriteria optimization in architectural design. In: J. Gero, ed. Design Optimization. New York: Academic Press, pp. 229-258. Radford, A. D. & Gero, J. S., 1988. Design by Optimization in Architecture, Building, and Construction. New York: Van Nostrand Reinhold Company. Rechenberg, I., 1994. Evolutionsstrategie '94. Frommann Holzboog. Rittel, H. & Webber, M., 1973. Dilemmas in a General Theory of Planning. Policy Sciences, 4(2), pp. 155-169. Rittel, H. W. J., 1992. Planen Entwerfen Design. Ausgewählte Schriften zu Theorie und Methodik. Stuttgart: W. Kohlhammer GmbH. Robinson, D. & Stone, A., 2004. Irradiation modeling made simple: the cumulative sky approach and its applications. Eindhoven, Netherlands, The 21st Conference on Passive and Low Energy Architecture. Roudsari, M. S., Pak, M., Smith, A. & Gill, G., 2013. Ladybug: A parametric
Environmental Plugin for Grasshopper to help designers create an environmentally-conscious design. Chambéry, Building Simulation (IBPSA), pp. 3128-3135. Rowe, P. G., 1987. Design Thinking. Cambridge: MIT Press. Schneider, C., 2011. Steuerung der Nachhaltigkeit im Planungs- und Realisierungsprozess von Büro- und Verwaltungsgebäuden. Darmstadt: TU Darmstadt. Schneider, S., 2016. Sichtbarkeitsbasierte Raumerzeugung. Automatisierte
Erzeugung räumlicher Konfigurationen in Architektur und Städtebau auf Basis sichtbarkeitsbasierter Raumpräsentationen. Dissertation, Weimar: Bauhaus University Weimar. Simon, H., 1973. The structure of ill-structured problems. Artificial Intelligence, Volume 4, pp. 181-201.
Simon, H. A., 1996. The sciences of the artificial. 3rd ed. Cambridge: MIT Press. Stachowiak, H., 1973. Allgemeine Modelltheorie. Vienna: Springer. Steinmann, F., 1997. Modellbildung und computergestütztes Modellieren in frühen Phasen des architektonischen Entwurfs. Weimar: Bauhaus-University Weimar. Stover, C. & Weisstein, E. W., 2016. MathWorld- A Wolfram Web Resource. [Online] Available at: http://mathworld.wolfram.com/ParametricEquations.html [Accessed 30 October 2016]. Strube, G., Habel, C., Konienczny, L. & Hemforth, B., 2000. Handbuch der künstlichen Intelligenz. Munich-Vienna: Oldenburg Verlag. Szalay, Z. & Zöld, Z., 2007. What is missing from the concept of the new European Building Directive?. Building and Environment, Volume 42, p. 1761–1769. Tank, W., 1992. Modellierung von Expertise über Konfigurierungsaufgaben. Dissertationen zur künstlichen Intelligenz. Akademische Verlagsgesellschaft AKA. Teresko, J., 1993. Parametric Technology Corp.: Changing the way Products are Designed. Industry Week. UNEP-SBCI, 2009. Buildings and Climate Change Summary for Decision-Makers. Paris: Sustainable Buildings & Climate Initiative. van Dijk, H., Spiekman, M. & de Wilde, P., 2006. A monthly method for calculating energy performance in the context of European building regulations. Ninth International IBPSA Conference Montréal, pp. 255-262. Wang, W., Rivard, H. & Zmeureanu, R., 2006. Floor shape optimization for green building design. Advanced Engineering Informatics, Volume 20, pp. 363–378. Weißenberger, M., Jensch, W. & Lang, W., 2014. The convergence of life cycle assessment and nearly zero-energy buildings: The case of Germany. Energy & Buildings, Volume 76, pp. 551-557. Wittstock, B., Albrecht, S., Colodel, C. & Lindner, J., 2009. Gebäude aus Lebenszyklusperspektive - Ökobilanzen im Bauwesen. Bauphysik, Volume 31, pp. 9-17. Woodbury, R. F., 2010. Elements of Parametric Design. New York: Routledge. Wright, J. A., Brownlee, A. E. I., Mourshed, M. M. & Wang, M., 2013. Multiobjective optimization of cellular fenestration by an evolutionary algorithm. Journal of Building Performance Simulation, Volume 7, pp. 1-19.
Appendix
All LCP values are color-coded by applying a color range from red to green to their branches. The thresholds of this range are determined by the lowest and the highest LCP value amongst all MD trees, which facilitates the comparison of all trees. Additionally, the worst (red) and the best (green) LCP values are highlighted for each MD tree individually.
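A minimal sketch of this color coding, assuming a simple linear interpolation between red and green (the gradient actually used in the Grasshopper visualization may differ):

```python
def lcp_to_color(lcp, lcp_min, lcp_max):
    # Map an LCP value to an RGB tuple between red (worst) and green (best).
    # lcp_min and lcp_max are the global extremes over all MD trees, so that
    # the same LCP receives the same color in every tree.
    t = (lcp - lcp_min) / (lcp_max - lcp_min)
    t = max(0.0, min(1.0, t))
    return (int(round(255 * (1.0 - t))), int(round(255 * t)), 0)

# Using the extreme values reported in Table 10 (0.580 and 0.986):
print(lcp_to_color(0.986, 0.580, 0.986))  # (0, 255, 0) -> best branch, green
print(lcp_to_color(0.580, 0.580, 0.986))  # (255, 0, 0) -> worst branch, red
```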
A. MD trees (visualized with building envelopes)
Appendix A.1. MD tree 1 (visualized with building envelopes)
Appendix A.2. MD tree 2 (visualized with building envelopes)
Appendix A.3. MD tree 3 (visualized with building envelopes)
Appendix A.4. MD tree 4 (visualized with building envelopes)
Appendix A.5. MD tree 5 (visualized with building envelopes)
Appendix A.6. MD tree 5* with all LCPs (visualized with building envelopes)
B. MD trees (visualized with solar radiation on site)
Appendix B.1. MD tree 1 (visualized with solar radiation on the site)
Appendix B.2. MD tree 2 (visualized with solar radiation on the site)
Appendix B.3. MD tree 3 (visualized with solar radiation on the site)
Appendix B.4. MD tree 4 (visualized with solar radiation on the site)
Appendix B.5. MD tree 5* (visualized with solar radiation on the site)
C. Grasshopper Algorithms
These algorithms were used as the GM, the EM and for the visualization of the resulting data in the MD trees.
Appendix C.1. Generation mechanism (GM)
Appendix C.2. LCA tool by Hollberg (2016)
Appendix C.3. Automated MD tree visualization
Author's Declaration
I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including the research conducted by me on the presented topic. All references and tools used for this work are listed. This thesis has not been submitted to any other university or examination authority. I confirm that, to the best of my knowledge, I have told the truth and have not concealed anything.
__________________________ Weimar, 18. November 2016