Parametric Algorithmic Genetic Computational Digital Design: a discussion about tools. Ben Regnier, M.DesS 2011
“To our minds architecture and physical planning lack adequate theoretical foundations. It is no longer enough to rely on intuitive skill acquired through personal experience... skill must become scientific.” This call to arms, issued by Lionel March, Marcial Echenique, and Peter Dickens in the May 1971 issue of Architectural Design, was the result of what had already been more than a decade’s research on the incorporation of computational methods into design. Working at Cambridge University, they sought a rigorous and scientific basis for the architectural project, using such concepts as graph theory and game theory. Forty years later, the goal of a computationally engaged architecture is certainly a reality, although the structuralist methodology above has been supplanted by one more interested in either the production of novel form or greater control over the production of ever more complex buildings. What has also been lost is a critical evaluation of the milieu of computational or digital design as a whole. As the use of computers is now nearly ubiquitous, the ways they are used, and the extension of the design product to include digital tools and methods, have escaped critical attention. Architects not only constantly “re-invent the wheel” as digital practitioners, but are somewhat naive to the ecology of intellectual property and open-source code sharing that has
developed in the software programming community, and that might also serve as a guidebook for the future of the digital project. Design practitioners who work in a wholly (or mostly) digital world, such as graphic designers, photographers, and web designers, are already comfortable with the idea of workflow as a product, and have increasingly sophisticated ways of sharing or selling their methods as a way of augmenting a practice. Likewise, consulting firms have highly developed methods of maintaining, protecting, and sharing data internally that would also benefit an architectural practice. The following is a series of questions about digital practice that serves as a roadmap for how architects might reexamine their methods and output. The questions were posed to a wide array of practitioners and academics, and some extrapolation has been provided to suggest ways that a practice might change to take advantage of its current situation.
Interviewees Benjamin Ball, Ball-Nogues: “Ball Nogues is a creative enterprise working in design and fabrication led by Benjamin Ball and Gaston Nogues. The Studio produces architecture, art and industrial objects.” www.ball-nogues.com
Daniel Davis: Daniel is currently completing a PhD at the Spatial Information Architecture Laboratory (SIAL) at the Royal Melbourne Institute of Technology. www.nzarchitecture.com
Brian DeYoung, SOM: Brian is the current BIM Manager at the Chicago office of Skidmore, Owings & Merrill, where he oversees the implementation of Building Information Modeling (BIM) tools in project teams. www.som.com
Panagiotis Michelatos: Panagiotis is a lecturer in computational design at Harvard University. He is currently working as a computational design researcher for the London-based structural engineering firm AKT. Along with colleague Sawako Kaijima he has also developed a range of software applications for the intuitive and creative use of structural engineering, visualization, and other optimization methods in design. sawapan.eu
Steve Sanderson, CASE: “CASE is a Building Information Modeling (BIM) and integrated practice consultancy based in New York City. We help building design professionals, contractors and owners identify, implement and manage the technologies and business practices that enable more effective coordination, communication and collaboration.” www.case-inc.com
Issue 1: Standards
To what degree or in what ways can computational strategies be cataloged or standardized?

Benjamin Ball “Each project is so different, the workflows reflect this. [However,] we do have standard workflows for renderings and portfolios.”

Daniel Davis “I don’t think strategies can be generalized, because that strategy is too closely related to the project - almost always a unique strategy will be selected for each project.”

Brian DeYoung “We do not have any overall standardized workflow... However, we do have some [custom tools] that are frequently used and [standardize] certain aspects of the workflow.”

Panagiotis Michelatos “In my opinion engagement and promotion is more important than standardization for the adoption of new approaches as there is significant resistance from people within a corporate environment to use new techniques.”

Steve Sanderson “A smart consultant is far more flexible/adaptive than any tool/technique that we could develop… architects have a tendency to undervalue people.”
One of the primary myths surrounding digital practice is the idea of a “magic button” that will automate complex design tasks. The reality is that, while digital methods allow for greater control over complex design problems and open up new possibilities for architectural experimentation, the strategies, if not the tools themselves, frequently cannot be easily replicated from project to project. Where digital practice can be standardized is in the development and cataloguing of components that might be recombined within a larger design project. There are, as can be expected, different degrees of standardization: smaller “thought leader” firms rarely have standard methods, while larger offices generally have “tracks” that projects can follow, and have additional overhead to spend on custom tools and on the training and manuals necessary to keep those tools usable. Many successful methods of standardizing have been borrowed from the world of software programming - libraries of functions and objects, used with a versioning system and collective documentation. However, these methods are still unique to each office that incorporates them - there are few shared libraries for architects.
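As a concrete sketch of what such a shared library component might look like, consider the following purely illustrative Python module (the module name, function, and parameters are invented here, not drawn from any office’s actual library). The point is the packaging: a small geometric routine, documented and platform-neutral, that a versioned office library could accumulate and that project scripts then recombine rather than rewrite.

```python
"""panels.py -- a hypothetical reusable office-library component.

Kept in version control, documented, and free of any CAD-specific API,
so it can be recombined across projects and scripting environments.
"""

def grid_points(width, height, nu, nv):
    """Return rows of (u, v) coordinates subdividing a width x height rectangle.

    nu, nv -- number of divisions in each direction (at least 1 each).
    """
    if nu < 1 or nv < 1:
        raise ValueError("need at least one division in each direction")
    return [
        [(width * i / nu, height * j / nv) for i in range(nu + 1)]
        for j in range(nv + 1)
    ]

# A project-specific script then recombines such components: here, a
# hypothetical 10m x 4m facade bay split into a 5 x 2 panel grid.
rows = grid_points(10.0, 4.0, 5, 2)
print(len(rows), len(rows[0]))  # 3 rows of 6 points each
```

The component carries no project knowledge; the 10m bay lives only in the calling script, which is what lets the function survive from one project to the next.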
Issue 2: Discovery & Sharing
How do you discover new methods? How do you share your knowledge with others?

Benjamin Ball “New procedures are constantly in development... we attempt to demonstrate the innovation to generate interest from the community by hinting at its details, but we don’t do a step by step anymore (we used to).”

Daniel Davis “Open source repositories such as open processing and even wikipedia... adapting [computer programming texts] for architecture... I share my work on my blog, I post scripts in forums and code exchanges, I created a website (parametricmodel.com) to help people share code, I tutor and run workshops where I share my design methods.”

Brian DeYoung “Our specialized tools are installed automatically on each computer... we have [experts] in the office who [develop] scripts for teams.”

Panagiotis Michelatos “In order to share methods with the wider office as well as with clients and other consultants it was necessary to devise interfaces and encapsulate new functionality into easy to use tools... [this still] always [requires] significant effort from our side in training and support.”

Steve Sanderson “We try to expose ourselves to as many platforms/projects as possible and give formal presentations of projects internally that focus a lot on technique/approach. For larger clients we have recommended internal communities and wikis.”
In a world of internet forums, Google and Wikipedia, discovering new information is often far easier than sharing what you have done. Design technology communities are fairly open and vocal, and most of the deeper ideas used in architectural computation have their roots in mathematical concepts that have been around for decades and have been documented extensively. Sharing, even within a small office, proves to be a far more difficult problem. People often need help overcoming the friction involved in learning new methods. Introducing a complicated new tool for standard use in an office can require so much setup and training that it is often easier to train a small group of people initially and let the information disseminate over time. Sharing with outside parties has its own difficulties. Giving away code or tools can suggest that the person sharing is also willing to give up large amounts of their time as a teacher and technical-support consultant. Specific walkthroughs for a method of use can often lead to those exact steps being replicated, instead of a more intuitive and creative use of the tool. Often outside sharing takes place on dedicated forums and within skilled communities, which helps to distribute the costs of sharing among all of the users in the community.
Issue 3: The Right Place for Computation

Should a design start with a computational concept in mind?

Benjamin Ball “Almost nothing gets designed independent of the ‘design of its production.’ The constraints of production are always established prior to the development of specific form.”

Daniel Davis “Because it is difficult to communicate a problem in its entirety to a computer... my realistic position is that you use computation like a consultant [and] refer to its opinion on tricky problems you do not know the answer to.”

Brian DeYoung “The key is always to not let the software drive a project, but let the design drive the project. Therefore software, especially in the beginning of a project, is not a consideration.”

Panagiotis Michelatos “Computational methods are injected into different stages of design as partial solutions to partial problems. I still don’t see the possibility or even the usefulness for an all integrated computational approach.”

Steve Sanderson “The ideal scenario is that there is an understanding of geometric, material and assembly constraints that are considered as part of the design. Minimizing the amount of post-rationalization on any project is ideal.”
Another myth to be confronted involves the idea that computational design necessarily means using computation to drive the form and the meaning of the design. In reality, even a brief survey of practitioners reveals that there are many ways that computational design and fabrication methods can be incorporated. For smaller projects and installations, particularly those interwoven with a method of fabrication, starting with a computational concept is vital. However, larger architectural projects can start with concepts that have nothing to do with computation, and even when fabrication or computational design has been incorporated into the initial concept, the final design product has many other criteria that are difficult or inefficient to incorporate into a digital model. The most flexible and realistic way to incorporate computation in architecture is to assign it to specific tasks with easily definable inputs and outputs. However, particularly when fabrication is key, it can be very helpful (if not vital) to have a basic knowledge of the relevant geometric or material concepts in the initial design process.
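The “consultant” role several interviewees describe — computation confined to a task with definable inputs and outputs — can be illustrated with a hypothetical sketch. The function below (an invented example, not any firm’s tool) answers one bounded question: how far out of plane is the fourth corner of a quadrilateral panel? Points go in, a single number comes out, and nothing about the larger design needs to be communicated to the computer.

```python
import math

def planarity_deviation(p0, p1, p2, p3):
    """Distance of corner p3 from the plane through p0, p1, p2.

    A self-contained computational task: 3D coordinate tuples in,
    one scalar out, independent of any modelling environment.
    Assumes p0, p1, p2 are not collinear.
    """
    # Two edge vectors spanning the plane of the first three corners
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # Plane normal via the cross product
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    # Project the fourth corner onto the unit normal
    w = [p3[i] - p0[i] for i in range(3)]
    return abs(sum(n[i] * w[i] for i in range(3))) / length

# A flat quad deviates by zero; lifting one corner 0.1 units registers as 0.1.
flat = planarity_deviation((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0))
warped = planarity_deviation((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.1))
print(flat, round(warped, 6))  # 0.0 0.1
```

Because its interface is just numbers, a design team can consult such a routine at any stage without letting the software drive the project, which is exactly the division of labour DeYoung and Davis describe.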
Issue 4: Intellectual Property
How and when should architects share or protect their digital work?

Benjamin Ball “The challenge with IP is enforcement. By the time you pay an attorney to write a patent somebody could have ripped you off and monetized your invention... So, it’s a very tricky issue that must be considered on a case by case basis.”

Daniel Davis “I see it as mutually beneficial to share intellectual property tied up in computational methods... an architect’s competitive advantage shouldn’t be their ability to create tools, it should be their ability to design.”

Brian DeYoung “[SOM presents] finished projects rather than methods and how a project was developed. SOM does tend to see things like scripts and API tools as intellectual property and protects them as well as the design objects themselves.”

Panagiotis Michelatos “I would rather keep open source limited to foundation projects like operating systems, browsers or generic utility libraries for another reason. [Open source] makes it impossible for young independent developers to make any money out of their work.”

Steve Sanderson “It really comes down to a strategic decision from the person that authored the tool… [architectural scripts and plugins] are normally so one-off or easy to reproduce that they cannot be expanded to a broader community of users.”
Intellectual property proves to be one of the more contentious issues in the world of computation in architecture. Digital output is often not cost-effective to protect: it rarely makes a convincing value proposition to enough potential users to justify the overhead of protecting and commodifying it, either due to the specificity of its use or to the complexity of producing and supporting documented, stable software. On the other hand, sharing your work through standard open-source methods can open the design result to simple copying by naive users, and can also lead to issues with providing free support on something that has been given away. The result of this tension is that a lot of digital tools are saved and forgotten, leading to an incredible amount of replicated work between practitioners due to a lack of communication. While the decision to share or protect digital property is ultimately that of its creator, the sharing of general concepts, mathematical algorithms, geometric knowledge and pseudocode is an easy and helpful way to increase the general body of public knowledge and also to gain status as a knowledgeable practitioner in a relatively small field. In other words, mentoring is almost always a good idea.
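What does sharing “geometric knowledge and pseudocode” rather than a finished tool look like in practice? A long-published algorithm such as Chaikin’s corner-cutting curve subdivision is a good example: written against plain coordinate tuples rather than any vendor API, it can be posted to a forum and ported to any scripting environment without giving away project-specific work or obligating the author to support a product. (The code below is an illustrative rendering of that public algorithm, not any practitioner’s shared script.)

```python
def chaikin(points, iterations=1):
    """Chaikin's corner-cutting subdivision of an open 2D polyline.

    Each pass replaces every segment with its 1/4 and 3/4 points,
    smoothing the polyline; endpoints are kept so the curve stays open.
    """
    for _ in range(iterations):
        refined = [points[0]]
        for a, b in zip(points, points[1:]):
            # Cut each corner at the quarter points of the segment
            refined.append((0.75 * a[0] + 0.25 * b[0], 0.75 * a[1] + 0.25 * b[1]))
            refined.append((0.25 * a[0] + 0.75 * b[0], 0.25 * a[1] + 0.75 * b[1]))
        refined.append(points[-1])
        points = refined
    return points

poly = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(len(chaikin(poly, 1)))  # 2 segments -> 4 cut points plus 2 endpoints: 6
```

Sharing at this level of abstraction is cheap for the author and genuinely useful to the recipient, which is why it sidesteps most of the support and enforcement problems described above.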
Interview: Benjamin Ball

Does your office have a standardized workflow, or repeatedly used/adapted scripts or plugins, to produce your designs, documentation, or presentations?

No, we do not. We do have software design processes (plugins etc.) that we continue to adapt and evolve, but there is not an idea of standardization. Each project is so different, the workflows reflect this. We do have standard workflows for renderings and portfolios.

How are the methods above shared within your office? How are new ones developed or discovered?

The first part I’ve already answered. New procedures are constantly in development. We “design the production” of a project prior to designing a form. I don’t know “how” they get developed. That is like asking a guitarist “how” he plays guitar or asking a writer “how” he writes. It is a creative process... I can say that our work always starts with a hypothesis about how to integrate concepts, computation and fabrication. All tools are produced with this integration in mind. Occasionally we will develop a script to help us with visualizations for a particular project prior to precisely knowing how the project gets produced. However, we don’t waste a lot of time on these scripts; they are provisional.

Do projects at your office tend to start with an idea of how they will be implemented or generated in mind, or is the form or design independent, and methods figured out after the fact?

Implementation is the project. We almost never imagine a form independent of it. Almost nothing gets designed independent of the “design of its production.” The constraints of production are always established prior to the development of specific form.
Does your office share your methods or tools with the rest of the architectural community? In what ways (blog, studios, lectures, etc.)? What is your opinion about intellectual property as it pertains to scripts or other tools rather than the design object itself?

We share a little, but we don’t give away the meat of our ideas. There is a limit to how much we will share. We attempt to demonstrate the innovation to generate interest from the community by hinting at its details, but we don’t do a step-by-step anymore (we used to). It is all fun and games until Gensler appropriates your idea, monetizes it, and doesn’t acknowledge you (this happens a lot). We give enough away to be credited with an idea, but we hold on to a lot of trade secrets. I think that scripts, tools and software are generally a clearer form of intellectual property than “designs”. This notion is institutionalized by patent law. A design patent is practically useless, while a utility patent can be quite effective. The challenge with IP is enforcement. By the time you pay an attorney to write a patent, somebody could have ripped you off and monetized your invention. Then you have to pay a bunch of money to sue them. By the time you get out of court, the invention might be old technology or useless, and your competitor has already made his money. So it’s a very tricky issue that must be considered on a case-by-case basis. There is no general rule to abide by.
Interview: Daniel Davis

To what degree or in what ways do you think that computational strategies can be cataloged or standardized to be used between projects or shared among different firms/practitioners?

I don’t think strategies can be generalised, because that strategy is too closely related to the project - almost always a unique strategy will be selected for each project. You could ask the same question of computer science, and the answer would be that even on very similar projects (like creating two different blogs), individual components might be shared (such as a database connection library) and the architecture might be shared (model-view-controller), but it is unlikely the actual strategy for putting the site together is the same. So I think you will get people sharing and refining individual components, and, if projects get complicated enough, architectures for linking these components, but the strategy for approaching a problem (and the solution) will always belong to the individual firm/practitioner.

What are the common ways that you discover / develop new computational strategies?

- Reading about how people created specific projects (e.g. Chris Williams writing about the British Museum roof).
- Reading books/websites on computer programming and adapting this for architecture.
- Open-source repositories such as open processing; even wikipedia has some good sample algorithms.

Do you think it is preferable to start designing with an idea of how a form might be generated or fabricated, or to instead adapt a computational strategy to a design developed through other criteria? Ideally, how and when would you see computational methods integrated into a design project?

I think you should start with an idea of how to generate and fabricate an idea. In my most idealist moment, I would say that computation is a central tool in working this out and should be engaged from the very beginning, with computation helping the design emerge from the constraints of the project, guided by the architect. The antithesis of this is someone like Gehry, who starts with a form and then computationally finds a way to manufacture the form.

However, from my limited experience working in practice, I understand this is an extreme and unworkable position. Primarily because it is difficult to communicate a problem in its entirety to a computer (some parts of the design space can not be explored through computation), and because for many decisions it is easier/faster/better to rely on the designer’s intuition. So my realistic position is that you use computation like a consultant: you refer to its opinion on tricky problems you do not know the answer to.

How and in what ways -blog, studios, lectures, etc- have you previously (as a sole practitioner and as a part of larger firms) shared your methods or tools with the rest of the architectural community? What is your opinion about intellectual property as it pertains to scripts or other tools rather than the design object itself?

I share my work on my blog, I post scripts in forums and code exchanges, I created a website (parametricmodel.com) to help people share code, I tutor and run workshops where I share my design methods. In general, I see it as mutually beneficial to share intellectual property tied up in computational methods. I understand that some people may not share for commercial reasons, but I come from a generation that sees such a stance as immoral - an architect’s competitive advantage shouldn’t be their ability to create tools, it should be their ability to design. I think a lot of people who do not understand the process are scared someone will just rebuild their project on a different site; however, people are not really interested in the full project, they are interested in the little tools and workflows that make up the project. To me the biggest barrier to sharing is not the loss of intellectual property, but the time it takes to generalise an idea so that it is useful to another designer.
Interview: Brian DeYoung

Does your office have a standardized workflow, or repeatedly used/adapted scripts, plugins, or other tools, to produce your designs, documentation, or presentations?

We do not have any overall standardized workflow. Each project takes a different path and may use different tools and software. However, we do have some API tools for Revit and scripts for AutoCAD that are frequently used and standardize certain aspects of the workflow. In AutoCAD this has been formulated into SOM toolbars that make these tools available to all users. In Revit we are just beginning to establish tools like this. Eventually, we hope to have specialized SOM tools that will be available in Revit just like we currently have in AutoCAD.

Do projects at your office tend to start with an idea of how they will be implemented or generated in mind, or is the form or design intent independent, and methods figured out after the fact?

The design and form come first and are primary. There is no consideration of how the design will be produced or what software will be used in the beginning. The individual teams are free to use whatever they feel comfortable with. The key is always to not let the software drive a project, but let the design drive the project. Therefore software, especially in the beginning of a project, is not a consideration. However, in our effort to convert the office to Revit, there is now some advance planning/preparation for projects that are destined for Revit.

How are the methods above shared within your office? How are new ones developed or discovered?

Our specialized tools are installed automatically on each computer in the office when Revit and/or AutoCAD is installed. For AutoCAD we have an AutoLISP expert in the office who develops scripts for teams. On the Revit side we have a Revit Task Force in the office that is leading the effort to expand the use of Revit in the office, establish standards, generate API tools, etc. When the Task Force develops an API tool it is then distributed to all Revit users and automatically installed with new installations of the program.

Does your office share your methods or tools with the rest of the architectural community? In what ways (blog, studios, lectures, etc.)? What is your opinion about intellectual property as it pertains to scripts or other tools rather than the design object itself?

SOM does not typically share specific methods or tools. There are people in the office who lecture, and SOM is on various social networks. There are also SOM employees who teach in local universities. What is discussed in these venues is very general and more of a presentation of finished projects rather than methods and how a project was developed. SOM does tend to see things like scripts and API tools as intellectual property and protects them as well as the design objects themselves.
Interview: Panagiotis Michelatos

To what degree or in what ways do you think that computational strategies can be catalogued or standardized to be used between projects or shared among different firms?

Our approach to the development of computational solutions has been to always try to enrich our centralized library of functions and classes. Technically this is done by using a versioning system and organizing classes into libraries that you can include in future projects. This is the way solutions are shared between the few people who can program. However, in order to share methods with the wider office, as well as with clients and other consultants, it was necessary to devise interfaces and encapsulate new functionality into easy-to-use tools. There we standardized the interface, but it would always require significant effort from our side in training and support; otherwise most people would not engage with unfamiliar tools and techniques in a professional environment. In my opinion engagement and promotion are more important than standardization for the adoption of new approaches, as there is significant resistance from people within a corporate environment to using new techniques.

Regarding sharing between firms: at the moment this is still difficult, even if we assume that firms are willing to share their research with their potential competitors in the first place. There are still significant interoperability issues between the different software used by architects and consultants that make even conventional file exchange difficult. Also, a lot of the code written is project-specific, which makes sharing unnecessary, or is written on a specific platform, or uses APIs tied to the software that each company is using, so even if it is written in the same language it is not usable unless you are also using the same software. For simple programs it would be easy to share knowledge using pseudocode, or in any case to port them from one language to another.
But for more complex programs, the amount of effort needed to actually make the code shareable in a useful way [good and explanatory comments, documentation, thorough debugging] makes sharing difficult. Having said that, there is significant knowledge and even code sharing taking place, but not as an official strategy of any firm. It rather happens in an informal manner between the computational groups of different architecture and consulting firms. As the pool of people working in the field is rather small, people know each other personally, and that makes sharing on a friendly basis easy. In addition, the same people who work in computational groups are the people who publish papers at the relevant conferences, maintain online tutorials, or contribute to online forums and code sample repositories. The reason this informal sharing is possible has to do primarily with the fact that firms in the construction industry do not in general operate like software companies in terms of copyright and intellectual property. When it comes to computational techniques, the management and directors of these firms have a loose approach, either out of indifference [software is not their source of income] or ignorance [the generation gap in terms of computational knowledge]. So in conclusion there are no official strategies, but rather a decentralized network of informal sharing supported by the web.

What are the common ways that you discover / develop new computational strategies?

In a corporate environment research does not happen in a vacuum. Instead you must look out for opportunities in the real projects that come through. Everything starts by recognizing some partial design problem, usually in a specific project, and building computational solutions and interfaces for it. However, we always try to implement new functionality in ways that make as much of it as possible reusable. So we prefer to generalize the specific problem to the point that it will enrich our library with new classes and functions. We usually try to abstract the specific design problem into numerical models with sufficient degrees of freedom and extensibility. This is because in the early stages of design we have to foresee the radical changes that take place, and for which parametric models are in general inadequate. There are also persistent problems of automation and software interoperability that you tackle over time, and these are the ones that have a direct and measurable impact on productivity. Although a lot of these developments are not interesting from a research point of view, they are the most visible and often the most appreciated. So seeking out opportunities to lubricate the system, and solving often trivial but tedious problems, enables all the other more interesting research to take place.

Do you think it is preferable to start designing with an idea of how a form might be generated or fabricated, or to instead adapt a computational strategy to a design developed through other criteria? Ideally, how and when would you see computational methods integrated into a design project?

Computational methods are injected into different stages of design as partial solutions to partial problems. I still don’t see the possibility or even the usefulness of an all-integrated computational approach. Having said that, the employment of computational methods is contingent. Unlike “free form” modelling, which is anything but free, computational methods can be made “form free” in that they can address design at different levels of abstraction and resolution. So the use of computational methods is invariant or indifferent to whether the driving force is form, fabrication, environmental factors or conceptualization.

How and in what ways -blog, studios, lectures, etc- have you previously (as a sole practitioner and as a part of larger firms) shared your methods or tools with the rest of the architectural community? What is your opinion about intellectual property as it pertains to scripts or other tools rather than the design object itself?
We have shared the standalone executables we have developed as freeware, and have made extensive use of conferences and publications. In addition, we have often offered consultancy and code samples on an informal basis to people we have met through conferences, to people who happened to be working on the same project at different firms, or to people who contacted us via our web site. However, sharing is not without problems and responsibilities. Just putting something online does not constitute sharing; you need to invest precious time in writing documentation and developing interfaces, as well as engaging with users. I am not particularly in favour of open source when it comes to specialized software, and that is because of the unpleasant experiences of people in the field. It is fine to share code snippets and provide advice at an informal level with people who will go the extra length to actually try to understand what you are doing and how. However, as soon as you put something in the public domain you often get contacted by very impolite people who want you to make things for them in the name of one-way “sharing”. I would rather keep open source limited to foundation projects like operating systems, browsers, or generic utility libraries for another reason: it makes it impossible for young independent developers to make any money out of their work, and in the end it only benefits the larger corporations [either software companies or construction firms]. Another thing that is problematic in terms of intellectual property is the attitude, when it comes to digital tools, of not having to give credit to the authors. For example, there are extensive guidelines and strict rules in the academic community for referencing authors of written material. However, when it comes to software, people think it is OK not to mention sources and authors.
Anecdotally, we had the experience of people basing whole workshops or seminars around our software, decorating their websites and papers with images that are clearly captured from our tools, and still failing to mention the authors even once.
Regarding intellectual property of computational techniques in the construction industry, there are no clear rules. Due to the aforementioned generation and knowledge gap between management and the people in computational groups, it is difficult to even discuss such issues, as traditionally there was only limited reusability and reproducibility of designs, which are in general heavily conditioned by their context. The fact that we can now abstract and encapsulate methods in code and software and make them transferable opens up these questions. For example, contracts in the construction industry often have clauses that make everything you produce the absolute intellectual property of the company, without adequate compensation for the employee. This is fine for traditional drawings but problematic for software. Finally there is of course the question of authorship. Software tools do condition and influence the final outcomes of design, sometimes in very obvious ways [Rhino architecture, Flash animations], because their interfaces guide the designer’s perception and intuition. Sometimes it is even more pronounced, as when a script that has a particular formal expression goes viral and is applied to different projects by different firms. This is often the case in competition phases, when people don’t have much time to develop particular solutions. Despite this very obvious influence on both the form and the logic of the outcome, designers are in denial about the shared authorship of results. I prefer the way Hannah Arendt approached the problem of homo faber in The Human Condition, where she described the artefact that homo faber constructs as an endless chain of means and ends.
Interview: Steve Sanderson

Does your office have standardized workflows, or repeatedly used/adapted scripts, plugins, or other tools, or are strategies developed independently for each client?

We try to standardize to the extent possible, but it is quite difficult to reuse anything directly. We’ll often have bits and pieces of content that we’ve developed find an application on other projects, but generally each project/situation is unique enough to require a new approach. More than specific tools/techniques, we find the experience of individuals in our office to be far more valuable… a smart consultant is far more flexible/adaptive than any tool/technique that we could develop… architects have a tendency to undervalue people.

How are the methods above shared within your office? How are new ones developed or discovered?

We have a small office (7 people total), so the issue of knowledge sharing is pretty minimal for us. Informally, we use internal emails to send out links to new tools/techniques that we find. We are active in design technology communities, particularly through DesignReform, and are also on the lookout for new approaches by participating in early release programs. We try to expose ourselves to as many platforms/projects as possible and give formal presentations of projects internally that focus a lot on technique/approach. For larger clients, we have recommended internal communities and wikis, such as MindTouch, to try to achieve similar effects.

How often does CASE get the chance to contribute in the early stages of a design to help optimize it for digital modeling and fabrication? Do you feel that parametric methods are best brought to a design after the fact, or that forms, when possible, should be sought with a strategy of how they might be generated digitally already in mind?
Not as often as we would like… architects also tend not to want to bring in external help until things are bad, which is usually too late to have a significant impact on the way something was originated. A parametric approach does not necessarily mean that something is more optimized for modeling and fabrication… the ideal scenario is that there is an understanding of geometric, material, and assembly constraints that are considered as part of the design. Minimizing the amount of post-rationalization on any project is ideal.

Does your office share your methods or tools with the rest of the architectural community? In what ways (blog, studios, lectures, etc.)? What is your opinion about intellectual property as it pertains to scripts or other tools, rather than the design object itself?

DesignReform was expressly created with this intent in mind. We feel that several prominent architects have built careers through the exploitation of software techniques (Maya comes to mind). We are interested in democratizing these techniques so that the critical discussion regarding the role of computation in design can shift from how to why. Aside from DesignReform (a blog), we have also recently started a technology challenge site called DesignByMany: http://designbymany.com/ This is meant to provide a venue to expand this beyond our internal studies and provide a place for others to contribute to this mission. We also give lectures when invited, and several of us teach seminars focused on design technology at local schools.

Regarding intellectual property, it really comes down to a strategic decision by the person who authored the tool… Does access to this tool give the author a distinct competitive advantage that is difficult or costly to replicate? If not, does it provide something of value to others? If so, is it valuable enough that others would pay for it? If so, would it be more valuable if it were opened up to a broader development community? Generally, the tools/scripts produced internally by architects don’t pass the first two questions, because they are normally so one-off or easy to reproduce that they cannot be expanded to a broader community of users.
Some examples of this would be the custom tools created by Foster’s SMG (not made external, for competitive advantage reasons) vs. Evolute tools (generally useful and for sale, although with a very small audience) vs. the numerous tools built for Grasshopper by end-users (generally free, but also of limited application).