![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/e3f973e509d585b99304946f6c2eaeb2.jpg?width=720&quality=85%2C50)
3. AI transforming the design process
Once considered merely a design tool, AI systems are now involved in the decision-making process alongside architects, modifying the role of the architect.
3.1. Architectural drawing recognition and generation
Fig 2. Workflow of a typical GAN
A successful research experiment conducted by Weixin Huang (Tsinghua University) and Hao Zheng (University of Pennsylvania) explored the use of Generative Adversarial Networks (GANs) for recognizing and generating architectural drawings. A GAN is a machine learning framework designed to learn from a dataset and generate output with similar or identical characteristics (Huang & Zheng, 2018). The experiment showed that artificial intelligence can play a significant role not only in repetitive work but also in creative work, and it points to the possibility that human design ability could be greatly expanded when combined with artificial intelligence.
The selected case provides a strong example of how machine learning can be integrated into architectural design to produce automated plans from given inputs. It shows how the technological barrier can be pushed, one step at a time, bringing us closer to an automated design process. The study supports the research question by demonstrating that current technology is powerful enough to automate one of the most basic elements of architecture: the design of a floor plan. The experimenters trained a Generative Adversarial Network by assigning a color code to each element of an architectural plan; for example, walkways were assigned red, bedrooms light green, windows dark red, doors dark green, and so on. They then fed several architectural plans to the machine, which learned to produce a color-coded plan from each drawing. Next, they reversed the process, training the machine to generate architectural plans from color codes. As the system evolved, the generated plans became more vivid. Studies like these encourage scientists and designers to push the boundaries of technology and architecture, paving the way for further advances in this field and others (Huang & Zheng, 2018).
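To make the recognition-and-generation idea concrete, the sketch below shows a single pix2pix-style conditional GAN training step on paired drawing / color-map images, written in PyTorch. The tiny generator and discriminator, the loss weights, and the random dummy tensors are illustrative assumptions; they do not reproduce Huang and Zheng's actual networks or data.

```python
# Minimal pix2pix-style training step for paired drawing <-> color-map images.
# Architectures are deliberately tiny and illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3-channel architectural drawing to a 3-channel color map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a (drawing, color map) pair looks like real training data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, drawing, color_map):
        return self.net(torch.cat([drawing, color_map], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy paired batch; real training would use drawing / color-map image pairs.
drawing = torch.rand(1, 3, 64, 64)
real_map = torch.rand(1, 3, 64, 64) * 2 - 1   # scaled to the generator's tanh range

# Discriminator step: real pairs should score high, generated pairs low.
fake_map = G(drawing).detach()
pred_real, pred_fake = D(drawing, real_map), D(drawing, fake_map)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
         bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target map.
fake_map = G(drawing)
pred = D(drawing, fake_map)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake_map, real_map)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice the generator would be a much deeper encoder-decoder and the pairs would come from the color-coded plans described above; the same setup with inputs and targets swapped covers the reversed, plan-from-color-map direction.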
Fig 3. Color codes given to each part of the floor plan
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/93f2b44ccd4a59e6203a0e049dbeb2ae.jpg?width=720&quality=85%2C50)
Fig 4. Color map generated from floor plan
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/e55a8d943d565d47c5fcd1fcd953a7be.jpg?width=720&quality=85%2C50)
Fig 5. Floor plan generated from colored map
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/eb25584a44c027123ade1494937913d1.jpg?width=720&quality=85%2C50)
Hence, it is now possible for artificial intelligence systems to generate architectural floor plans from color codes. If the technology moves one step further and the computer can generate these color codes automatically, with minimal human input, then the floor plan generation process could be automated.
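As a minimal sketch of what such a color code could look like in practice, the snippet below maps each function to a fixed RGB value and rasterizes a few labeled rectangles into a color map. The specific RGB values and the render_color_map helper are hypothetical; only the function/color pairings follow the examples mentioned earlier.

```python
# Render a labeled layout as a machine-readable color map.
import numpy as np

COLOR_CODE = {
    "walkway": (255, 0, 0),      # red
    "bedroom": (144, 238, 144),  # light green
    "window":  (139, 0, 0),      # dark red
    "door":    (0, 100, 0),      # dark green
}

def render_color_map(height, width, regions):
    """regions: list of (function, (row0, row1, col0, col1)) rectangles."""
    img = np.full((height, width, 3), 255, dtype=np.uint8)  # white background
    for function, (r0, r1, c0, c1) in regions:
        img[r0:r1, c0:c1] = COLOR_CODE[function]
    return img

plan = render_color_map(100, 160, [
    ("bedroom", (10, 60, 10, 70)),
    ("walkway", (60, 70, 10, 150)),
    ("door",    (58, 62, 30, 36)),
])
```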
3.2. From plans to architectural elements
AI-generated plans can be extruded into entire 3D buildings using systems from the gaming industry that are used to create whole cities. Such a system works by creating tree maps that divide and subdivide an area into different functions. A tree diagram maps out the functions required in a project, and this diagram is then converted into a tree map (Marson & Musse, 2010).
Fig 6. Tree diagram (a) and related tree map (b)
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/7121dff0e7193523cc63d705ac653a87.jpg?width=720&quality=85%2C50)
The tree maps are gathered together with construction parameters and layout constraints, and this information is fed into a floor plan generator that uses the tree maps as a baseline design to generate floor plans. Connections between rooms and openings are created automatically based on each space; this is achieved by training the system on thousands of floor plans, from which the AI system learns and applies that knowledge to solve new problems (Marson & Musse, 2010).
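The following sketch illustrates the tree-map step with a simple slice-and-dice subdivision: a small function tree with area weights is recursively split across a rectangular footprint, alternating split direction at each level. The tree, the weights, and the footprint dimensions are invented for illustration and are far simpler than the cited generator, which also handles construction parameters and layout constraints.

```python
def weight_of(node):
    """Total area weight of a node (its leaf weight, or the sum of its children)."""
    payload = node[1]
    return sum(weight_of(c) for c in payload) if isinstance(payload, list) else payload

def treemap(rect, node, horizontal=True):
    """Recursively split rect = (x, y, w, h) among the node's children,
    alternating split direction at each level (slice-and-dice layout)."""
    name, payload = node
    if not isinstance(payload, list):
        return [(name, rect)]                      # leaf: one room rectangle
    x, y, w, h = rect
    total, offset, rooms = weight_of(node), 0.0, []
    for child in payload:
        share = weight_of(child) / total
        if horizontal:
            sub = (x + offset * w, y, share * w, h)
        else:
            sub = (x, y + offset * h, w, share * h)
        rooms += treemap(sub, child, not horizontal)
        offset += share
    return rooms

# Made-up function tree: ("name", weight) leaves, ("name", [children]) branches.
house = ("house", [
    ("private", [("bedroom", 16.0), ("bathroom", 6.0)]),
    ("public",  [("living room", 20.0), ("kitchen", 10.0)]),
])
layout = treemap((0.0, 0.0, 12.0, 8.0), house)     # 12 m x 8 m footprint
for room, (x, y, w, h) in layout:
    print(f"{room:12s} at ({x:.1f}, {y:.1f}) size {w:.1f} x {h:.1f} m")
```

Each leaf of the tree ends up with one rectangle of the footprint, which is the kind of baseline a downstream floor plan generator can then refine.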
Fig 7. Floor plan generation process
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/6a51f073ae5e02da7566ea2d034253f5.jpg?width=720&quality=85%2C50)
The system then uses the generated 2D plan to create walls and openings. Walls are extruded from the plan to a height set by the designer, and openings for doors and windows are created based on algorithms. The system is also able to place basic furniture according to the function of each space (Marson & Musse, 2010).
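A rough sketch of the extrusion step is shown below: an axis-aligned room rectangle is turned into four vertical wall quads at a designer-set height. Openings and furniture placement are omitted, and the extrude_room helper is a hypothetical simplification rather than the cited system's geometry kernel.

```python
def extrude_room(x, y, w, h, wall_height=3.0):
    """Return one vertical quad (four corner points, counter-clockwise)
    per wall of an axis-aligned rectangular room."""
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    quads = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        quads.append([
            (x0, y0, 0.0), (x1, y1, 0.0),                   # base edge
            (x1, y1, wall_height), (x0, y0, wall_height),   # top edge
        ])
    return quads

walls = extrude_room(0.0, 0.0, 5.1, 8.0, wall_height=3.2)   # height set by the designer
print(len(walls), "walls, first quad:", walls[0])
```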
Fig 8. Walls and openings generated from a 2D floor plan
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/40f909f6ff6786906e3db1fed6064aab.jpg?width=720&quality=85%2C50)
As the system develops, it will be able to accept floor plans generated by other platforms and systems, automating several aspects of software-based floor plan generation and 3D modeling.
3.3. Façade modeling using deep learning
While interior spaces are generated using AI, exterior spaces can be created simultaneously. There have been significant advancements in façade design using complex systems and software. Façade recognition and modeling is possible by means of image sensors and ground control points: using georeferenced data and morphological image processing techniques, designers can now generate computerized 3D models of façades from an RGB image. Multiple overlapping images of the façade are stitched together to produce a larger reconstruction, which minimizes deformities in the model. Georeferenced 3D point clouds are generated from line segments and points extracted from the images, while image sensors provide depth for the model. Finally, texture is sourced from the image data to give the model materiality (Bacharidis & Ragia, 2020).
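The core back-projection step can be sketched as follows: an RGB façade image, a per-pixel depth estimate, and a georeferenced origin are combined through a pinhole camera model into a colored 3D point cloud. The camera intrinsics, depth values, and coordinate offsets below are invented placeholders, not values from the cited work.

```python
# Back-project façade pixels with depth into a georeferenced, colored point cloud.
import numpy as np

def backproject(rgb, depth, fx, fy, cx, cy, geo_origin):
    """rgb: (H, W, 3) uint8, depth: (H, W) metres -> (N, 6) array of
    georeferenced XYZ plus RGB per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx            # pinhole camera model
    y = (v - cy) * depth / fy
    xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3) + np.asarray(geo_origin)
    return np.hstack([xyz, rgb.reshape(-1, 3).astype(float)])

rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)   # stand-in image
depth = np.full((240, 320), 12.5)                                # stand-in depth (m)
cloud = backproject(rgb, depth, fx=500.0, fy=500.0, cx=160.0, cy=120.0,
                    geo_origin=(465200.0, 3901150.0, 42.0))      # invented map offset
print(cloud.shape)   # (76800, 6): x, y, z, r, g, b per façade pixel
```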
Fig 9. 3D building model and façade reconstruction by combining image and georeferenced point data
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/9810ae9526eb1a8597ff4a7c1fcdebe8.jpg?width=720&quality=85%2C50)
A more advanced approach to façade modeling uses a GAN (generative adversarial network) in an image-to-image (Pix2Pix) configuration, a branch of machine learning that learns pixel-to-pixel mappings and can readily be trained to extract data from a 2D image source. The system produces a 3D depth point cloud, which is overlaid on the data produced by the Pix2Pix network. The positions of points in the 3D point cloud are then refined using georeferenced data, which iteratively corrects the point positions in space. Finally, a surface reconstruction algorithm and a texture mapping technique create a realistic 3D model of the façade (Bacharidis & Ragia, 2020).
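As a hedged illustration of that iterative correction, the snippet below repeatedly nudges predicted façade points toward their nearest surveyed control points. The nearest-neighbour rule, the step size, and the sample data are assumptions; the cited pipeline's refinement is considerably more sophisticated.

```python
# Iteratively pull predicted 3D façade points toward georeferenced control points.
import numpy as np

def refine(points, control, step=0.3, iterations=5):
    """Move each predicted point a fraction of the way toward its nearest
    control point, repeating for a few iterations."""
    points = points.copy()
    for _ in range(iterations):
        # Pairwise distances between predicted points and control points.
        d = np.linalg.norm(points[:, None, :] - control[None, :, :], axis=2)
        nearest = control[d.argmin(axis=1)]
        points += step * (nearest - points)
    return points

predicted = np.random.rand(200, 3) * 10.0                 # depth-derived façade points
control = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                    [10.0, 10.0, 0.0], [0.0, 10.0, 0.0]]) # surveyed control points
refined = refine(predicted, control)
```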
Fig 10. Updated workflow for generating a 3D building model using AI-based technology and georeferenced data
![](https://assets.isu.pub/document-structure/201214101332-a1f659a8cb1d17d62d6080fea84d87d9/v1/066ac463f1d25ec7832192ef58aa794b.jpg?width=720&quality=85%2C50)
3.4. Automated design variations
AI is also aiding designers by creating design variations from standard designs. This process works by providing several images and details of an architectural element. The neural network is then trained on that design and the training is