Varjo reconstructs reality, Atvero targets email management, Sentio simplifies VR presentations, and Chaos connects Corona to Vantage, plus lots more
AI in AEC news 14
Real-time viz tools get generative AI boost, computational AI used to optimise building layouts, neural networks provide insight into 2D drawings, plus lots more
AI comes to architecture 15
We talked with Bill Allen, CEO of EvolveLab, on his latest AI tools, including one for BIM-based AI image generation and an AI assistant to automate drawings
Generative AI for urban simulation 16
How Urbanly integrated Large Language Models into its CityCompass platform to enhance household location models
Connecting architecture to fabrication 18
How three firms — SHoP Architects, WSP and Bouygues — have bridged one of the AEC industry’s biggest divides
7 things we learnt at NXT BLD / DEV 24
We reflect on some of the key themes to come out of our London events this year
Autodesk’s granular data strategy 33
The AEC world is still file based, but the future is databases in the cloud. We look at how Autodesk is addressing this shift
Building bold at Maggie’s cancer centre 36
With its complex geometry, this iconic hospital building demanded an integrated approach for construction
Varjo Teleport for reality reconstruction powered by 3D Gaussian splatting
VR/XR specialist Varjo has unveiled Teleport, a new service designed to transform how users create and interact with 3D environments for a wide range of spatial computing applications.
The technology preview highlights the service's capability to quickly generate photorealistic 3D capture scans of real-world environments directly from an iPhone Pro / Pro Max and allows users to view these scenes from a variety of devices, including PCs, VR headsets, and more.
The idea behind Teleport is that anybody can create a high-resolution 3D model of their environment without needing skills in real-time 3D graphics or photogrammetry.
Teleport reconstructs the real-world scene with accurate lighting, shading, textures and reflections using what Varjo describes as breakthrough advancements in 3D Gaussian splatting and machine learning technologies.
The resulting 3D reality capture can be viewed and experienced with a range of devices, starting with Varjo headsets, other PC-connected OpenXR headsets, or Windows desktops.
Varjo’s reality reconstruction technology is designed to enable users to virtually visit and interact with remote locations in ‘great detail’. Varjo names training, mission planning, and remote assistance as potential application areas.
The Finnish National Opera and Ballet will be using Teleport in its operations, as Hannu Järvensivu, XR Stage Project Manager, explains. "Together with our XR Stage modelling tool, we expect it can improve the evaluation of new incoming rental productions significantly, as the digital twins of real-world sets can be investigated on the virtual stage in their authentic size and form, instead of trying to figure out their visual appearance and fit on stage only based on photos and CAD images."
Varjo doesn’t specifically mention construction as an application area, but
we expect there will be some use cases for capturing as-built conditions or issue resolution.
According to Varjo, it has tested captures from 5m² to 1,000m². A 'large room' (5m x 5m) would take a few minutes to capture and need about 500 photos. Larger spaces can be captured up to a limit of 2,000 photos.
Varjo is inviting users interested in trying out the technology to join a waitlist. The service is expected to become generally available later in 2024.
■ www.varjo.com/teleport
Autodesk embraces granular data with AEC data model API
Autodesk has launched its AEC Data Model API, making granular data, the underlying data that makes up a model or a file at the most elemental level, accessible to Autodesk Docs users.
“The Autodesk Data Model API made it possible for us to extract data from our Revit models and centralise it on the cloud so it’s accessible within our organisation,” said Josha van Reij of Arcadis, an early adopter. “We’ve been able to reach 60% more structured project data using the API compared to our previous way of working.”
See page 33 for more on this.
■ www.tinyurl.com/Data-model-API
Sentio simplifies client presentations with 360 VR
Sentio VR 2.0, the latest release of the VR solution for architects, has simplified client communication by enabling multiple stakeholders to join a single session and collaborate around 360 panoramas.
Architects can guide clients through presentations using high-fidelity content from real-time visualisation software including Lumion, Twinmotion, V-Ray, Enscape, and D5 Render in the Meta Quest VR headset, without the need for a high-end GPU, and without requiring clients to learn how to use VR controls.
Sentio VR 2.0 also includes ‘1-click casting’ which allows users to stream their VR view to a web link in seconds, eliminating the need for Meta accounts or Wi-Fi configurations. According to the developers, it solves the problem of streaming to a wider audience in client meetings through a simple link, providing
a high-resolution VR experience without any setup complications.
For 1-click casting, users open a 360 tour in the Sentio VR Meta Quest App, click on 'Broadcast tour', note the PIN code, then enter the code at cast.sentiovr.com. The VR view is then automatically streamed to the web link in real time.
Another new feature is the ability to download projects offline to the VR headset with a single click, so architects can take their headsets to meetings, trade shows or job sites without having to worry about Wi-Fi or 4G/5G connectivity.
Sentio VR also supports real-time collaboration for fully navigable VR, where users can walk and teleport around a building. The software integrates directly with Revit and SketchUp via plug-ins, which offer 'one-click export' of the model to the cloud for conversion to VR.
■ www.sentiovr.com
Newforma connects email with openBIM
Newforma has enhanced its Newforma Konekt project information management (PIM) platform with an updated Outlook add-in that moves email communication from a single siloed inbox to a collaborative BIM environment using buildingSMART's openBIM format, BCF (BIM Collaboration Format).
With the new update, users can convert email threads into actionable BCF issues that can be tracked and managed within the BIM environment.
The software will also connect discussions to 3D models and project plans, with a view to bringing communication into the same workflows as the rest of the project. According to Newforma, it will also ensure all project stakeholders are on the same page with real-time updates.
The Newforma Konekt integration of email into openBIM supports a range of authoring tools, including Revit, Archicad, Civil3D, AutoCAD and Tekla Structures.
■ www.newforma.com
Scan partners with Inevidesk for VDI tech
Scan Computers has entered into a partnership with Inevidesk to deliver a high-performance virtual desktop infrastructure (VDI) solution to customers with demanding GPU requirements, as part of Scan's Cloud portfolio.
The Scan Virtual Desktop solution is designed to reduce the cost and complexity of IT infrastructure. Rather than investing in and managing multiple GPU-accelerated workstations, the VDI solution provides a centralised pool of GPUs, enabling remote working and simplified IT administration.
Inevidesk’s VDI technology is built around the concept of pods, a server with Nvidia workstation GPUs, high-speed networking and storage. With Scan’s Virtual Desktop solution, pods can be hosted on-premise or at one of Scan’s data centre partners. One pod can host seven users.
■ www.scan.co.uk
Nemetschek acquires GoCanvas
The Nemetschek Group has completed the acquisition of GoCanvas Holdings, a provider of field worker collaboration software designed to digitise traditionally paper-based processes, simplify inspections, improve safety, and maximise compliance. The acquisition will strengthen Nemetschek's 'Build + Construct' segment, anchored by Bluebeam, by incorporating GoCanvas' mobile data collection, data analytics, and workflow automation tools.
■ www.gocanvas.com
Corona connects to Vantage for real-time ray tracing
Chaos Corona 12, the latest release of the photorealistic rendering software for 3ds Max and Cinema 4D, introduces a new connection to Chaos Vantage, so designers can explore, render and animate their scenes in real time.
Designers can export Corona scenes to Vantage with a push of a button, bringing a new range of animation, real-time ray tracing and GPU rendering tools to their visualisation projects.
Vantage can be used for rapid
rendering, wider scene exploration, or to spark ideas that users can continue crafting with full photorealism in Corona.
“Speed and quality should always go hand in hand, especially in the visualisation world where compelling visualisations play a big role in winning people over,” said Phillip Miller, a VP at Chaos. “With this new support, Corona 12 users with Vantage can fulfil both needs at once, utilising real-time ray tracing to help them do more in the moment.”
■ www.chaos.com/corona ■ www.chaos.com/vantage
hsbDesign 27 for Revit launches
hsbcad has launched hsbDesign 27 for Revit, the latest release of the Revit-native software for offsite timber construction which can export fabrication data to a range of CNC machines including Hundegger, Weinmann, and Randek.
The new version includes several new features, such as integration with
Autodesk Dynamo, reduced file size, multilingual support, enhanced project information output, and the ability to create custom item container labels.
Integration with Autodesk Dynamo, the visual programming add-in for Revit, is designed to help automate repetitive manual tasks and reduce potential errors.
■ www.hsbcad.com
Bluebeam extends access to collaboration
Bluebeam has launched Studio Sessions collaboration and markups for mobile, so users can collaborate on and mark up PDFs from both a web browser and mobile device, without having to download the Bluebeam Revu app to a desktop.
According to Bluebeam, this will be useful for Mac users, newly invited collaborators, and teammates on the go.
Bluebeam has also enhanced the algorithms for its AI-powered Auto Align capability for faster drawing comparisons.
■ www.bluebeam.com
Hexagon acquires BIM firm Voyansi
Hexagon has acquired Voyansi, an AECO-focused provider of BIM and VDC solutions, reality capture services and BIM workflow software development.
Voyansi’s services are used to digitise all asset types, including data centres, hospitals, industrial facilities and shopping centres, across the design, build and operate phases of their lifecycles.
“This acquisition builds on our strategy within Hexagon Geosystems division to accelerate the digitisation of the construction industry,” said Paolo Guglielmini, president and CEO, Hexagon. “The addition of Voyansi to our advanced portfolio of AECO solutions will help our customers further enhance sustainability, efficiency and collaboration during construction and enhance their effectiveness in operating and maintaining assets.”
■ www.voyansi.com
BIM Academy rebrands to Okana
BIM Academy, the built environment consultancy, has announced its transformation into Okana.
"When BIM Academy was established, training was a fundamental part of our service offering. In recent years this training has progressed to learning and development and we felt we had outgrown the name BIM Academy," said MD Dr Graham Kelly. "The time is right to move to a new name and under Okana we can expand further and offer more services."
■ www.okana.global
ROUND UP
Revizto investment
Revizto, a specialist in collaboration solutions for the AECO sector, has announced a minority investment from global growth equity investor Summit Partners. The partnership will focus on supporting Revizto’s team expansion, product development, and growth
■ www.revizto.com
Digital twins
Real estate manager and developer
BENO Holding AG has adopted dTwin from Nemetschek to create digital twins of the existing buildings in its portfolio. The aim is to improve maintenance of the buildings, most of which were built before the year 2000, so there is little digital data
■ www.nemetschek-dtwin.com
UK BIM Framework
The UK BIM Framework, the overarching approach to implementing BIM in the UK, has been upgraded. The content now aligns with recently released standards in the ISO 19650 series, and there’s a new AI capability to improve the online experience
■ www.ukbimframework.org
BIM for MEP
Cype has launched version 2025 of its BIM software, which introduces new MEP features, including one for the design and analysis of air-source heat pump systems, combining the hydraulic system programs into a single program
■ https://info.cype.com
Gen AI Copilot
Egnyte, a collaboration platform for a range of industries including AEC, has launched Egnyte Copilot, an AI-driven assistant designed to accelerate and transform enterprise content collaboration. Users can engage in AI-powered conversations with their own private and trusted data
■ www.egnyte.com
‘Lightning Viewer’
Resolve, a specialist in collaborative VR for design review, has opened up beta access to its new web viewer built for large design and construction projects. The ‘Lightning Viewer’ allows users to open large BIM files directly in a web browser and in ‘one click’ follow anyone viewing the same model in VR ■ www.resolvebim.com
Quantum selects Zutec for construction management
Quantum Group, an Ireland-based property developer, has selected Zutec to manage its construction project data from a single platform.
By digitising building information and construction documents, Quantum will use Zutec's document management system for planning, design, tenders, procurement, and plot tracking, including the ability to approve drawings for future developments and resolve issues on site as they arise.
“In a market where quality cannot be overlooked, we required a platform to differentiate ourselves from others and create a framework for quality-driven
processes," said Patrick Shaughnessy, construction director at Quantum Group.
"Zutec fitted the bill in terms of an easy-to-use platform that provides solutions, features and functionality that gives us more control over how we manage documents and information related to construction and quality for all our projects – all from one place.
“This will help us better manage site teams, site progress, suppliers, and subcontractors, and ultimately raise the standards of the quality and innovation across our developments.”
■ www.zutec.com
Atvero launches email management tool
CMap has introduced Atvero Mail, an email management tool specifically designed for the AEC sector. The software is built on Microsoft Outlook and offers automated email filing and 'powerful search functionality' to 'instantly find' emails and attachments for any project.
“With email remaining a key communication tool when working on projects, there was a market need for an email management product that emphasized discoverability and searchability for AEC firms, whilst keeping users in their familiar Microsoft 365 environment. This is exactly why we’re developing Atvero Mail,” said Marcus Roberts, head of Atvero.
CMap acquired Atvero in February 2023 as a document, drawing and email management solution. After extensive engagement with the AEC community, the decision was made to split Atvero into two products: Atvero Mail (launching late Q3 2024) and Atvero PIM (which will continue to operate as a document, drawing and email management solution).
"Since acquiring Atvero we've seen enormous market demand, with new legislation, such as the Building Safety Act, changing the approach AEC firms have toward their information management practices," said Dave Graham of CMap.
"During this time, working closely with our customers and the AEC community, we've identified the vital need for a standalone email management tool to provide people with an easier way to get started on their information management journey."
■ www.cmap.io/atvero
AI NEWS BRIEFS
AI visualisation
Veras, the visualisation add-in for Revit, SketchUp, Rhino, Forma and Vectorworks, which uses 3D model geometry as a guide for generative AI, has added a zoom function to make it easier to select part of a high-res image to refine specific elements
■ www.evolvelab.io/veras
Construction tracking
AI Clearing, a specialist in construction progress tracking, has integrated with Oracle Aconex, the construction project management software. AI Clearing uses machine learning to detect differences between design data and drone-captured construction site data
■ www.aiclearing.com
Intelligent drawings
UK startup Modelizer is developing software to detect, understand and interpret what’s in a 2D drawing. It uses Machine Learning and a Convolutional Neural Network to detect and interpret walls, wall types, windows, doors, stairwells, voids, title blocks, and other annotations
■ www.modelizer.ai
Construction lead
According to a report by Unanet, construction firms are leading the way in smart AI adoption, with 41% using AI with oversight policies in place, compared to 18% of architecture firms and 23% of engineering firms. Many AEC firms report using AI but have no policies in place to guide its use
■ www.unanet.com
Vitras.AI
Cove.Tools has launched Vitras.AI, a new platform for architects that uses AI to automate complex tasks and generate reports. Users can sign up for free access to the platform's initial modules: Zoning Studies, Cost Estimating, Energy Benchmarking, and Climate Analysis
■ www.cove.tools/products/vitras-ai
FURTHER READING
Diffusion models
Nvidia’s Sama Bali explains how this powerful generative AI technology can be applied to AEC workflows, and how AEC firms can get on board
■ www.aecmag.com/ai
Leading real-time viz tools get boost from generative AI
Chaos Enscape and D5 Render are using generative AI within their respective real-time visualisation tools to increase the visual quality of renders. The new beta 'AI Enhancer' features are designed to make specific elements within a scene more detailed and realistic.
In Enscape 4.1 Preview 6, Chaos AI Enhancer is designed to elevate the visual quality of Enscape’s people and vegetation assets, which are produced in-house to a strict budget of polygons, so users can place multiple assets without
experiencing a loss in performance. The software uses an AI engine that identifies which pixels should be enhanced.
In D5 Render 2.8, the AI Enhancer (beta) is focused on lighting, materials, characters, vehicles, and vegetation. Users can apply three intensity levels (weak, normal, and strong), while the 'AI Enhancer Channel' helps improve the accuracy of area selection, to provide 'precise control' over enhancements. Learn more at www.aecmag.com/AI
■ Enscape www.tinyurl.com/enscape-AI
■ D5 Render www.tinyurl.com/D5Render-AI
TestFit using ‘computational AI’ to optimise building layouts
TestFit, a specialist in feasibility software for property development, is gearing up for the launch of Generative Design, a new tool that uses ‘computational AI’ to optimise building layouts. The software enables designers to explore a multitude of design possibilities without having to learn how to script.
TestFit has been developing building optimisation technology since 2016. The company's 'Site Solver' product allows users to generate site plans 'instantly' with real-time insights into design, cost, and constructability. Earlier this year the company released 'Urban Planner', a free urban planning tool with customisable massing tools (read this AEC Magazine article: www.tinyurl.com/Testfit-free)
Generative Design is the next step forward for TestFit, allowing AI to test site solutions, on its own, based on specific project requirements. It is said to work for sites of all scales from multi-family development to industrial buildings.
■ www.testfit.io
AI comes to architecture
With Veras, EvolveLab was the first software firm to apply AI image generation to BIM. Martyn Day talked with company CEO Bill Allen about the latest AI tools in development, including an AI assistant to automate drawings
Nothing has shown the potential for AI in architecture more than the generative AI image generators like Midjourney, DALL-E, and Stable Diffusion.
Through the simple entry of text (a prompt) to describe a scene, early experimenters produced some amazing work for early-stage architectural design. These tools have increased in capability and moved beyond learning from large databases of images to working from bespoke user input, such as hand-drawn sketches, to increase repeatability and enable design evolution. With improved control and predictability, AI conceptual design and rendering is becoming mainstream.
Moving beyond early text-based generative AI, the output can now take cues from geometry modelled with traditional 3D design tools. The first developer to integrate AI image generation with traditional BIM was Colorado-based developer EvolveLab. The company's Veras software brought AI image generation and rendering into Revit, SketchUp, Forma, Vectorworks and Rhino. This forced the AI to generate within the constraints of the 3D model and bypassed a lot of the grunt work and skills related to using traditional architectural visualisation tools.
Veras uses 3D model geometry as a substrate guide for the software. The user then adds text input prompts and preferences, and the AI engine quickly generates a spectrum of design ideas.
Veras lets the user control how far the AI can override the BIM geometry and materials with simple sliders like 'Geometry Override', 'Creativity Strength' and 'Style Strength'; the higher the setting, the less it will adhere to the core BIM geometry. There are additional toggles to help, such as interior, nature, atmosphere and aerial view. But even with the same settings you can generate very different ideas with each iteration, which is great for ideation.
Once you have decided on a design, you can refine it further by selecting one of the outputs as a 'seed image'. This allows more subtle changes, such as the colour of glass or materials, to be made with user prompts, but without the radical design jumps.
A recent addition is the ability to create a selection within an image for specific edits, like changing the landscaping, floor material, or to select one façade for regeneration. This is useful if starting from a photograph of a site, as the area for ideation can be selected.
Drawing automation
EvolveLab is also working on Glyph Co-Pilot, an AI assistant for its drawing automation tool, Glyph, that uses ChatGPT to help produce drawings.
Glyph is a Revit add-in that can perform a range of 'tasks', including Create (automate views), Dimension (auto-dimension), Tag (automate annotation), Import (into sheets) and Place (automate sheet packing).
These tasks can be assembled into a 'bundle' which 'plays' a customised collection of tasks with a single click. One can automate all the elevation views, auto-dimension the drawings, auto-tag the drawings, automatically place them on sheets and automatically arrange the layout. Within the task structure things can get complex, and users can define at a room, view or sheet level just what Glyph does. Once mastered, Glyph can save a lot of time in drawing creation, but there are a lot of clicks to set this up.
With Glyph Co-Pilot, currently in closed beta, the development team has fused a ChatGPT front end to the Glyph experience, as Allen explains. "Users can write, 'dimension all my floorplans for levels 1 through 16 and elevate all my curtain walls on my project' and it will go off and do it.
"I can prompt the application by asking it to elevate rooms 103 through 107, or create an enlarged plan of rooms 103 through 107. Glyph Co-Pilot understands that I don't have to list all the rooms in between," he says.
Co-Pilot is currently limited to Revit, but in the future it will be possible to plug into SketchUp, Archicad, Rhino etc. This means one could get auto-drawings direct from Rhino, something that many architects are asking for.
But how do you do this when Rhino lacks the rich metadata of Revit, such as spaces to indicate rooms? "There's inferred metadata," explains Allen. "Rhino has layers and that's typically how people organise their information and there's obviously properties too. McNeel is also starting to build out its BIM data components. I expect some data hierarchy that will start to manifest itself within the platform which we can leverage."
■ www.evolvelab.io
An extended version of this article, which includes Allen’s thoughts on the impact of AI within AEC, can be seen at www.aecmag.com/AI
Generative AI for urban simulation
Urbanly, a specialist in urban simulation, recently integrated Large Language Models (LLMs) into its CityCompass platform to improve how housing units are matched to demand. We asked founder and CEO Federico Fernandez to explain the process that led to this development and the challenges of harnessing the power of generative AI in household location choice models
Comprehending the factors that influence household location choices is pivotal for analysing urban areas through diverse lenses. From a governmental standpoint, it is essential to grasp these decisions when assessing the potential impact of infrastructure projects on residential areas. Concurrently, the real estate sector relies heavily on anticipating household location preferences to forecast growth trajectories accurately. Unravelling the intricate dynamics behind where families and individuals opt to reside becomes a cornerstone for informed policymaking, strategic planning, and market intelligence within the urban landscape.
For the past five decades, computer scientists have dedicated their efforts to developing software capable of analysing land use patterns, continuously refining its sophistication as advancements in hardware and software engineering paved the way. Drawing upon this extensive experience, Urbanly has developed
CityCompass, a cloud-based land use simulation platform that harnesses the latest available technologies to create an environment conducive to policy experimentation. At the heart of its simulation engine lies the household location choice model, a powerful tool that seamlessly matches available housing units with their corresponding demand.
Generative AI: a measured approach
The advent of generative AI has prompted us to actively explore avenues for integrating this technology to complement and enhance our simulation kernels. However, a crucial challenge in this integration process has been to circumvent a common pitfall associated with generative AI – treating it as an infallible ‘source of truth’ instead of a model that can be trained to learn from real-world decisions and existing academic literature. Our approach with this experiment has been to harness the power of generative AI while recognising it as a dynamic
tool that must be continuously refined and calibrated against empirical data and established research findings.
LLMs: challenges and advancements
Neural networks have been around for a long time, and we experimented with them a couple of years ago, creating small networks for decision choice, in combination with genetic algorithms with an evolving DNA. However, we swiftly encountered formidable challenges on two fronts: firstly, training custom neural networks necessitated an extensive process of trial and error to determine the optimal network architecture; secondly, the computing power available at that time posed constraints, rendering the creation of large-scale networks unfeasible. Compounding these hurdles was the absence of dedicated hardware tailored for neural network applications.
The emergence of generative AI, particularly large language models (LLMs), has reignited our belief in the potential of this technology, prompting us to re-evaluate its applicability. The advent of LLMs appears to address, to a certain extent, the two primary challenges we encountered during our previous endeavours with neural networks. Firstly, LLMs mitigate the need for intricate customisation of neural network architectures, as their pre-trained models can be fine-tuned and adapted to specific tasks. Secondly, these models harness the immense computing power facilitated by dedicated hardware, alleviating the constraints we previously faced due to limited computational resources.
The household location choice model
A household location choice model (HLCM) is a software component that matches available housing units in an urban environment with households, with its main metric being how well it characterises the decisions that individuals make when selecting their place of residence.
To make these decisions, two types of knowledge are required: dynamic data generated by the simulation as it runs, and static training data about how these decisions are typically made in a particular urban environment.
The static side is the easier of the two, since the most common format for domain knowledge and training data is written text. We carefully chose literature describing varied approaches to HLCM, complemented by descriptions of real-world thought processes specific to the study area, and incorporated them into the LLM.
On the dynamic side, given the well-documented challenges that LLMs face in processing numerical inputs, we have undertaken the development of a novel component within CityCompass, aptly named the ‘simulation entity to prompt converter’. This module serves as an intermediary, translating the intricate simulation entities into a format that aligns with the linguistic paradigm of LLMs, enabling seamless communication and integration.
Internally, CityCompass represents housing units with typed data structures composed of spatial and non-spatial attributes, such as street address, area or number of rooms. All that information can be translated into English text that can be understood by LLMs. However, there is an additional challenge: attributes like spatial location don't add significant information to a model without all the associated data layers that predicate over that particular polygon in space. In concrete terms, what is important about a housing unit's location is its accessibility and how close it is to certain points of interest or spatial detractors. We therefore needed to augment the text-based description of the unit with all these additional layers.
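As a rough illustration only, such an entity-to-prompt converter could look something like the sketch below. The field names, data structure and phrasing are assumptions made for the example, not CityCompass's actual schema.

```python
from dataclasses import dataclass

@dataclass
class HousingUnit:
    # Hypothetical attributes; the real CityCompass schema is not public
    address: str
    area_m2: float
    rooms: int
    monthly_rent: float
    transit_accessibility: str    # derived from an accessibility data layer
    nearby_amenities: list[str]   # derived from points-of-interest layers
    nearby_detractors: list[str]  # e.g. arterial roads, industrial sites

def unit_to_prompt(unit: HousingUnit) -> str:
    """Translate a typed simulation entity into plain English an LLM can reason over,
    augmented with the spatial layers that give the location its meaning."""
    amenities = ", ".join(unit.nearby_amenities) or "none of note"
    detractors = ", ".join(unit.nearby_detractors) or "none of note"
    return (
        f"Housing unit at {unit.address}: {unit.rooms} rooms, {unit.area_m2:.0f} m2, "
        f"monthly rent {unit.monthly_rent:.0f}. "
        f"Public transport accessibility is {unit.transit_accessibility}. "
        f"Nearby amenities: {amenities}. Nearby detractors: {detractors}."
    )

print(unit_to_prompt(HousingUnit(
    "12 Harbour Street", 78, 3, 1450, "high",
    ["primary school", "metro stop", "park"], ["arterial road"],
)))
```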
In addition to characterising the supply side of housing units, we recognised the necessity of developing a dedicated component to produce LLM input about the diverse attributes and dynamics of individuals constituting households. This component describes parameters such as ages, occupations, and activities. Within the context of an Urbanly simulation, these intricate household profiles are derived from census data, which is further enriched and augmented by a sophisticated synthetic population generator.
One last piece of simulation-time data that must be shared with the LLM is the set of current and future context factors, including the macro-economic model that is part of the simulation and planned development projects, especially for future years, since expectations are a big driver of location decisions.
Having shared all the relevant information with the LLM, we focused on tailoring specific prompts to get the location decisions we needed. It is crucial to note that this process is not static in nature, as the component responsible for generating these decisions must be invoked each time a new housing unit becomes available within the simulation, whether due to relocation or the construction of additional housing stock.
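The actual LLM and API behind CityCompass are not specified here, so the following is only an illustrative sketch of how the decision component could be invoked each time a unit becomes available, assuming an OpenAI-style chat endpoint and text descriptions produced by a converter like the one above.

```python
from openai import OpenAI  # assumption: an OpenAI-compatible chat API

client = OpenAI()

def choose_household(unit_text: str, household_texts: list[str], context: str) -> int:
    """Ask the LLM which candidate household takes a newly available housing unit.
    Invoked whenever a unit is freed by relocation or added by new construction."""
    candidates = "\n".join(f"{i}: {h}" for i, h in enumerate(household_texts))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You simulate household location choice using the supplied "
                        "domain knowledge and context. Reply with the index only."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nUnit:\n{unit_text}\n\n"
                        f"Candidate households:\n{candidates}\n\nWhich index moves in?"},
        ],
    )
    return int(response.choices[0].message.content.strip())
```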
As soon as we put all the pieces together, we began running simulations in study areas where we have worked in the past, and comparing results, focusing on occupancy rates and spatial growth patterns. We then experimented with different 'domain knowledge scenarios', which means keeping the policy simulation static but varying the papers we use to explain to the LLM how location choices are made. This allowed us to create scenarios that favour a particular author's policy vision, to understand how that could affect our forecasts.
Conclusion and future directions
This work has demonstrated an innovative approach to integrating generative AI, specifically LLMs, into urban simulation and modelling. By leveraging the capabilities of LLMs while carefully addressing their limitations, we have developed a novel methodology for simulating household location choice decisions within the CityCompass platform.
Furthermore, this work has highlighted the importance of incorporating diverse sources of knowledge, both dynamic simulation data and static domain knowledge from literature and real-world observations. By carefully curating and presenting this information to the LLM, we can generate diverse location decisions. Looking ahead, this work paves the way for further integration of generative AI techniques into urban simulation and modelling, opening up new avenues for exploration and innovation. As the field of generative AI continues to evolve, the proposed methodology can be adapted and refined to leverage the latest advancements, further enhancing the accuracy and robustness of urban simulations, ultimately contributing to more informed and data-driven policymaking and urban development strategies.
■ www.urbanly.org
Differences in building occupancy according to the HLC model used
Connecting architecture to fabrication
The chasm between architectural and fabrication design software creates challenges for firms wishing to go beyond the boundaries of traditional documentation. Martyn Day looks at how three pioneering firms — SHoP Architects, Bouygues Construction and WSP — are bridging the divide
The AEC industry has a reputation for being slow to adopt technology. Some reports even place it behind farming. The reality is, while construction has lagged, design has been on an inexorable path to total digitisation since the 1980s.
3D modelling, the adoption of BIM and innovation in digital fabrication are ultimately going to lead to modern methods of manufacturing buildings.
This is not just a technology play; it's borne out of necessity. The construction industry lacks skilled labour, many economies desperately need new housing, and historic poor productivity needs to be addressed. Furthermore, everything from the design, material choice, and location to the fabrication of buildings needs to reflect the carbon and climate challenges that will only become more pressing.
To connect the digital thread, this industry needs new tools, new workflows, new fabrication methodologies. In short, and to coin an overused phrase, we need to rethink construction.
However, it’s not just construction that needs rethinking - it’s everything from what we design, through to how it’s fabricated, and how it’s assembled. We need to rethink AEC.
Buildings should be designed in the full context of how they will be fabricated, broken down into assemblies and a ‘Kit of Parts’. Today’s BIM tools add width to the chasm that separates design from fabrication, as they were created to deliver scaled drawing sets, not detailed 3D models for fabrication.
When one tries to add that level of detail, the models swell in size and become unusable. Autodesk has arguably done the most to try and connect BIM and manufacturing CAD, but this has taken years and many attempts to get right.
The current solution boils down to proxy swapping of predefined components between BIM software Autodesk Revit and mechanical CAD (MCAD) software Autodesk Inventor, which lends itself to working with a ‘Kit of Parts’ mentality. This solution, while innovative, is a partial ‘band aid on a bullet wound’, trying to overcome the integration limitations of two products that were never intended to work together.
The convergence of design and manufacturing in AEC is going to be an ongoing experiment and it's going to need some projects that require scale to prove out. This is happening, but it's not necessarily joined up. The future is everywhere, it's just not evenly distributed.
Those who follow the offsite construction market will know that it has become a bloodbath in the US and UK. Many fabs have shut down. It's all too easy to find examples of how not to do it, rather than ones that are making it work. But this time of failure will pass, and lessons will be learnt.
At AEC Magazine's NXT BLD and NXT DEV last month we brought together some industry change makers, who presented the projects and processes they are refining to connect architecture to construction and deliver digital design to fabrication. Dale Sinclair from WSP, Antoine Morizot from Bouygues Construction and John Cerone from SHoP Architects were three that stood out, taking us from London to Paris to New York.
WSP - London
Dale Sinclair first started experimenting with design at 1:1 construction detail level while at AECOM. At the time AECOM was working closely with offsite fabrication firms and Sinclair wanted to ‘talk the same language’ as the fabricators. He eschewed Revit for architectural design on modular projects, and instead adopted Inventor to take an assembly approach.
Now at WSP, Sinclair has continued his research into the convergence of construction and manufacturing, developing
new processes and looking at the whole workflow from the construction end of the telescope, bringing ‘systems thinking’ to design.
At NXT BLD, he pointed out, “Construction in its current form, cannot continue, because we have all tried but we haven’t moved the dial. We haven’t reduced the cost. We haven’t reduced the time to deliver buildings. The quality is variable, and productivity is static.
“How do we change? We keep putting things together that have never been put together before. The number of new systems are increasing, adding complexity.
“We should be using offsite manufacturing. There is no downside to using
factories. We have better safety, bring in more diverse people and get the benefits of scale. We should be leveraging the benefits the manufacturing sector has had for years. But the one thing we have not cracked with offsite is cost and this prevents us from scaling up offsite.”
Sinclair explained that adopting a ‘Kit of Parts’ approach in design is phase one. The next step is to mobilise offsite, by taking a small number of large components to site (panelisation not modular).
This can then be followed by adopting a broader ‘program mentality’, using fabrication-level details at the start of a project, pushing manufacturing information upstream and adopting configurators.
“We have flipped the entire process on its head, so we are coming from a manufacturing first [approach] and it’s a game changer,” he said.
Sinclair believes offsite has to be explored at a country level. He thinks that offsite fabrication spaces should be distributed throughout the UK, in all the places where there is unemployment, and hopes the UK Government wakes up to the benefits of doing something like this. I suspect we will have to wait to see it work somewhere else first.
Watch Sinclair's talk: www.tinyurl.com/NXTAEC-sinclair
Facit Homes - bringing the factory to the construction site
Bruce Bell has a long connection with AEC Magazine and NXT BLD. His UK company Facit Homes uses vanilla Revit with its own families of parts, which are optimised to create highly defined BIM models. Through a secret sauce, these are flattened and G-code is created to fabricate on-site via a router in a shipping container.
In a way, Bell has developed his own expert system designed for houses made mainly out of one material, which is cut up on site and nailed together to make box sections. Every building created for individual clients is a variation on a long-tested system. With a deep central resource database of the common products used in fitting out, Facit can predict the cost of its buildings within 1%. As the company also manufactures and assembles the building, that reduces risk and means the company's fee spans design, construction and delivery. What is incredible is that this is all done with off-the-shelf software.
Recently Bell has raised his aim and is looking to develop a giant robot which can cut and stack enough panels to build out entire estates. His talk at NXT BLD highlights the journey he has been on and the solution that he will be bringing to market, which features an onsite micro-factory. Watch Bell’s presentation at www.tinyurl.com/NXTAEC-Bell
Bouygues Construction - Paris
From the other side of the channel, construction giant Bouygues Construction has been on its own digital journey. It has similar challenges, but instead of focusing on the original architectural design, it concentrates on how to connect its clients’ design information to the Bouygues fabrication and cost estimating system.
The decision to digitise and automate has led to a multi-year consultancy engagement with Dassault Systèmes, creator of the leading MCAD brands Catia and Solidworks, to create an expert system for Bouygues called 'Bryck'.
Bouygues' strategic vision is to head towards metamorphosing building sites into places where products are assembled, unlike the current process, which requires the onsite transformation of materials. Antoine Morizot of Bouygues explained, "The products could have been prefabricated or assembled in micro factories near the site, but the idea is not to standardise the products, it's to standardise the processes."
The concept that Bouygues is adopting is not dissimilar to the ones which Dassault Systèmes has proven many times in the manufacturing space for aerospace and automotive. Here customers build a virtual digital mock-up or in common CAD parlance, a digital twin, which contains all the details of what is to be manufactured, to simulate the method of construction, the construction site and the as-built.
Morizot stated that BIM has failed to give the result the industry was expecting. By modelling in 3D, there was an expectation that, like in MCAD, this data
could be connected to fabrication systems. BIM data conveys the idea, but not from an engineering or construction point of view. To achieve this, Bouygues has built a ‘productised’ system which covers all these bases, using Catia and customisation to produce a predictable, systemic view of project data.
RVT or IFC models are brought in and converted to productised Catia components such as groundwork, structure, covering, partitioning, finishes, MEP, equipment and prefab modules.
This template-based system also offers a library of parametric templates, which pre-define multi-disciplinary parametric modules for central cores, CLT floors, façade design, electrical components, MEP etc, which can adapt to any complex geometry, or imported IFC or Revit files. These 'products' adapt to the architectural model through the use of generative design, adding tags, attributes, dimensions, 3D annotations, surface treatments and manufacturers' catalogue part numbers, integrating a lot of data, making calculations, and even defining the installation order.
Morizot demonstrated that by simply clicking on the raw geometry of a floor in a model, Bouygues can apply a product, in this case a CLT floor, and a complete, highly detailed CLT floor is created, adapting to the new model, panelised to suit the capabilities of Bouygues' in-house fabrication machines. These can then be edited in multiple ways, such as orientation and installation order.
This was a rare outing for Bouygues to explain the level of detail it has achieved with its construction expert system. It means the firm can be given an IFC or a Revit model and in minutes get a fabrication-level digital twin, with the exact cost and all the fabrication drawings. Bryck has impressed the company's board so much that another long-term deal has been signed with Dassault Systèmes, with more capabilities to come.
Watch Morizot's talk: www.tinyurl.com/NXTAEC-Morizot
SHoP Architects - New York
New York-based SHoP Architects is a relative latecomer to the NXT BLD roster, but principal John Cerone has been a long-time advocate of embracing digital fabrication and going beyond the limitations of delivering drawings.
Cerone is certainly in the architectural camp of wanting to get rid of drawings and move to a pure modelling paradigm, and the practice is doing its utmost to define its own process to connect architecture with modern methods of construction. He defines his firm as 'Production Architects' as they focus on materials, process and how the buildings they design are made.
Cerone is on board with offsite manufacturing and the concept of a ‘Kit of Parts’, sub-assemblies and how these
work in the design as a whole. SHoP is not a fan of the plan, section and elevation approach to define its schemes, preferring instead an Ikea-style approach to communication.
Cerone stated, "Architecture, with a capital A, can be designed and manufactured. To do that, you need to understand the processes, who/what is reading the instructions and how the materials are being processed. When you do that, the deliverables are not the flat orthogonal drawings; we can be much more diagrammatic.
"We operate in this industry with the mentality that there's an opportunity to leapfrog and take advantage of advanced manufacturing techniques in our projects. If you come to our office in downtown Manhattan, in the Woolworth Building, the first thing you'll notice will be model planes, boats, cars - they are everywhere.
"And for us, one of the principles behind that is how you can design, simulate, coordinate, execute a complex project in a digital format. To do that, you have to get outside of the traditional tools of the AEC industry."
SHoP Architects is a big fan of Rhino and Catia and builds a lot of its own tools for geometry solvers which are used at scale. The firm has cut its teeth
working on projects with complex geometry that were built off site. This involved talking with manufacturers at the concept stage to start optimising for fabrication.
Information such as optimal steel sheet size helps reduce waste, lower cost, and acts as a design constraint early on in the process. This means the final design does not need reengineering once all the work is done. SHoP then makes reusable templates for the assemblies it creates.
From this, SHoP has got heavily into the fabrication side of things, sometimes bypassing the drawing phase and even just delivering the model and the G-code, whilst keeping track of job tickets through factories. On site the firm uses laser scanning to ensure the assemblies are to specification and communicates installation through screen shots of the model.
With all this experience in offsite and manufacturing, SHoP Architects created a 'cousin' company called Assembly OSM to deliver modules for high-rise residential buildings (12 to 30 floors), all based on templates for building systems, including mechanical.
Conclusion
There are no off-the-shelf solutions to link digital architecture to digital fabrication. Every firm that has made progress connecting the two worlds has done it through belief, investment, experimentation and sheer bloody-mindedness. However, it doesn't mean that this situation won't change, as lessons learnt by these pioneers will eventually find their way back into features in standard software. The use of Inventor and Catia for defining architectural design is currently niche, but there is a chance that next-generation BIM tools will have the underlying technologies required to span the design-to-fabrication chasm.
Defocusing from technology solutions for a moment, it's clear from Sinclair, Cerone and Bell, all architects, that to wholly embrace the process, the industry needs to start thinking about the design of buildings differently.
If designs are to flow from concept to construction, the 'Kit of Parts' approach appears to have the longest legs, mimicking automotive and aerospace. But this still puts a lot of work upfront to design flexible parts with construction-level detailing. As Sinclair points out, it needs less project thinking and more of a program mentality, spanning projects. Bell has done this with the Facit adaptable chassis, where every house is a variation on a tried and tested theme.
7 things we learnt at NXT BLD / DEV
As we catch our breath after an inspirational two days at London's Queen Elizabeth II Centre, we reflect on some of the key themes to come out of NXT BLD and NXT DEV this year
1. Architecture is getting closer to fabrication
There's a huge push to link architecture with fabrication. At NXT BLD, six leading firms shared their ground-breaking work.
WSP is optimising ‘Kit of Parts’ workflows to streamline industrial construction, aiming for repeatable, efficient results. Bouygues Construction has teamed up with Dassault Systèmes to develop ‘Bryck’, an expert system that can take a Revit model, break it down, and produce all the fabrication drawings and costs.
SHoP Architects in New York is closing the gap between design and construction using tools like Revit, Rhino and the 3D Experience Platform. Intel’s modular approach is speeding up its global silicon fab construction.
Meanwhile, in the UK, Facit Homes is scaling up its onsite factory so it can handle housing estates as well as individual homes, while Space Group is developing expert house building systems for Travis Perkins and TopHat.
We are in the age of the 'activist customer', who is turning to start-ups to deliver new capabilities that can be used in anger.
In only the second year of NXT DEV we delivered a 'world's first' on stage, a collaborative effort showcasing the development of a school project from concept to drawings, joined together by new software tools: Skema, Snaptrude, Augmenta and Gräbert, with Esri GIS. Connected mainly through cloud API calls, the project was tackled using productivity-enhancing applications which crushed the time for ideation, detail modelling, structural design, automated electrical routing, collaboration, editing and, ultimately, output to auto drawings.
Plenty of other start-ups showed off their latest developments, including ShapeDiver, Spacio, Consigli, Sparkel, SpaceForm, Qonic and Swapp.
3. Community is king
While the presentations are carefully curated, NXT BLD and NXT DEV would be nothing without their amazing community. We would like to thank everyone for bringing their ideas and endless energy.
When navigating the Queen Elizabeth II Centre, you are never too sure who you'll meet next: an event speaker, past or present, a strategy lead for one of the major software companies, the head of global workstations for Lenovo, the director of research from a signature architect, or a specialist investor in AEC software.
Sharing opinions is highly encouraged and you will be surprised at just how many firms share views on technology, product fit, pricing, licensing and what technologies to watch. With your help, we genuinely believe NXT BLD and NXT DEV have become the talking shops for activist customers.
Automation is closer than you think
New BIM code streams like Snaptrude, Qonic, and Arcol are leveraging next-gen cloud technology for BIM authoring. Developers are also using AI for automation, as seen with Swapp and expert systems for specific building types like HighArc. Two main tracks have emerged: those aiming to replace Revit and monolithic BIM, and those adding automation to the workflow, reducing timelines and increasing value.
Skema integrates with established BIM tools, allowing users to model with predefined assemblies to achieve detailed models quickly. Augmenta auto-wires buildings and will soon include MEP. The most transformative new tech is auto drawings, led by Swapp and Gräbert, and also being developed by Bentley, Autodesk, Nemetschek, and EvolveLab, potentially halving drawing production work. NXT DEV united the key players in this field.
Panellists not afraid to speak their minds
There is nothing worse than going to an AEC technology conference where every talk is a product pitch. NXT BLD and NXT DEV are not about that. If someone is talking about their product or technology, it’s because they are a hot start-up that we rate, or have something very cool in development.
We try to put on panel discussions which include knowledgeable voices from both practice and development. Sometimes we
have to address a topic that is negatively impacting the industry and sparks will fly. At NXT DEV this year, the Pricing, Licensing and Business Models panel was a case in point, as HOK, Grimshaw, BIG and Buro Happold let rip.
Check out our other 2024 talking points: Design Automation, Auto Drawings, Open USD, Openness in AEC, AI in general practice, Digital Fabrication and BIM 2.0.
A recurring theme each year is data, and the conversations are getting more frequent as firms assess the impact of moving away from files to a data-centric world.
At NXT BLD, Autodesk announced its first steps towards the granularisation of BIM, Greg Schleusner of HOK gave his traditional update on how the industry can harness smarter, open data schemas to take back control, and Julien Moutte, CTO of Bentley Systems, reinforced the move to open data for the industry.
While many of the for-profit, publicly listed CAD vendors are speaking about openness, we were honoured to have Antonio González Viegas, CEO of That Open Company, and Francesco Siddi, general manager of Blender, on hand to add a true 'open reality' to the conversation.
There’s nothing better to brush off the morning cobwebs than a trip into hyperspace. Emmy award nominated visual effects (VFX) supervisor, Scott Pritchard of Industrial Light and Magic, put a Jar Jar Binks-sized smile on everyone’s face as he revealed the VFX secrets of the Star Wars Universe. Digital artistry and VFX know-how met photogrammetry, green screens, and giant LED walls as he inspired NXT BLD’s Jedis.
AI IN VISUALISATION: TRANSFORMING ARCHITECTURE
Artificial Intelligence (AI) is having a profound impact on architectural visualisation. In this special report, learn how to turbo-charge your viz workflows with the latest AI technologies, powered by Lenovo™ ThinkStation® and ThinkPad® workstations with NVIDIA RTX™ GPUs
Architectural visualisation has long been pivotal in bringing architectural designs to life. It enables designers and visualisation specialists to create realistic images, animations and real-time experiences of projects for better understanding, clearer communication, and compelling presentations. With the advent of artificial intelligence (AI) and AI-optimised Graphics Processing Units (GPUs), this field is undergoing a transformative evolution.
Renders that used to take minutes or hours now happen in real time. Concept
design now goes beyond the sketch with generative AI producing incredible visuals to take designs in bold new directions.
All of this is made possible by AI-optimised Lenovo ThinkStation and ThinkPad P Series workstations with powerful NVIDIA RTX professional GPUs.
Rather than simply relying on ‘brute force’ processing, AI makes smarter use of modern compute resources.
Tensor cores in NVIDIA RTX GPUs, for example, are dedicated entirely to AI processing. They dramatically improve performance in real-time tools like Enscape, D5 Render, Chaos Vantage, NVIDIA
Omniverse, and Unreal Engine, and can also slash render times in production renderers like Chaos V-Ray GPU.
NVIDIA RTX Tensor cores are also instrumental in accelerating generative AI tools like Stable Diffusion, which can produce high-quality images from textual descriptions.
In this 'AI in visualisation' special report we'll explore how software is changing, how architects can benefit from new AI-accelerated workflows and how to ensure you have the right workstation hardware in place to get the most out of these exciting new technologies.
GEN AI: DESIGN INSPIRATION
Generate concept designs beyond the imagination and go from sketch to compelling visuals in record time
Generative AI, which involves AI models that generate text, images, and other content from the data they were trained on, burst onto the scene in 2022 and is already having a profound impact on architectural visualisation.
The most common application within architectural design is text-to-image generative AI, using software such as Stable Diffusion and Midjourney. Here an architect can quickly generate hundreds of images of architectural ideas, simply by crafting natural language descriptions, called prompts.
Text-to-image generative AI is especially useful during the early design phase, where quick exploration and iteration is critical. Traditional sketching or 3D modelling is typically constrained by the speed with which one can stroke a pen or move a mouse. With generative AI the results come back in a matter of seconds, allowing
architects to explore more options than ever before.
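For readers who want to see how little is involved, the sketch below shows prompt-driven image generation using the open-source Hugging Face diffusers library, one of several ways to run Stable Diffusion locally. The model ID, prompt and file name are examples only:

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# Assumes a local NVIDIA GPU plus the torch, diffusers and transformers packages.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint; swap in your preferred model
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("aerial view of a timber-clad community library in a dense urban block, "
          "overcast daylight, photorealistic architectural render")

# Each call returns a batch of images; loop over prompts or seeds to explore variations quickly.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("concept_01.png")
```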
Using AI for conceptual design also opens up new possibilities when it comes to creativity. By blending styles, design ideas generated by AI often go beyond the imagination of the architect, leading to new architectural vocabularies.
Of course, text-to-image generative AI does not come without challenges. Crafting a good prompt demands skill, and getting the desired results requires refinement and trial and error. The resulting images are often low-resolution but can be upscaled using AI.
The technology is advancing at an incredible rate. There are now new generative models and tools that provide much more control over the output.
Stable Diffusion, which is very well suited to architecture, offers several add-ons to help push output in specific directions.
The first is ControlNet, a neural network
extension that allows the user to guide the AI through the composition of an uploaded image. This could be a sketch, render, photo, or even a screen grab of a concept model in SketchUp. There are several ways the AI can infer composition from the image — through depth, line art, or MLSD (Mobile Line Segment Detection), which detects straight lines, so is well suited to architecture.
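As a rough sketch of how composition guidance fits together in code, the example below pairs Stable Diffusion with an MLSD ControlNet using the diffusers and controlnet_aux libraries; the model IDs, input image and prompt are illustrative:

```python
# Sketch: guiding Stable Diffusion composition with a ControlNet (MLSD straight-line detection).
# Assumes torch, diffusers and controlnet_aux; model IDs and file names are illustrative.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import MLSDdetector

# Extract straight lines from a sketch, render or SketchUp screen grab.
mlsd = MLSDdetector.from_pretrained("lllyasviel/ControlNet")
control_image = mlsd(load_image("concept_model_screengrab.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-mlsd", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt sets style and materials; the detected lines lock the composition.
image = pipe(
    "glass and weathered-steel pavilion, dusk lighting, photorealistic",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("guided_concept.png")
```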
The second is LoRA (Low Ranked Adaptation), a training method that is used to fine tune Stable Diffusion models by capturing the unique styles and attributes of a set of images. One could, for example, train a LoRA on a set of renders to mimic a specific architectural style.
LoRA training is very GPU intensive, and can take some time to master. The good news is there’s an active community sharing LoRAs on www.civitai.com. Ismail Seleit is a London-based architect whose LoRAs include one for ‘contemporary modular refined designs with realistic materials’, another for architectural scale models, and even one which gives the look and feel of a hand sketch.
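Applying a downloaded LoRA is then a one-line addition to a standard pipeline. The sketch below assumes the diffusers library; the LoRA file name and prompt are illustrative:

```python
# Sketch: applying a community LoRA to push Stable Diffusion towards a specific style.
# The LoRA file path is illustrative; trained LoRAs are shared on sites such as civitai.com.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras/contemporary_modular_style.safetensors")

image = pipe("courtyard housing scheme, realistic materials, hand-sketch look",
             num_inference_steps=30).images[0]
image.save("lora_styled_concept.png")
```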
Open source tools like Stable Diffusion deliver excellent results, but there can be a steep learning curve. Several AEC software developers are building generative AI tools that are designed specifically for architects, with a simplified set of commands that are directly accessible inside BIM software. For example, Nemetschek’s AI Visualizer integrates Stable Diffusion inside Archicad, Vectorworks and Allplan. Veras is an AI renderer for SketchUp, Rhino, Revit, Forma and Vectorworks. There’s also SketchUp Diffusion from Trimble. All three tools use simple 3D concept models, text prompts and sliders to guide the AI.
GETTING STARTED WITH STABLE DIFFUSION
Stable Diffusion is one of the most popular tools for generative AI in architecture. It’s not only hugely powerful, but has the added benefit of being open source. This means it’s free to download and run on local hardware, so you can take full advantage of
your workstation’s powerful NVIDIA RTX GPU without having to pay for cloud GPUs. To help workstation users get up and running quickly NVIDIA has published a step-by-step guide. The guide explains how to install the software and use the TensorRT extension for
Stable Diffusion Web UI, using Automatic1111, the most popular Stable Diffusion distribution.
According to NVIDIA, the extension doubles the performance of Stable Diffusion by leveraging the Tensor Cores in NVIDIA RTX GPUs.
ARCHICAD AI VISUALIZER
AI FOR RENDERING (DLSS)
Turbocharge your viz tools with smart AI technologies that bring new efficiencies to real-time rendering
In the realm of architectural visualisation, it’s a continual challenge to strike a balance between real-time interactivity and photorealism. However, advances in Graphics Processing Unit (GPU) technology and the integration of artificial intelligence (AI) have significantly transformed this landscape. Applications such as Enscape, Chaos Vantage, D5 Render, NVIDIA Omniverse and Unreal Engine are at the forefront of this revolution, harnessing the power of modern workstation GPUs and smart AI processing to deliver stunning visualisations with unprecedented speed and accuracy.
While generative AI may have only recently catapulted AI into the mainstream, AI has for many years played a pivotal role in advancing real-time architectural visualisation. NVIDIA Deep Learning Super Sampling (DLSS) technology has been at the forefront of this revolution.
WHAT IS NVIDIA DLSS?
NVIDIA DLSS launched in 2018, alongside the first NVIDIA RTX GPUs, which introduced dedicated AI hardware known as Tensor Cores. DLSS has now evolved into a set of AI-accelerated technologies that software developers can integrate into their real-time visualisation tools to boost performance. This is often measured in Frames Per Second (FPS).
The exciting thing about NVIDIA DLSS is that it takes a smarter approach to graphics processing. Rather than simply throwing a bigger GPU at the problem, DLSS increases frame rates by using Tensor cores to bypass the traditional graphics pipeline with no discernible loss in visual quality. This not only means real-time experiences become smoother but the workstation can handle more complex, visually rich models.
The latest incarnation, NVIDIA DLSS 3.7, includes three distinct technologies: Super Resolution, Ray Reconstruction and Frame Generation.
DLSS Super Resolution boosts performance by using AI to output higher resolution frames from a lower resolution input. In short, one can get 4K quality output while the GPU only renders frames at FHD resolution, a quarter of the pixels per frame.
DLSS Frame Generation boosts performance by using AI to generate more frames. It only works on Ada Generation GPUs. The Tensor cores process the new frame and the prior frame to discover how the scene is changing, then generate entirely new frames without having to process the graphics pipeline.
DLSS Ray Reconstruction enhances image quality by using AI to generate additional pixels for intensive ray-traced scenes. It replaces hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays.
Chaos Vantage, D5 Render, Unreal Engine, and NVIDIA Omniverse were among the first real-time visualisation tools to integrate DLSS Ray Reconstruction, featuring AI-enhanced real-time preview modes with ray tracing.
According to NVIDIA, with both DLSS Frame Generation and Ray Reconstruction enabled in D5 Render, FPS in the viewport increases by 2.5x, enabling incredible resolution and visual quality in huge scenes. Other architectural visualisation tools are using AI in different ways. Chaos V-Ray, which is renowned for its production quality output, is using NVIDIA AI denoising to decrease grainy spots and discolouration in images while minimising the loss of quality. The NVIDIA Omniverse RTX Accurate (Iray) renderer also uses AI for denoising.
AI WORKSTATIONS FOR VIZ
Lenovo workstations with NVIDIA RTX GPUs are AI-ready and optimised for the most intensive AI tasks
Architectural visualisation has some of the most demanding workflows in AEC, especially now that AI is accelerating viz software in many different areas. In order to take full advantage of these transformative technologies — now and well into the future — it is more important than ever to have the right workstation hardware in place.
The entire range of Lenovo ThinkStation and ThinkPad P Series workstations with powerful NVIDIA RTX GPUs are ‘AI-ready’ and purpose built to accelerate the most challenging AI workflows. This
includes a wide range of tasks for AI visualisation, as well as custom AI training, development and inferencing.
For visualisation, the Lenovo ThinkStation P8, with support for up to three NVIDIA RTX GPUs, can handle the most intensive workflows. This includes training LoRAs for Stable Diffusion or rendering V-Ray scenes at lightning speeds.
Not all architects need such powerful workstations to take advantage of AI in visualisation. The compact Lenovo ThinkStation P3 Ultra SFF, for example, provides a great entry point for generative AI and AI-accelerated real-time viz.
What’s more, designers do not need to be tied to their desks. All ThinkStation workstations can be rack mounted and set up for remote access, and with Lenovo ThinkPad mobile workstations, designers can work from anywhere.
Of course, Lenovo workstations are much more than their constituent parts. Their renowned build quality is designed to withstand the rigours of professional use, while running cool and quiet, and with impressive reliability. This is critical for all viz workflows but especially for those that involve intense computation, such as training AI or rendering out 4K videos.
AI viz workflows
• Entry-level BIM-centric real-time viz workflows
• Image generation in Stable Diffusion
AI viz workflows
• Mainstream real-time viz with more complex datasets and at higher resolutions
• Image generation in Stable Diffusion
AI viz workflows
• High-end real-time viz and production rendering with the most demanding datasets
• Image generation and LoRA training in Stable Diffusion
AI viz workflows
• Entry-level to mainstream real-time viz workflows
• Image generation in Stable Diffusion
NVIDIA RTX GPUs - ACCELERATING AI VIZ WORKFLOWS
NVIDIA RTX GPUs, at the heart of Lenovo workstations, are a critical component for AI, and arch viz in general. Tensor cores within the GPUs are dedicated entirely to AI processing to massively accelerate text-to-image generative AI tools like Stable Diffusion, and dramatically boost performance in real-time viz software through NVIDIA DLSS.
NVIDIA RTX GPUs also feature RT cores for ray tracing to simulate the behaviour of light, and CUDA® cores for general tasks, including rasterisation, to turn complex viz model geometry into pixels.
For visualisation workflows, one of the most important characteristics of NVIDIA RTX GPUs is the amount of on-board memory. The NVIDIA RTX 2000 Ada Generation GPU comes with 16 GB, which is a good starting point, but for the most demanding workflows, there’s the NVIDIA RTX 6000 Ada Generation with 48 GB.
GPU memory is required to load up complex viz datasets, including textures. Memory demands grow when models are viewed or rendered at higher resolutions.
Stable Diffusion is also very memory hungry, especially when training, and while there are workarounds when GPU memory is in short supply, this can have a dramatic impact on performance.
It’s also important to understand that GPU memory must be shared
between applications. It’s not uncommon for architects to use multiple applications at the same time — render in V-Ray while working on real-time experiences in Unreal Engine, for example.
Running out of GPU memory can have a dramatic negative impact on performance, meaning users have to make compromises to the way they want to work.
Some viz workflows will benefit greatly from having more than one NVIDIA RTX GPU in a single workstation. With V-Ray GPU and NVIDIA Omniverse RTX Accurate (Iray), for example, rendering performance scales near linearly across multiple GPUs.
lenovo.com/aec
Of course, it’s still very early days for AI in visualisation, and NVIDIA RTX GPUs are already pioneering new technologies which could soon become an important part of viz workflows. This includes Neural Rendering, which can use AI to ‘learn’ how light behaves in a scene.
Autodesk’s granular data strategy
Autodesk’s new AEC data model API marks the beginning of the transition from monolithic files, such as Revit RVT, to granular object data to open up new opportunities for sharing information and greater insight into projects.
Martyn Day explores what this might mean for AEC firms moving forward
In AEC software, the last forty years have been about buying branded tools and creating associated proprietary files. Compatibility has been achieved by basically buying the same software as your supply chain and sharing the same proprietary files.
The next forty years will be all about the data: where it’s stored and how teams can access it. Monolithic desktop solutions will be replaced with discrete cloud-based services. These will provide different ‘snapshots’ of project data (different disciplines, project activities, meta data), which will be accessed through seamless Application Programming Interfaces (APIs).
The advantages will be no longer needing to send data around in big lumps, caching huge files, or cutting models up. With data centrally sourced, collaboration can be built in from the ground level, and granular data opens up new data sharing opportunities, with greater insight into projects.
The key issue for the main software industry players, and their customers, is how do they get there? For companies like Autodesk, this is a significant challenge. It’s the market leader by volume, so has a lot to lose if it gets it wrong.
The company is currently developing
Forma as its next generation cloud-based platform for AEC. Today, it may look like just another conceptual tool but there is a lot of engineering work taking place ‘under the hood’. In June, the day before AEC Magazine’s NXT BLD event, Autodesk announced the first downpayment on opening up the RVT file to new levels of granularity with the general availability of its AEC data model API.
The path to granularity
The AEC data model API enables the break-up of monolithic files, such as Revit RVTs and AutoCAD DWGs, into ‘granular object data’ that is managed at a sub-file level in Autodesk Docs on the company’s secured cloud. This data is accessible in real time via Autodesk’s APIs and enables new capabilities.
The API is still in development and, for now, it only accesses metadata, without geometry, but this can be used to build dashboards or access design information that can be tabulated. Geometry will be the next layer of capability
added to the API. At the time of release, Autodesk stated: “Through the AEC data model, we look to deliver a platform that prioritises a transparent and common language of describing AEC data, enabling real time access to this data and ensuring that the right data is available to the right people at the right time.”
Over time, Autodesk will continue to build this capability out, allowing developers to read, write, and extend subsets of models through cloud-based workflows via a single interface. There will not be a need to write custom plug-ins for individual desktop authoring applications like Civil 3D, Revit, Plant 3D and other AEC connected design applications. The file-based products will store their designs as files on Autodesk Docs and will be, on demand, ‘granularised’ to meet customers’ requirements. However, in order to make AEC data more accessible / interoperable, everything must be restructured (converted), enabling the data to be remapped and connected across AEC disciplines. The company believes the AEC data model API technology will lead to enhanced support for iterative and distributed workflows.
How Autodesk introduces the AEC data model and API. Slides taken from Sasha Crotty & Virginia Senf’s ‘A new future for AEC data’ presentation at NXT BLD 2024. View the video at www.tinyurl.com/NXTAEC-Autodesk
AEC data model API capabilities
As mentioned earlier, this is just an initial instalment of granular capability. The API allows the querying of key element properties from ‘published’ Revit 2024 and later models, where ‘published’ means that the files are stored on Autodesk Docs.
The AEC data model API exposes these properties through a simple-to-use GraphQL interface, tailored to the AEC industry. GraphQL is an open-source data query and manipulation language for APIs, and a query runtime engine.
Using GraphQL, users can access Autodesk Construction Cloud (ACC) accounts, projects and designs and retrieve element and parameter values. It’s possible to retrieve different versions of a design and query for elements at a specific design version. Users can search for elements within a design, or across designs within a project or hub, using specified search criteria. It’s possible to list all property definitions and to query elements based on their properties, such as category (doors, windows, pipes, etc.), parameter name and value (area, volume, etc.), or materials.
Autodesk expects customers and developers to automate workflows, such as identifying anomalies within designs (quality checking), locating missing information and comparing differences between designs. It’s possible to generate quantity take-offs and schedules, build dashboards and generate reports.
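As a rough illustration of the workflow, the sketch below shows how a GraphQL query is typically sent over HTTP from Python using the requests library. The endpoint URL, query fields and IDs are placeholders rather than Autodesk’s published schema, so the real type and field names should be taken from the AEC data model API documentation:

```python
# Illustrative sketch of calling a GraphQL API over HTTP from Python.
# The endpoint, query fields and IDs below are placeholders, not Autodesk's published schema;
# consult the AEC data model API documentation for the real types and field names.
import requests

APS_TOKEN = "..."  # a valid Autodesk Platform Services access token
ENDPOINT = "https://developer.api.autodesk.com/aec/graphql"  # assumed endpoint; verify in the docs

query = """
query DoorsInProject($projectId: ID!) {
  elementsByProject(projectId: $projectId, filter: {category: "Doors"}) {
    results {
      name
      properties { name value }
    }
  }
}
"""

response = requests.post(
    ENDPOINT,
    json={"query": query, "variables": {"projectId": "your-project-id"}},
    headers={"Authorization": f"Bearer {APS_TOKEN}"},
)
response.raise_for_status()
for element in response.json()["data"]["elementsByProject"]["results"]:
    print(element["name"], element["properties"])
```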
For now, data granulation and viewing the results is free, but there are rate limits based on a points system. Overall, users are allowed 6,000 points per minute, and each individual request is limited to 1,000 points. If you exceed these limits, Autodesk’s servers will not return the information. The cost varies by function: a query return is rated at 10 points and an object info request at one, so the per-minute budget works out at roughly 600 typical queries. There is more information about this online.
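For scripted workflows, it is worth pacing requests against that budget on the client side. The sketch below is a back-of-the-envelope illustration using the figures quoted above; the helper is hypothetical, and production code should also respect any rate-limit responses the service actually returns:

```python
# Illustrative client-side pacing against a points-per-minute budget (figures from the article).
import time

POINTS_PER_MINUTE = 6000
QUERY_COST = 10          # points for a typical query return
MAX_PER_REQUEST = 1000   # points allowed in a single request

def paced_requests(requests_to_send, cost=QUERY_COST):
    """Yield requests while spacing them so the per-minute spend stays under budget."""
    spent, window_start = 0, time.monotonic()
    for req in requests_to_send:
        if spent + cost > POINTS_PER_MINUTE:
            # Wait out the remainder of the current one-minute window, then reset the budget.
            time.sleep(max(0.0, 60 - (time.monotonic() - window_start)))
            spent, window_start = 0, time.monotonic()
        spent += cost
        yield req  # caller sends the actual HTTP request here
```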
Open or walled garden?
In recent times, Autodesk has been making a lot of noise about being open and being a better open citizen. It has licensed the Industry Foundation Classes (IFC) toolkit from the Open Design Alliance (ODA), the most widely used IFC creation tools; it has created export services on the cloud to translate between different applications; and it has signed multiple interoperability deals with competitors, such as PTC, Bentley Systems, Trimble and, most recently, Nemetschek.
But why are interoperability agreements required in the first place? If you are open, you are open and surely permission would not be required? A lot of these agreements are about swapping software, perhaps file format libraries but increasingly it’s about rights to have API access.
This is the big issue for cloud. If you move your data to the cloud, where it might be translated into a proprietary database format, the only ways to get access to your data are via file export or API calls.
Today, while it would be great to get RVT, DGN, DWGs out with great file compatibility, in five to ten years’ time, this will be as exciting as getting a DXF. Project information for all disciplines at high levels of granularity will deliver greater collective benefits than relatively dumb files. API access really is at the control of each company and allows firms to wall off their customers’ data to selected developers.
Autodesk’s cloud-based API, which was called Forge but is now Autodesk Platform Services (APS), comes with terms of usage, one of which, 5.3, simply states: ‘No use by competitors - Except with Autodesk’s prior written consent, you may not access or use the services if you are a competitor of Autodesk.’
For me, this seems to be the main reason for the agreements: to give express permission to access customers’ data via APS. But this isn’t given to all: it has to be negotiated and I assume it’s on a quid pro quo basis and probably depends on how much of a threat you are. This is a kind of openness, but it’s going to be highly conditional and could be revoked at any time.
As all software moves to the cloud, the API world is also trying to work out ways of financially rewarding the software firms. It is inevitable that API calls will be charged for. All this software sits on Amazon AWS or Microsoft Azure instances and all traffic comes with associated micropayments. Software firms are examining models to cover these charges while adding their profit margin. In the case of Autodesk, these are wrapped into the cloud credit system, covering functionality and the AWS bill. While the AEC data model API is currently free, it is throttled with an associated points system. Metering is an important metric for future business models.
Conclusion
Changing the fundamental technology on which your applications and customers have built businesses is not for the faint hearted. Keeping desktop software sales alive while re-engineering file-based workflows into granular ones is like changing a car tyre at 90 miles an hour. With the release of this AEC data model API, we now have some insight as to how Autodesk will engineer the data model component.
For now, you can use all Autodesk cloud services as you currently use them. Instead of forcing a translation with every Revit file save on the fly, the granularity of files is handled on demand, for those that want to take advantage of it. The API, now and seemingly for quite a while to come, is all about output from Autodesk Docs (viewing, tabulating, querying) as opposed to doing something to the data and sending it back to Autodesk’s servers. This, I guess, is mainly about security.
Being able to view granular info via a simple interface is obviously a huge advantage over file-based workflows, where a lot of the BIM metadata goes to die. In some respects, there is ‘an openness’ here, enabling the viewing of design data held in proprietary files, held in a proprietary database, on a paid-for service. The fact is you still need to be an ACC subscriber to generate the granular data in the first place.
The most interesting development will be when geometry can also be output with the meta data through the Autodesk AEC data model API. Will the geometry be Universal Scene Description (USD), or will it be like IFC.JS, with all the granular object data?
With IFC.js and open-source products like Speckle, it’s already possible to make data granular right out of the Revit desktop app, without having to pay to use Autodesk’s cloud services. The battle for granular data mobility has started.
■ www.tinyurl.com/AEC-data-model
Watch Autodesk’s NXT BLD 2024 presentation
Virginia Senf
Sasha Crotty
Building bold at Maggie’s cancer centre Case study
Studio Libeskind’s Maggie’s cancer centre at The Royal Free Hospital with its complex geometry, and curved facade with edge columns raking in two directions, demanded an integrated approach for construction
Renowned for both its distinctive architectural design and the exceptional dedication to supporting individuals with cancer, the newest Maggie’s cancer support centre is situated in the grounds of the Royal Free Hospital in London. Showcasing a custom raking curved facade, inclined walls, distinctive timber cladding, and a secluded rooftop garden and pavilion, this diminutive yet architecturally bold structure designed by New York-based Studio Libeskind set the engineers and client team a considerable challenge.
William Hare, the structural steel engineering group, was appointed by construction manager and principal contractor Sir Robert McAlpine Special Projects on behalf of Maggie’s to deliver the project’s structural steelwork.
Speaking about the project, Ivo Garcia, Innovation & BIM Manager at William Hare said: “This project was different right from the outset, not just in terms of its charitable nature but also the structure’s volume, its striking shape and the incredibly collaborative environment curated amongst stakeholders.
“The interesting geometry and shape of the structure, featuring a series of curved conical volumes, was a direct result of the site’s small footprint and challenging constraints, which included neighbouring retaining walls. In order to deliver the desired square footage, the decision was made to design a sloped façade, enabling the building to expand as it rises.
“These same site constraints also posed a challenge to us in terms of site delivery and installation. With some of the structural columns leaning up to 45 degrees, the standard approach would have involved the use of temporary steelwork to support the columns. However, external propping wasn’t possible due to the aforementioned site boundaries, while propping internally wouldn’t have allowed enough space for machinery and access. Despite only being a two-storey structure, the temporary forces would have been akin to an eight-storey building as a result of the geometry, requiring a considerable amount of temporary steelwork.”
From design to installation
In order to work around this, William Hare had to plan and detail an incredibly prescriptive method of install, which drove the project from start to finish.
The structural model was initially created in SAP2000 alongside Tekla Structures, before being pushed into cloud-based collaboration platform Trimble Connect, where the 40-step install sequence was animated to show how it would be fitted, piece by piece.
“This was hugely valuable,” says Garcia. “Effectively, anyone can use Trimble Connect, whether to consult a model or add data to objects in Tekla Structures. You don’t have to be a Tekla user to benefit from it, which made the most difference – it truly helps to break down barriers between departments.
“The cloud-based platform enabled us to overcome the logistical challenges and clearly demonstrate and communicate our proposed strategy, all of which contributed to a smooth construction sequence, loading strategy and on-site erection. For example, we could easily break down the lorry load numbers and detail the individual component install sequence within each load, all colour-coded and clearly visible. Using Trimble Connect, we were able to make this same information readily available to engineers, detailers and the onsite team. It provided a great means of successfully managing the project, adding intelligence to the model and offering an enhanced method of communication.”
From BIM model to field
Pushing the capabilities of Trimble Connect further, William Hare also made use of Trimble Connect AR and Trimble Field Link to drive data from the model out into the field.
“The challenging structural geometry and complex installation sequence really demanded this digital workflow,” explains Garcia. “Adopting the streamlined approach offered by Trimble Field Link just makes sense, especially if you are already using other Tekla software solutions – it’s the same ecosystem, with the same data flowing seamlessly from office to field. Having the model data readily available, whether you’re in the office or out on site was invaluable.
Structural model of complex steelwork
“Ordinarily, there is the potential for time to be wasted when problems are found or doubts arise, with people having to travel from site to office or spend time on the phone with a member of the detailing or engineering team. When all the information is locked within a system and a skillset that not everyone has access to, it can bring inefficiencies. If we can make curated data readily available to the site team this can all be avoided, providing teams with the context, the data and the means to action it.
“We had around 25 Trimble Connect users on the project. Outside of William Hare’s project and engineering teams, client representatives and their designers and architects, erection subcontractors and our production planning team all had access to the project in Trimble Connect. It really was at the centre of it all.
“While communication with other stakeholders on this project was primarily in 3D, with the preferred file format being IFC, this is sadly still not the norm. Here, the complex geometry made it essential and was possible due to the enhanced collaboration fostered within the delivery team.
“Using 3D continues to have its complications and barriers – while it can be used extensively for engineering, it is still often followed by drawings, with these drawings being what ‘rule’.
Confidence is needed on an industry-level that 3D is the best tool for communicating information on every level, from contractors to clients to developers. We need to embrace it and find the necessary framework to deliver it.”
Maggie’s at the Royal Free Hospital was officially opened in January 2024, with William Hare’s work on the project recognised in the 2024 UK Tekla Awards, winning the ‘Public Project’ category.
Garcia concluded: “The Maggie’s cancer support centre project was a special one for William Hare as it was charitable in nature; having the opportunity to contribute so strongly to the support of anyone with cancer or their families has been described as ‘an invaluable opportunity’ for the team involved.”
■ www.tekla.com/uk
The interesting form of Maggie’s cancer centre was a direct result of the site’s small footprint and challenging constraints
Maggie’s building in context with Trimble cloud survey data
Studio Libeskind rendering
THE FUTURE SPECIAL
THE FUTURE OF AEC SOFTWARE SPECIAL REPORT
Foreword
Anybody working in design technology can sympathise with the field’s simultaneously fast and slow pace. On the one hand, we see topics such as AI and digital twins presenting new possibilities. On the other hand, we have our day-to-day tools. Whilst object-oriented parametric modelling may have been an innovation a decade ago, the general AEC software estate has remained relatively stagnant in terms of efficiency and innovation. When considering this in the broader context of our industry’s challenges – our need to be more climate-responsible, safety-conscious, and profitable against increasingly shrinking margins – something has to change.
This field of ours has a unique role to play in sidestepping commercial competition and driving for change. This is how the Future AEC Software Specification came about and the spirit upon which it was developed. While architects, engineers and constructors directly compete, the design technology leadership of these firms have developed a culture of open collaboration, sharing their experiences with peers for bidirectional benefits and enabling the broader industry. These groups had instigated various initiatives, including direct feedback to vendors, feature working groups, industry group discussions, etc.
At the height of the pandemic, many found the opportunity to step back and evaluate the tools they use to deliver against the increasing investments they’re making. A consensus of views about the value proposition resulted in the development of an Open Letter to Autodesk (www.tinyurl.com/AEC-openletter).
Contributors to the Open Letter, many putting their name to it, others wishing to remain anonymous, hoped that it would encourage further development in the tools we use to design and deliver projects.
In the years since, we have received engagement from further national and international communities. As a group, we have reconvened to reflect on whether the original objective was achieved. A limiting barrier to that objective was its direction towards a single vendor, not emerging
startups and broader vendors that could develop faster and quickly embrace modern technology’s benefits (cloud / GPU / AI, etc.). With that in mind, we pivoted from a reactive, single-vendor dialogue to a more proactive open call to the software industry.
Future AEC Software Specification
We aimed to set out an open-source specification for future design tools that facilitates good design and construction by enabling creative practice and supporting the production of construction-ready data. The specification envisages an ecosystem of tools that are best in class at what they do, overlaid on a unifying “data framework” to enable efficient collaboration with design, construction, and supply chain partners. Once drafted, the specification was presented and agreed upon as applicable by both national and international peers, with overwhelming support that it tackled the full breadth of our challenges and aspirations. This specification has coalesced around ten key features of what the future of design tools needs:
• A universal data framework that all AEC software platforms can read from and write to, allowing a more transparent and efficient exchange of data between platforms and parties.
• Access to live data, geometry, and documentation beyond the restrictions of current desktop-based practices.
• Design and evaluate decisions in real time, at any scale.
• User-friendly, efficient, versatile and intuitive.
• Efficient modelling with increasing accuracy, flexibility, detail and intelligence.
• Enables responsible design.
• Enables DfMA and MMC approaches.
• Facilitates automation and the ability to leverage AI at scale and responsibly as and when possible.
• Aids efficient and intelligent deliverables.
• Better value is achieved through improved access, data, and licensing models.
Since being launched and presented at NXT DEV in June 2023 (www.tinyurl.com/nxtdev-23),
there has been a vast range of engagements by additional interested design, engineering and construction user groups, resulting in expansions and tweaks to the originally written spec.
In summary
• buildingSMART has confirmed that IFC5 is in active development, sharing the structure and end goals presented within the specification’s Data Framework chapter: a cloud-centric schema for vendor-neutral information collaboration as a transaction.
• The group has engaged with software developers, startups and venture capitalists to better discuss the specification and how it relates to their current tools, roadmaps, acquisitions and investments.
• Some of the major software houses have opened their doors and invited specification authors to sit on advisory boards, attend previews and advise on aligning with the specification.
• The specification, structured around its ten core tenets, is being trialled as a means of assessing software offerings as part of an AEC software marketplace.
• Internationally, the specification has proven influential in significant research projects such as Arch_Manu in Australia, providing base reading material and a guiding framework for a programme supporting five PhD candidates over the next five years.
• Leaning into the intention that the Data Framework should be industry-led, we are actively exploring opportunities for how this can be structured and move forward meaningfully.
• We are connecting with like-minded parties in the US, Europe, Asia, and Australia, who are undertaking their own developments and experiments using a common approach to data. We believe the specification can be a rallying point for all of these initiatives to work together rather than apart.
In this AEC Magazine Special Report we explore the tenets of the Future AEC Software Specification
■ www.future-aec-software-specification.com
By Aaron Perry, head of digital design at Allford Hall Monaghan Morris (right) and Andy Watts, director of design technology at Grimshaw (left)
A framework for data
The premise is simple: every time we import and export between software using proprietary file formats and structures, it is lose-lose. We’re losing geometry, data, time, energy, sanity.
In AEC, some design firms have just accepted the path of least resistance and avoid file exchanges as much as possible by settling on a single tool that does a little bit of everything in an average way. A few of the larger firms use whatever the best tools for the job are. However, they are paying for it by building custom workflows, buying talent, and building conversion software. But that doesn’t change the fundamental problem: the data is still ‘locked’ and only readable via a proprietary file type.
So, what would the solution look like? Let’s make an extremely simplified comparison to Universal Scene Description (USD). In the media and entertainment world, Disney Pixar, as the player with the largest market share, said enough is enough: we’re losing too much time translating geometry, materials and data through different tools and software file types. So it created USD and released it as open source. Like AEC, media and entertainment needs to develop 3D models in flexible modelling software, and to calculate and build lighting models, spaces, realistic materials, movement, and even extra effects like fur, water and fire. Each has a specific modeller that is best in class at that specific output.
With USD, a common data structure sits outside of the software product. Each software vendor was ‘forced’ to adopt and read USD data layers, variants and classes, and would read and write back to the USD. No data loss is associated with importing or exporting because there is no transformation or translation. There were tons of additional benefits too, such as faster opening times (because you’re not actually opening all the data), which meant sending a job to the render farm was almost instant and didn’t require massive packaging of model files. Collaboration was massively improved, with sublayers ‘checked out’ by relevant departments or people, all working concurrently. Its standardisation also created a level playing field for developers, who no longer had to build many versions of their tools as plugins for different modellers. If it can work with USD, it can work with everyone.
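To make the layering idea tangible, here is a minimal sketch using Pixar’s open-source USD Python bindings (the pxr module). The file and prim names are invented for illustration, and a real pipeline would involve far richer layers:

```python
# Sketch: how USD composes a scene from sublayers, using Pixar's open-source Python bindings (pxr).
# File and prim names are illustrative; each discipline authors its own layer and the stage composes them.
from pxr import Usd, UsdGeom, Sdf

# Each discipline authors its own layer (created here as empty stand-ins for the example).
for name in ("architecture.usda", "structure.usda"):
    Sdf.Layer.CreateNew(name).Save()

# A shared project layer composes the discipline layers as sublayers instead of merging their data.
root = Sdf.Layer.CreateNew("project.usda")
root.subLayerPaths.append("architecture.usda")
root.subLayerPaths.append("structure.usda")
root.Save()

# Opening the stage composes everything; edits go back to whichever layer is set as the edit target.
stage = Usd.Stage.Open("project.usda")
UsdGeom.Xform.Define(stage, "/Building")
stage.GetRootLayer().Save()
```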
Does all this sound fantastic and somewhat transferrable? So, where could this be adapted or improved for AEC? There’s a much longer version as to why USD isn’t adaptable for AEC, and the reality is that USD is technically still a desktop file format, with variants like .usda, .usdz or .usdc. It’s the same answer for IFC, which is a mature and relevant dictionary/library of industry-specific terms and categories, but is still an offline file format.
When we think of our Future AEC software specification, it sets out that data needs to move outside of formats tied to desktop software products, the complete opposite of where we are today.
The proposal?
Within the tools we use (whichever is the best tool for the problem), the software reads and writes to a cloud-enabled data framework to get the information required.
That could be at one end of the scale, the primary architect or designer needing absolutely everything, heavily modelling on a desktop device, or at the other end, a stakeholder who only needs access to a single entity or component, not the entire file (see part 2).
For absolute clarity, we have cloud-based file hosting systems today, which don’t expose a granular data parameter level for each entity and component. Working outside file formats enables a more diverse audience of stakeholders to access, author and modify parameters concurrently, only interacting with the data they need, not a whole model. Collaborators can benefit from robust git-style co-authoring and commits of information, with full permissions, an audit trail and an acceptance process.
Moving away from file formats and having the centralised data framework enables local data to be committed to the centralised web data, allowing an entire supply chain to access different aspects of the whole project without losing geometry, data or time associated with file type translations.
As an architect, when I have finished a package of information, it is set for acceptance by another party, who then continues its authorship. This is game-changing and can hugely influence decision-making efficiency in construction projects. A granular Data Framework enables engagement, collaboration of data, diversification of stakeholders, and hybrid engagements from a much broader range of people with access to different technologies. It creates an equitable engagement for an audience not limited by expensive, elitist, complicated tools with high barriers of entry.
DATA FRAMEWORK
Where are we today?
Today, our tools connect through intensive prep, cleaning, exporting, and importing, even within tools made by the same vendor.
Some design firms avoid file exchanges as much as possible by accepting a single tool.
Larger or specialist firms use whatever is the best tool for the job, but they are paying for it by building custom workflows and conversion software.
In any event, vendor-locked proprietary formats and the collective energy wasted by our industry tackling this must change.
What’s needed?
• Entity component system, not proprietary file formats (a minimal sketch of the idea follows this list).
• Git-style collaboration and commits of local data to the cloud data framework.
• Shared ownership of authoring –transactional acceptance between stakeholders.
• An industry alignment of Greg Schleusner’s work with StrangeMatter (www.tinyurl.com/magnetar-strange-matter), the direction of IFC 5 from buildingSmart (www.tinyurl.com/IFC5-video)
and the Autodesk AEC Data Model (www.tinyurl.com/Autodesk-AEC-data).
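To make the first bullet concrete, here is a deliberately tiny sketch of the entity component system idea in Python. Everything in it (component names, values and helper functions) is invented for illustration: entities are just stable IDs, data lives in components attached to them, and each discipline reads or extends only the components it cares about, without opening a monolithic file.

```python
# Minimal entity component system (ECS) sketch: entities are IDs, data lives in components.
# Component names and values are invented for illustration only.
from collections import defaultdict
from uuid import uuid4

components: dict[str, dict[str, dict]] = defaultdict(dict)  # component name -> entity id -> data

def create_entity() -> str:
    return str(uuid4())

def attach(entity: str, component: str, data: dict) -> None:
    components[component][entity] = data

# An architect's tool authors classification and geometry references for a door...
door = create_entity()
attach(door, "classification", {"category": "Door", "fire_rating_minutes": 30})
attach(door, "geometry_ref", {"layer": "architecture", "id": "mesh-0042"})

# ...and a fire engineer's tool later adds its own component without touching the architect's data.
attach(door, "fire_strategy", {"compartment": "C2", "checked": True})

# Any stakeholder can query just the components they need.
fd30_doors = [e for e, c in components["classification"].items()
              if c.get("category") == "Door" and c.get("fire_rating_minutes") == 30]
print(fd30_doors)
```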
Final thoughts
Since July 2023, this has been the most active section of the specification, with frequent and consistent gains to connect thoughts and active development. We’ve been meeting with vendors, users, industry experts, Building Smart and many more. We have the right people and skills to solve this problem. It’s not impossible, and it won’t happen overnight.
Access across Use Cases, Personas and Locations 2
Depending on the task, we should choose the right hardware, to access the right tool, on a single stream of data
Twenty years ago, the design and construction industry looked very different.
We produced paper drawings, rolled them in cardboard tubes and cycled them across town to our collaborators.
Today, we upload thousands of PDFs and large, proprietary models to cloudhosted storage systems. It’s a more digitised exchange, but we’re still creating and developing that content locally.
Ten years ago:
Design studios and engineers operated in a reasonably consistent but traditional way. Each business would internally collaborate in the same building, sitting across from each other, with the same desktop machines, software builds, and versions, waiting 10 minutes to open models and more minutes each time they synchronised their changes. Sound familiar?
Today?
To recycle a phrase I stole from Andy Watts (www.linkedin.com/in/andy-r-watts): during the pandemic, as people needed to work from home, our office of five hundred people became five hundred separate offices.
In 2024, every design studio, engineering firm, and construction company will operate with a hybrid workforce between various offices, at their homes and increasingly with a global footprint. We’re not going to revert to office-only working.
Yet, with the software we need to use to deliver the scale and complexity of projects, we’re tied back to our office desktop computing and the infrastructure connecting them. I often reflect on an interaction
with a young architect from a couple of years ago, whose puzzled face still troubles me today, as I still don’t have a great answer to his conundrum: they have first-hand experience of the challenge in collaborating with colleagues, on the same spec hardware next to each other, with struggling performance. Yet outside of work, their benchmark is effortlessly engaging with a global group of 150+ strangers in a dataset the size of a city, down to the intricate details of a window, across a mix of hardware specifications, in a game… and confusingly returning to the office the next day for a significantly worse experience.
What’s needed?
Today, our data and energy are locked into inaccessible proprietary file formats, accessed only by expensive hardware, complex desktop software, and traditional ways of working collaboratively.
In reducing the barriers of entry (Desktop Hardware, On-prem infrastructure, and high-cost monolithic licence structures), we increase the opportunity for interaction by a broader, more equitable range of stakeholders. Each has the opportunity to engage with a project’s design, construction and operational data. Depending on the task and level of interaction or change, we should be free to choose the right hardware to access the right tool.
Within the tools we use (whichever is the best tool for the problem), the software should read and write to a cloudenabled data framework (see part 1) to get the required information.
Software should appropriately expose relevant tool sets and functionality for data interaction based on our chosen hardware:
• Lightweight apps for quick review, measuring or basic changes
• Desktop or cloud-augmented tools for heavy modelling, significant geometry changes, mass data changes or processing etc.
Importantly, in either case, users have access to interact with a consistently updated, singular data source (see part 1). All are live, not via copies or versions.
Call of Duty Warzone –150x people connected across varied hardware, in a city-scale dataset
Designing at Context and Scale
We accept that for a small set of design firms, the current design tools on the market are, for the most part, capable of delivering production models, drawings and deliverables for their projects.
However, for many who’ve helped put together this specification, our Mondays through Fridays deal with much more than that. Intricate, challenging, troublesome projects. From office buildings in the historic city to newly proposed neighbourhoods, even tall buildings, airports, stadiums and more.
We get it. Software developers must balance features and performance for most of their user base (good market fit) against the needs of the few (the 4 in 10 customers, though they often represent a larger overall quantity of users).
As a collective, we fundamentally believe that the vast majority of the tools we use today can barely support the projects delivered by most firms. Take a square mile of London around Bank, with a mix of intricate heritage buildings and modern tall glass icons. Add to that the buildings in between, the glue. These built assets need constant refreshes and attention to support adaptability and appropriate longevity.
Designers are struggling to deliver these projects efficiently.
To accommodate the required level of detail that we need to deliver both digital
models and the construction documentation, we’re forced to take complex technical strategies and workarounds to subdivide and fragment models by purpose or volume to try and collate the whole picture. This approach results in teams no longer working on a holistic design but rather a series of smaller project sections, making it harder to affect project-wide changes when necessary, ultimately leading to unintended errors. If the tools at our disposal today had a strong relationship structure to allow proxies between assemblies and construction packages, this could be avoided.
Workarounds significantly limit creative flow and disrupt updating design and data alignment between models, even leading to inconsistency.
Technical barrier?
Is there a technical barrier preventing any design software from dealing with the range and scale of data? From the detail of the sharp edges of a fire sprinkler to zooming out and seeing an entire urban block?
It’s tough not to make yet more comparisons to the software used in game development, with their level of streaming, proxies, and Nanite technologies, and to get deeply jealous. We’re questioning our ability to model anything smaller than 100mm, whilst they’re having conversations like “Let’s model sand and dust particles.”
In recent years, increased access to real-time rendering engines, used on top of design software, has enabled designers to receive instantaneous feedback whilst designing and to immediately understand the impact of their decisions on lighting, material, and space/volume without waiting days for the response.
Beyond visualisation, what about understanding the performance of space for things like natural light, climate, and environmental comfort? It’s not uncommon for many to share models that need to be rebuilt and evaluated. Those delays make it extremely difficult to implement changes or tweaks to the design to accommodate the results of these analyses (if at all still possible).
Analysis or review of any kind should not be a 4-day exercise to move data models between tools so that we as designers can make more informed decisions whilst designing.
The Data Framework (see part 1) is key to supporting software and tools connecting to the latest design data without proprietary file format chaos. Maximising the benefits of cloud computing would reduce the complex strategies we take to collate design information, make widespread changes to data and better connect with our design and construction partners.
Unreal Engine 5 Nanite Technology. Courtesy of Epic Games
New or old, our tools need to help us adapt the built environment for increased longevity
Where are we now?
In summary, we’re struggling to deliver holistic design at both a product level and whilst considering the full construction model or the broader context of a collection of buildings.
Many of the projects we’ll all be working on in the future will include complex retention of existing building fabric, which isn’t easy with the tools we have today, designed for small orthogonal
new-build buildings. More on this later.
We’re prohibited by tools that cannot manage the scale of data or connect the relevant representations of the overall scheme; we’re left waiting to see what might happen when we formally visualise or analyse our design.
Delays and disconnect between design and analysis reduce the ability to make changes to improve the quality of the design and the comfort of occupants.
What we need?
Going forward, we have to close that loop.
We need modern, performant software that enables designers to interact with the full scale of data associated with challeng-
ing construction projects, including the ability to make widespread changes to design in efficient ways.
We need tools that can represent the complete details of design and construction models with the functionality to assess, analyse, adapt, and mass manipulate changes. This would span from individual components to rooms, apartments, building floors, construction packages, and whole buildings in a master plan.
The data framework enables designers to understand the consequences of their decisions in ‘close-to-real-time’ and can make changes more efficiently to improve the quality and lifespan of the built environment.
User Experience 4
The design tools we predominantly use today are not from this era.
Code prompts, command lines, patterns of key presses, and dense user interfaces of small buttons aren’t exactly friendly to those not raised on CAD tools in the 90s. These challenge artists, designers, and creatives to convert their ideas into geometry that adequately conveys and communicates their intent and vision. Most of our tools today have their roots in a previous generation and, while they have improved over time, have retained a UI structure unintuitive to new users, who clearly can’t know where every button is or know every command line. The thick software manual they handed out when the product was released is an excellent example of the time investment needed to use the tools.
Most designers, even those very adept with complex software, flag their tools today as a wrestling match where the mental load to operate prevents them from thinking about design.
That becomes even trickier when we take into account how modern software updates. In our businesses, Microsoft Teams updates every other day or so, moving buttons around, which can be frustrating. With design software able to move away from annual release cycles, we’ll see this in AEC too.
That doesn’t mean don’t innovate. It just needs to come with clear communication on new functionality. We’ll work on growing agile users who become more expectant and comfortable with constant changes across all their tools.
What about speed?
We’re not talking about performance; we’ve covered that in part 2. But the ease, or level of obstruction, in moving from idea conception to appropriately conveying modelled design. A good design tool can achieve this in an ‘appropriate’ time with structured data and useful geometry for others. And I’m also talking about the overall time to complete relevant outputs needed for an issue or to collaborate with others. A good design tool can swiftly support development and changes by coordinating and exchanging with others.
Where are we now?
We’re still using tools with the experience and interface of CAD products from the late 90s.
Fragmented software, with panels and windows of dense, small icons reflecting tools and functions. These are difficult to navigate and find what’s needed, even for experienced and well-versed users.
Our designers battle tools to get their ideas converted into digital space.
What we need?
Design software needs to be user-friendly, efficient, versatile and intuitive.
Aligned with the Data Framework (see part 1), users will be accessing design and construction data across different devices, based on their persona and the type of interaction they need. Though a familiar experience is important across any interaction people have with their tools, we recognise there will be relevant limits in functionality based on the device.
Tools need to offer a clean and friendly experience that enables designers to translate ideas to digital deliverables quickly. Different personas will interact and access data across different devices, which must feel familiar.
Clean interfaces that utilise advancements in language models to interpret plain language instructions and requests, including navigation of complex tools, e.g. ‘Isolate all doors with a 30-minute fire rating’ or ‘Adjust all steel columns on the 5th floor from type X to Y’ (a minimal sketch of this idea follows below). More on automation in part 8.
Provide accessible feedback. If a command, tool or code prompt doesn’t work, why not? Help the user understand what they may need to do to unlock this.
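To illustrate the plain-language idea above, here is a hedged sketch of how such a layer might sit on top of granular element data. The call_llm helper is a stand-in for whichever language model a vendor integrates, and the JSON filter schema and sample elements are invented for illustration:

```python
# Illustrative sketch of a plain-language command layer over structured element data.
# call_llm() stands in for whichever language model a vendor integrates; the JSON filter
# schema and the sample elements are invented for illustration only.
import json

def call_llm(instruction: str) -> str:
    """Placeholder: ask a language model to translate an instruction into a JSON filter."""
    # A real implementation would prompt a hosted or local model and validate its output.
    return json.dumps({"category": "Door", "where": {"fire_rating_minutes": 30}, "action": "isolate"})

def apply_command(instruction: str, elements: list[dict]) -> list[dict]:
    spec = json.loads(call_llm(instruction))
    matches = [e for e in elements
               if e["category"] == spec["category"]
               and all(e.get(k) == v for k, v in spec["where"].items())]
    return matches  # the host application would then isolate or select these elements

elements = [
    {"id": "d-01", "category": "Door", "fire_rating_minutes": 30},
    {"id": "d-02", "category": "Door", "fire_rating_minutes": 60},
    {"id": "w-01", "category": "Window", "fire_rating_minutes": None},
]
print(apply_command("Isolate all doors with a 30-minute fire rating", elements))
```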
Construction software that gives you the flexibility to build fearlessly.
Bluebeam construction software empowers you to take full control of projects and workflows, with customisable tools specially designed for how you work.
Whatever your vision, we’ll help you see it through.
bluebeam.com/uk/be-more
Modelling Capabilities
I’ll start by making a bold statement that might annoy a few people in design studios when the drawing board was king.
The transition between the drawing board and CAD was ‘easier’ than the move to 3D object modelling.
In reality, we replaced drawing lines on paper with drawing lines in software.
The more challenging move was from 2D line work to 3D object-based modelling, and plenty of people in roles similar to mine have the scars to prove it. It was a much more significant challenge and mindset change for most designers, who had to adjust from only drawing what they wanted to see to getting everything and then working out what they did not want to show. Being presented with everything and dialling back has created a laxness, or lack of attentiveness to correct or manually adjust drawings, leading to many familiar comments about a ‘worsening’ in drawing quality.
Aside from drawing development (and more in part 9, deliverables), there have also been changes in the modelling approach we take within tools. At one end of the scale, we have tools focusing on surface, point, plane, and line modelling, with complete flexibility and creative control. At the other end of the scale, we have tools driven by object-based modelling with more controlled parametric-driven design. Unfortunately, the tools we use in our industry are either at one end or the other and fail to recognise the need for a hybrid of the two.
We approve of the consistency and structure that object modelling brings to a project, but the types of buildings most people work on require greater flexibility. Design features within these proposals need nuanced modelling tools to cut, push, pull, carve, wrap, bend, and flexibly connect with the parts and libraries of fixed content.
Beyond the early stages?
It is also a misconception that the requirement for creative modelling is limited to the early design stages of a project. Sure, at early conceptual stages we need design tools with flexible, easy modelling functions, and yes, later in a project’s design development we need rigid delivery software through production. But the reality is that we always need both: a structured classification system of modelled elements with consistent components, while still being able to adapt edge cases to creatively support unique or historical design. Design does not stop, and the need for flexibility does not stop, just because we start production documentation.
Tools that understand construction?
Many of the design tools that we use today have no intrinsic understanding of ‘construction’, or of ‘packages’ of information for tender or construction. The plane/point-based modellers are just as suited to designing items of jewellery as they are to a building’s design and construction.
At the other end of the scale, there are object-based modellers with basic classification systems loosely based on construction packages. But these classifications are not smart. They have no intelligence or understanding of why a door is not a window, or a roof is not a floor. As a specific example, adjusting the location of a hosted door in a wall should prefer to correlate/snap to internal walls rather than internal furniture (because interior layouts are unlikely to influence the locations of doors). When using our current platform of design tools to encapsulate our design ideas, we are not doing so with any underlying building intelligence or construction parameters. Architectural design often challenges the boundaries of form and function. What if a building’s geometry causes a modelled element to perform as a floor, a wall and an arch all at once? Having the flexibility to customise and flexibly classify elements is important beyond a basic construction view. Future tools should have genuine construction intelligence, from a building down through assemblies to individual elements. These project elements should host data relating to assigned identifiers, key attributes, constraint relationships, and associated outcomes or performance. The Data Framework (see part 1) supports this by recognising Entity Component Systems (ECS). You can find an excellent summary of how this would work by Greg Schleusner on Strange Matter (www.tinyurl.com/magnetar-strange-matter). Software developed with this relational understanding of construction and its parts enables a more intelligent future: automating the resolution of connecting and interacting parts, such as a floor to a wall, and repeating elements and information en masse. More on Automation in part 8
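To make the ECS idea tangible, here is a minimal Python sketch; the component names and classification codes are illustrative assumptions rather than a proposed standard. Entities are just identifiers, and all meaning lives in components that can be attached, queried and exchanged independently.

from collections import defaultdict
from itertools import count

_ids = count(1)
components = defaultdict(dict)      # component name -> {entity id: data}

def create_entity():
    return next(_ids)

def attach(entity, component, data):
    components[component][entity] = data

def entities_with(*names):
    """Entity ids that carry all of the named components."""
    ids = set(components[names[0]])
    for name in names[1:]:
        ids &= set(components[name])
    return ids

wall = create_entity()
attach(wall, "classification", {"system": "Uniclass", "code": "wall-system (placeholder)"})
attach(wall, "geometry", {"length_m": 6.0, "height_m": 3.0})
attach(wall, "fire", {"rating_minutes": 60})

door = create_entity()
attach(door, "classification", {"system": "Uniclass", "code": "door-set (placeholder)"})
attach(door, "hosted_in", {"entity": wall})

# e.g. every entity that is both classified and fire rated:
print(entities_with("classification", "fire"))    # -> {1}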
Where are we now?
In summary, today we are limited to tools at either end of the spectrum of function: flexible tools without structure, or structured tools without flexibility. Neither understands the basic principles of how buildings come together.
What does the industry need?
Software should enable design, not control it through its limitations. There are key modelling functionalities that enable good design software:
Accuracy - Modelling tools should be an accurate reflection of construction. They should allow design teams to create models that have an appropriate level of accuracy depending on scale (see part 3), project stage and proposed construction.
Flexibility - Project teams should not have to decide between geometric flexibility and complexity or object-based modelling. Future architectural design software should strive to embrace and connect the two approaches, offering an environment where designers can nimbly move between them. Transforming a geometric study into a comprehensive, data-rich building model should be a fluid progression, reducing the risk of information loss and eliminating the need for time-consuming geometric data processing. This would involve intelligent conversion systems that interpret and transition geometric models into object-oriented ones, retaining the original design intent while proposing appropriate object classifications and data enrichment (see the sketch after this list).
Level of detail - Future modelling tools should be able to handle increasing levels
of detail without sacrificing performance or efficiency. Teams should also be able to cycle through various levels of detail depending on a specific use case.
Intelligent modelling - A new generation of tools needs to push beyond the current standard of object-based modelling. Project elements — from whole buildings, down through assemblies, to individual elements — should host data relating to assigned identities, key attributes, constraint relationships, and associated processes. This embedded data should be based on real-world information, reflecting the knowledge and information that the AEC industry operates with, and an appreciation of emerging methods of construction. More to follow in part 7 on MMC and DfMA
Software developed with this balance and understanding of basic construction principles can also benefit from intelligence and efficiency gains, such as resolving basic modelling connections/details and amending repeating elements and data/information en masse.
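To illustrate the kind of ‘intelligent conversion’ described under Flexibility above, here is a deliberately simplified Python sketch that proposes an object classification for a free-form solid from its bounding box alone. The thresholds are illustrative assumptions; a real system would draw on far richer signals such as adjacency, orientation, context and learnt models.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    dx: float   # size in metres along x
    dy: float   # size along y
    dz: float   # size along z (vertical)

def propose_classification(box):
    thin = min(box.dx, box.dy, box.dz)
    if thin == box.dz and box.dz <= 0.5:
        return "Floor / slab"
    if thin in (box.dx, box.dy) and thin <= 0.5 and box.dz >= 2.0:
        return "Wall"
    if box.dx <= 0.6 and box.dy <= 0.6 and box.dz >= 2.0:
        return "Column"
    return "Generic element (needs review)"

print(propose_classification(BoundingBox(6.0, 0.3, 3.0)))    # -> Wall
print(propose_classification(BoundingBox(8.0, 6.0, 0.25)))   # -> Floor / slab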
Real-time photoreal archviz at your fingertips.
Combined with the power of real-time GPU rendering and a seamless integration to Chaos Vantage, Chaos is thrilled to announce the launch of Corona 12, designed to revolutionize how AEC professionals explore, create and present their designs.
Experience the full potential of Corona with:
Streamlined efficiency: Seamless integration with Chaos Vantage enables instant scene exploration and rapid rendering, enhanced with GPU ray tracing in real-time.
Enhanced control: Virtual Frame Buffer 2 (VFB) lets you manage multiple LightMix setups in one place, use A/B comparison tools, bloom/glare calculation deferment, edit .cxr files instantly, and more.
Limitless creativity: Tools like Chaos Scatter, Curved Decals, Corona Pattern improvements, low sun angles in Corona Sky, Density Parameter for clouds, and Corona Material Library accessibility, offer additional ways to elevate your creative work.
Join our webinar, “Explore beyond limits with Corona 12 & Vantage,” where you’ll see the seamless integration in action, and be inspired by the results achieved by Luis Inciarte and Robin Walker from Narrativ studio. Visit chaos.com/webinars/corona-12
Try the new Corona-Vantage connection yourself, visit the Chaos Corona and Chaos Vantage trial pages and download a 30-day free trial for both products.
Responsible Design
We are already living through disturbing weather patterns due to a changing climate. The years to come will challenge the comfort of occupants in our existing building stock as much as those planning to live in the buildings we are designing. Despite the obvious challenges ahead, the tools we currently use to design and develop the urban environment offer little to no native functionality providing feedback on building performance.
As hinted at in Part 3 – Designing in context and at scale, understanding the performance of a design whilst designing is the most effective way to incorporate change and minimise adverse effects.
As sustainability and the drive for net zero carbon have moved up the AEC industry’s priority list, we have seen various platforms appear to help us understand the carbon load of our designs through life cycle analysis (LCA). Whilst these were initially limited to smaller, more nimble and reactive software developers, we finally see intent in the more developed tools we use.
However, these tools all make the same assumption: that every project starts with a blank screen. For most projects we’re working on today, the question isn’t about starting from scratch; it’s about how much of the existing building fabric we can save.
The most sustainable new building in the world is the one you don’t build.
The default approach of demolishing entirely and rebuilding new has no future. Most projects we design today are ‘retrofit first’, reinventing the buildings we already have. In 2023, to have any genuine conversation with local authorities (and for the right reasons), we’re spending significant time analysing how much existing building fabric can be retained versus replacing appropriate material with a more efficient structure that provides more comfortable and efficient spaces.
This analysis can take weeks and is a delicate balance that no design software tools support or understand. Carefully balancing commitments to the environment with future occupant comfort in a changing climate, the retrofitting of existing buildings is not a trend that is going away.
What is the role of design software here?
As designers, we’re arguably the most responsible party for the impact of new construction on the environment, economy and society. With our broader design and construction teams, we’re making decisions about the merits of the existing building, its fabric and what is still perfectly viable to retain and reuse.
And this doesn’t start with a blank screen in our software.
Tools should effortlessly incorporate and provide feature detection and systems selection from point cloud, existing survey models, LiDAR and photogrammetry data, enabling designers to utilise existing building data from day one. With our designers spending weeks making assessments and viability studies on retention versus new construction materials, we need tools that simplify and support our analysis of material quantification, embodied carbon calculations and thermal performance within spaces. Today, most firms have built their own tools, such as embodied carbon calculators, in the absence of this capability in the tools we use. This should be equitable across the sector, not limited to the few.
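As a minimal sketch of the retain-versus-replace comparison we currently repeat manually, the Python below compares the embodied carbon of retaining an existing element against replacing it. The carbon factors are illustrative placeholders only, not published EPD values.

# Illustrative placeholder factors only, not published EPD values
RETAIN_KGCO2E_PER_M3 = 40.0      # repair / strengthening of existing frame
NEW_BUILD_KGCO2E_PER_M3 = 300.0  # replacement structure, A1-A3 stages

def compare_retention(volume_m3):
    retain = volume_m3 * RETAIN_KGCO2E_PER_M3
    replace = volume_m3 * NEW_BUILD_KGCO2E_PER_M3
    return {"retain_kgCO2e": retain,
            "replace_kgCO2e": replace,
            "saving_kgCO2e": replace - retain}

# e.g. 120 m3 of existing frame under consideration for retention
print(compare_retention(120.0))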
As we determine the appropriate strategy and balance between retention and longer-term operational efficiency, design tools will equally need to support wider studies such as operational energy, climatic design, water, biodiversity, biophilia, occupant health and well-being, and their impact on amenity and community. We’re not seeing any design tools with that on their agenda.
Where are we now?
In summary, and quite bluntly, nowhere. We’re building our own tools, duplicating the efforts of our peers because these capabilities are not accessible in the design software we use, which emphasises the need for a data framework. We’re desperately struggling to tackle existing building data and use it intelligently to interact with or slice up existing buildings, further limited by modelling capabilities. When we want to complete any level of analysis to gain intelligence, we’re remodelling in each tool.
What the industry needs?
Software that enables responsible design. Future design software should, by default, have a basic understanding of materials and fabric, and the ability to provide real-time feedback that a design is aligned with predetermined performance indicators across comfort, environmental impact, social performance and economic considerations. The environment should be free from siloed or disconnected analysis work-streams and models: analysis should happen in parallel, on the fly, rather than exporting particular model formats for analysis in a data cul-de-sac.
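As a simple sketch of that kind of live check, the snippet below compares current design metrics against predetermined targets. The indicators and target values are illustrative assumptions; in practice they would come from the project brief and live analysis engines.

# Illustrative indicators and targets; real values would come from the brief
TARGETS = {
    "operational_energy_kwh_m2_yr": ("max", 55),
    "embodied_carbon_kgco2e_m2": ("max", 600),
    "daylight_factor_pct": ("min", 2.0),
}

def check_design(metrics):
    """Return the indicators that currently miss their targets."""
    failing = []
    for key, (kind, target) in TARGETS.items():
        value = metrics.get(key)
        if value is None:
            failing.append(f"{key}: no analysis result yet")
        elif kind == "max" and value > target:
            failing.append(f"{key}: {value} exceeds target {target}")
        elif kind == "min" and value < target:
            failing.append(f"{key}: {value} below target {target}")
    return failing

print(check_design({"operational_energy_kwh_m2_yr": 62,
                    "daylight_factor_pct": 2.4}))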
Specifically relating to environmental impact, future design software should understand, by default, existing buildings and how retention and refurbishment designs can be facilitated.
These tools should effortlessly incorporate and provide feature detection and systems selection from point clouds, existing survey models, LiDAR and photogrammetry data to enable designers from project inception to utilise the existing building data. Tools need to be able to accommodate the imperfections of existing buildings.
In any event, we need these best-in-class analysis and assessment tools to interpret models and data from our Data Framework (see part 1), where the model may originate from another tool completely.
The Future of Software is Open
At Bentley, we believe that data and AI are powerful tools that can transform infrastructure design, construction, and operations. Software must be open and interoperable so data, processes, and ideas can flow freely across your ecosystem and the infrastructure lifecycle. That’s why we support open standards and an open platform for infrastructure digital twins.
Leverage your data to its fullest potential. Learn more at bentley.com.
Modern Construction (MMC / DfMA / Modular / Offsite)
Current AEC software is, for the most part, developed around traditional methods of design, construction and delivery. In 2024, we are struggling to meet both our own and our clients’ aspirations to deliver designs using modern methods of construction (MMC) or Design for Manufacture and Assembly (DfMA) approaches. We want to work in this way, but we’re hampered by our tools.
Construction approaches have evolved in the last five to ten years. Efficiencies in construction, site planning, communication, quality control and safety have enabled MMC approaches, such as offsite and modular, to deliver construction. The volumes achievable through safe, high-quality ‘factory’ construction are an inevitable, unstoppable and increasing requirement that our design tools simply do not align with out of the box.
For the majority of firms who helped author this specification, there is an increasing number of projects looking to achieve more certainty and confidence prior to construction.
Across Design for Manufacture, Modular and Offsite Construction, there are constraints that we, as designers, know we must work to. For example, a prefab unit’s maximum volume and size must fit on the back of an unescorted lorry travelling through Central London. The units can then be joined together to make a two-bedroom apartment. Giant Lego, if you will.
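As a trivial sketch of what such a rule might look like inside a massing tool, the check below tests a module against transport constraints. The limit values are illustrative placeholders only; real limits depend on route surveys and local regulations.

# Illustrative limits only; real limits depend on route surveys and regulations
TRANSPORT_LIMITS = {"length_m": 13.5, "width_m": 2.9, "height_m": 3.4}

def fits_on_unescorted_lorry(length_m, width_m, height_m):
    return (length_m <= TRANSPORT_LIMITS["length_m"]
            and width_m <= TRANSPORT_LIMITS["width_m"]
            and height_m <= TRANSPORT_LIMITS["height_m"])

# Two bedroom modules joined on site to form one apartment
module = {"length_m": 11.8, "width_m": 3.2, "height_m": 3.1}
print(fits_on_unescorted_lorry(**module))   # -> False: too wide under this rule set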
However, two key issues are present in 99% of the tools we use today. Firstly, we cannot easily set out any
critical dimensional constraints when conceptually massing an early-stage design (associated with DfMA, Modular or Offsite construction). We’ve seen new tools come to market in the last 18 months, fighting for the same small piece of the pie in early-stage massing explorations and viability studies. Without the ability to define design constraints for modular or prefabrication intelligence whilst designing and setting out these buildings, these new tools are unfit for purpose for many architects.
In the absence of construction intelligence, we’re going to keep walking into problems.
Secondly, the design delivery tools we use today cannot interface with construction/manufacturing levels of detail. This means we can’t fully coordinate a prefabricated assembly with the overall building design. Let me repeat that. Today, in our current design delivery tools, we cannot coordinate a fabrication-level-of-detail assembly with the overall building design whilst designing. Primarily, this relates to performance and scale, which we covered in part 3
You will frequently see the construction media share dramatic headlines about the collapse and bankruptcy of another modular/factory-built offsite company. It is a complicated issue with many influencing factors, but if we cannot coordinate manufacturing levels of detail and data with our design models, that has to be a big part of why it isn’t going smoothly. Implementing changes later in the project to modularise packages is always challenging, and usually comes at the cost of design quality or profitability. Or both.
Where are we now?
We’re designing with no modular intelligence whilst developing models, using tools that don’t understand how buildings are built.
And the tools we’re using for delivery cannot coordinate fabrication levels of detail with our design intent.
What the industry needs?
Software that enables DfMA and MMC approaches.
To enable MMC and DfMA through our software, we require tools that meet the criteria across design, construction, and delivery:
Design - The next generation of design tools must be flexible enough to support modern construction methods and an evolving construction pipeline. DfMA and MMC approaches, such as volumetric design and kits of parts, are developed by rules. Software should understand this and facilitate rules-based design. As covered in the Modelling section (part 5), this doesn’t mean the software knows geographic/regional limitations or construction performance limitations, or limits users to a fixed/locked system, but that it provides the platform for users to define these rules.
These tools are essential for real-time design and system validation, optimisation and coordination. These future platforms should manage scale, repetition,
and complexity, the hallmarks of DfMA. Collaboration with fabricators and contractors should not require reworking models to increase the level of detailing.
Construction and delivery - Future AEC tools should enable design teams and contractors to work together in the same environment, drawing upon intelligence and input from both to produce a true DfMA process. For this, these tools should understand modern methods of construction by default. Whether it be a modular kitchen, a pod bathroom or an apartment building assembled from modules, design tools must support fabrication levels of detail across an entire building and incorporate construction intelligence.
This is possible. Industry and software developers have built configurators of industrialised construction systems, aware of how products can be applied to designs based on the parameters and production logic defined by the manufacturers. This isn’t easily accessible within our current design software. However, it could be closer through the ‘Data framework’ (see part 1) and with performant and
scalable tools (see part 3). We are also aware that construction and delivery methods will be constantly evolving to stay “modern”. Our tool ecosystem needs to do the same by continuously staying relevant and in touch with the actual work of the industry.
Future tools should allow project teams to define key design parameters and constraints for compliance, regulatory assessment and design qualification. These are not necessarily set values but an ability within the data framework to set parameters.
Automation and Intelligence
Most industries leverage automation and machine-learned intelligence to support decision-making, reduce repetitive tasks, increase quality, and boost efficiency. But what about architecture, engineering, manufacturing and construction, and, in the context of this series, what about the tools we use? Firstly, let’s contextualise against familiar types of automation and AI:
Scripting - Most things referred to as AI today are actually scripting. The most common form of automation is a basic script that runs and performs automated tasks to resolve a repetitive process, usually to perform a function or provide a solution that the base software is unable to deliver by itself. There is no creative thinking, out-of-the-box assessment or suggestion, just a defined process followed to solve a problem.
Large language models (LLMs) - Big tech’s scanning of every page of every book produced sensible associations of words and language, creating LLMs. These apply to most industries. Whilst they are readily accessible, the niche vocabulary and specific terminology of design and construction result in reduced value. Additionally, these tools often struggle with specificity. For example, having recently used a custom GPT trained on the UK Building Safety Act, despite multiple iterations of varied prompts, it could not summarise the minimum building height requirements that had been clearly identified in the documentation.
Generative AI for media / graphics - After mass training of models to identify objects in images (via humans completing ‘I am not a robot’ verifications), media generation has become mainstream and is relevant to our industry in a broad way, supporting ideation, mood boards and so on. However, as mentioned above, limited industry-specific dictionaries mean using ‘kitchen’ as a prompt will get you somewhere, but specifics about timber cabinetry, or variations in terminology like upper or lower cabinets, won’t get you any further.
AI in design and construction
Machine learning and the broader field of AI have seen limited use in the architecture, engineering and construction industries. They are not part of the core tools we use.
This is partly due to the relatively slow digital transformation of the industry into a data-driven sector but also down to the limited data structure in the tools we use today.
The first wave of value from AI?
The AEC industry is rife with repetitive processes, yet we try to reinvent the wheel from project to project. Across our firms, experience and knowledge are often rebuilt on each project from first principles, with little thought given to learning from previous projects. Given the unstructured and inconsistent wealth of data, drawings and models across our industry, AI’s first wave of value will likely be in the ease of accessing and querying historical project data. A mix of language models and object/text recognition will help us harvest the experience we have built up across delivered projects.
For example, for an architectural firm designing a tall commercial building in the city, there’s obvious and existing experience that defines the size of a building core. The structure’s height might determine the number of lifts/elevators, minimum stair quantities, and critical loading dimensions. The floor area and desk density may define how many toilets
are needed within the core. If challenged to re-explore the envelope of the building mass, these changes would influence the core’s previous design and dimensional parameters. As we slightly increase the building envelope, leading to more desks, we tip the ratio towards more toilet cubicles, reducing the leasable footprint. Despite tackling this challenge repeatedly across projects, with the current software stack it’s common to do these calculations manually every time. Currently, the ‘automation’ or ‘intelligence’ comes from an architect or engineer who has been doing this for 20 years and has the experience (and occasionally a script) to provide quick insight. If AI can enable rapid discovery of experience from previous projects into a framework for future projects, it will create more time to explore improvements and emerging techniques.
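The sketch below re-expresses those rules of thumb as code. Every ratio and trigger value is an illustrative assumption for the example only, not design guidance, but it shows how re-running the check on an envelope change could be automatic rather than manual.

import math

def core_requirements(floor_area_m2, storeys,
                      m2_per_desk=10.0, desks_per_wc=30, storeys_per_lift=8):
    """Illustrative rules of thumb only, not design guidance."""
    desks = floor_area_m2 / m2_per_desk
    return {
        "desks_per_floor": int(desks),
        "wcs_per_floor": math.ceil(desks / desks_per_wc),
        "lifts": max(2, math.ceil(storeys / storeys_per_lift)),
        "stairs": 2 if storeys > 1 else 1,
    }

# Nudging the envelope from 1,500 m2 to 1,560 m2 per floor tips the WC count
# up by one, eating into the leasable footprint.
print(core_requirements(1500, 38))
print(core_requirements(1560, 38))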
The second wave of value from AI?
Following the first wave of connecting unstructured data, and the resulting deeper understanding of and training on the data produced in our industry, AI’s second wave of value will be around better insight from assessing geometry, data and drawings, unlocked by an open framework of data (see part 1)
With a deeper understanding of AEC dictionaries and exposure to 1,000s of data-rich 3D models filled with geometries and associated data, AI can now
propose contextual suggestions. As covered in the User Experience chapter (see part 4), this might, at a simple level, mean that tools can process natural language requests, instead of interacting with complex interfaces, toolsets, and icons.
Another example of value from industry-specific machine-learned intelligence would be tools that help us find blind spots or oddities in the models and data we create, such as gaps, missing pieces of data, and incomplete or likely erroneous parameters or drawing annotations.
This instantly brings significant value to designers, engineers, constructors and manufacturers to substantially improve the quality of information we generate (whilst significantly reducing the risks of exchanging incomplete information).
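A crude, rule-based version of such an audit might look like the sketch below. The required parameters per category are illustrative assumptions; a trained model could propose such rules from previous project data.

# Required parameters per category are illustrative assumptions
REQUIRED = {
    "Door": ["fire_rating_minutes", "width_mm"],
    "Wall": ["fire_rating_minutes", "u_value"],
}

def audit(elements):
    issues = []
    for e in elements:
        for param in REQUIRED.get(e["category"], []):
            if e.get(param) in (None, ""):
                issues.append(f"{e['id']}: missing {param}")
    return issues

model = [
    {"id": "D-101", "category": "Door", "fire_rating_minutes": 30, "width_mm": 926},
    {"id": "W-040", "category": "Wall", "fire_rating_minutes": None, "u_value": 0.18},
]
print(audit(model))   # -> ['W-040: missing fire_rating_minutes']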
The third and further waves of value?
Following contextually accurate, structured data with better insight from geometry, data, and drawings, AI has a deeper understanding of how we generate and deliver information. It’s finally well-placed to understand our deliverables. It can support and augment the delivery of mundane and repetitive drawing production and model generation.
Automated drawings? More on this in the next chapter, Deliverables.
Where are we now?
Our sector has a significant opportunity to apply automation and AI principles to increase the efficiency of design ideation, design development, and the generation of data and drawings to collaborate with others. However, our unstructured, inconsistent data has provided no easy win or low-hanging fruit for developers looking to apply emerging technology to such a niche industry.
As a result, whilst we can lean on tools developed for generic industries, like large language models and generative models for images, there is as yet no more relevant, usable application of AI within the core tools we use today.
We cannot leverage historic design experience project-to-project or quickly revisit previous approaches.
Whilst every software startup in the market is touting itself as AI-enabled, it remains to be seen how they leverage AI and how it ‘learns’ from users securely and responsibly.
What the industry needs?
As the premise of the data framework progresses, tools will have a better structure and hierarchy of construction packages/sets, exposing data at an entity component system (ECS) level and enabling training models and future use of AI. This structure will enable software to better understand the relationships between modelled geometry, its associated data, and the data that is relevant to, and collaborated on by, third parties. This wealth of exposed interactions will be essential in training machine learning to augment mundane and repetitive processes.
Leveraging historical data, design decisions, and the logic of existing projects can help us enable project-to-project experience, reducing the need to reinvent the same wheel each time.
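As a highly simplified sketch of querying historical project knowledge: a production version would use language-model embeddings and document/object recognition, but here a crude term-overlap score stands in so the example stays self-contained. The project titles and descriptions are invented for illustration.

PAST_PROJECTS = {
    "Tower A stage 3 report": "38-storey commercial tower, core with 12 lifts and 4 stairs",
    "Campus B fire strategy": "two-stair residential block with sprinklers throughout",
    "Depot C retrofit study": "retained steel frame with new CLT infill floors",
}

def score(query, text):
    # Crude term overlap; a real system would use embeddings
    return len(set(query.lower().split()) & set(text.lower().split()))

def find_relevant(query, top_n=2):
    ranked = sorted(PAST_PROJECTS.items(),
                    key=lambda item: score(query, item[1]), reverse=True)
    return [title for title, _ in ranked[:top_n]]

print(find_relevant("core lifts for a commercial tower"))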
Tools that can understand our outcomes will be well-placed to machine-learn the steps taken to get there. These highly repeatable processes can then be augmented by the software we use to provide designers and engineers more time to focus
on better design outcomes.
Tools that have learnt from our outputs can highlight possible risks for us to review and fix before exchanging information with third parties.
Using AI technologies to help harvest, discover, suggest, and simplify the generation of deliverables like data and drawings is entirely different from automating design generation. You’ll automatically think this is coming from the place of ‘turkeys not voting for Christmas’, but how can AI generate design effectively? Generating great design is not based on rules and principles but on creative thought, emotion, and the relationship to specific context and use. I’m sure a tool can develop 10,000 ideas for a building, but how many are relevant? Are they appropriate by relating to the people who’ll occupy and live in the space? Form vs function? What are the appropriate materials and suitable construction options? Do they have an appreciation of the historical context or existing building fabric? You’ll expect me to say this, but we’re not looking for tools to design buildings because it’s too emotive. We need tools that augment our delivery of great design—automated design intelligence, not automated design generation.
Lack of industry comprehension: Midjourney generated image from a prompt for timber lower cabinets, which caused confusion
Deliverables
For over 100 years, the primary method for commercial and contractual exchange between designers, engineers, constructors, manufacturers and clients has been best recognised through the production and exchange of drawings.
We produce a product (a drawing), generated over many hours and embodying years of previous experience and knowledge, and then exchange it with another party. These products are very often referenced as required contractual deliverables in agreements between parties.
In the last decade, with a more digitised approach to developing and sharing design and construction information, we’ve evolved from printing drawings, rolling them in tubes and manually delivering them to partners, to using cloud-based exchange platforms. These have enabled us to transact the exchange of drawings on the web (an extranet solution forming part of a Common Data Environment). Additionally, more intelligent design software has enabled us to associate data with geometry. As such, this chapter refers to deliverables as including drawings, documentation and data.
The future of deliverables through augmented automation?
When architects were master builders, drawings started as instructions, a form of storytelling about how materials were to be built up and intertwined to achieve the desired outcome (building Ikea furniture or Lego models, if you like).
Today, ‘instructional drawings’ equate to less than 5% of the types of drawings we produce. It wouldn’t be uncommon for a complex project (in scale and number of contributors) to create at least 5,000 drawings through its design, construction and handover. Each is developed for a different purpose, from demonstrating fire risk compliance to providing suitable information for tendering a construction package/set of information. Somewhere between five and ten hours are invested to generate, coordinate, revise and agree on each drawing.
Generating drawings represents one of the most considerable cost implications in the design and delivery of construction, yet it comprises highly repetitive tasks that apply to every type of project.
For example, architects spend hours developing a fire plan drawing for each floor of every proposed project within our firms. These plans are broadly 80% the same from floor to floor and project to project. We might apply graphical templates to show, hide and alter the appearance of modelled content so that it ‘looks right’, and then spend time adding descriptions, text, dimensions, specifications, performance data callouts, and so on. On the same project, this is almost the exact same process for each floor. It is significantly time-consuming for tall buildings with 30+ storeys.
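The repetitive portion of that workflow is essentially data plus a loop, as the sketch below suggests. The make_fire_plan step is a hypothetical placeholder rather than any real authoring tool’s API; the point is that the per-floor steps are near-identical and therefore automatable.

# make_fire_plan is a hypothetical placeholder, not a real authoring tool API
FIRE_PLAN_TEMPLATE = {
    "view_template": "Fire Strategy",
    "annotations": ["escape route arrows", "travel distances",
                    "compartment lines", "fire rating tags"],
}

def make_fire_plan(floor_name, template):
    """Stand-in for creating a view, applying graphics and annotating it."""
    return {"sheet": f"Fire Plan - {floor_name}",
            "template": template["view_template"],
            "annotations": list(template["annotations"])}

floors = [f"Level {n:02d}" for n in range(32)]   # a 30+ storey building
drawings = [make_fire_plan(f, FIRE_PLAN_TEMPLATE) for f in floors]
print(len(drawings), "fire plans generated from one definition")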
The future of deliverables with a strong data framework?
Where a Data Framework (see part 1) enables design and construction team members to collaborate and exchange data more equitably and flexibly, will we still need to generate and share the same volume of traditional drawings?
If we can communicate the relevant information for tendering a construction package/set through shared access to geometry and the relevant data and parameters, a manufacturer may have all the information they need digitally, without a set of 100+ drawings and schedules.
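As a loose sketch of what such a package might look like when exchanged as structured data rather than drawings (the field names below are illustrative assumptions, not a proposed schema):

import json

# Field names are illustrative assumptions, not a proposed schema
package = {
    "package": "Facade - unitised curtain walling",
    "issued_for": "Tender",
    "elements": [
        {"id": "CW-TYP-01",
         "geometry_ref": "model://facade/cw-typ-01",
         "u_value": 1.4,
         "acoustic_rw_db": 38,
         "quantity_m2": 5200},
    ],
    "schedules": ["facade panel schedule"],
    "specifications": ["performance specification reference"],
}

print(json.dumps(package, indent=2))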
Additionally, when collaborating in such a digitised way (via the Data Framework), does it still make sense to plan information production around printed drawings sized for an A1 or A3 (24 x 36) piece of paper, given how unlikely it is that they will ever be printed?
Where are we now?
With that said, whilst many would like less documentation and traditional drawing production, formal drawings are still the product of most firms. Unfortunately, this is not going to change soon, so we need to look towards a future of AEC software that still has to cater to a component of drawn information.
Whilst object-based modelling tools make the generation of coordinated drawings easier than traditional 2D drafting, there is little other automation, let alone AI-based workflows, to assist in generating final deliverables, including drawn content and other digital outputs.
Much of the effort expended throughout the design process is on curating sets of these drawings, sometimes disproportionately to the amount of time spent on actual design. Amongst many of the frustrations felt by project teams that have given rise to the creation of this specification, this is one of the most contentious. Designers want to design. When the legal framework still centres on drawings as the main deliverables, and the software we use caters for that in a manual fashion, then the time is ripe for change.
What the industry needs?
Creating more drawings is not an economical way to resolve questions between design and construction teams or reduce risk. That said, we need tools that enable drawings and other deliverables to be generated efficiently and intelligently.
We need design software platforms that have the ability — either through manual input or data-driven intelligence — to largely automate the output of drawn information and submission models to suit both the organisation, the specific project, and wider influencing standards.
The needle of our processes should be moved from focusing the majority of our time on representing a design — through the generation of drawings and associated data — to actually doing the design work. The generation of design documentation and digital deliverables should be incidental to the design process, not vice versa, as it sometimes feels today.
We need design software platforms and tools that enable data and geometry to be transacted on the Data Framework, minimising the generation of unnecessary drawings, all whilst supporting a record of information exchanges and edits in a fully auditable way to resolve legal concerns.
WHERE’S CLIPPY? WHERE IS THE APPROPRIATE USE OF MACHINE LEARNING AND AI?
IT LOOKS LIKE YOU’RE TRYING TO MAKE A FIRE PLAN?
I CAN GET YOU 90% OF THE WAY?
Access, Data and Licensing
When presenting this specification at NXT DEV back in July 2023, given that much of the audience was software developers, startups and venture capitalists, it was very appropriate to also cover an area outside the functionality and experience of ideal software: access, data and licensing.
Today, we have a diverse portfolio of design apps and tools. The Future Design Software Specification promotes using the best tools for the job, which means more software, not less. As such, every firm needs to assess and justify the tools they use as a business. And so, we must talk about commercials because in an even more competitive market, especially with market pressures and challenging economies, we need to be able to position and justify a software’s value.
How do you quantify value?
For example, my company commissioned a detailed report on our design software portfolio last year, which focused explicitly on the tools and software we use: their classification, usage, our business’ skill and experience, barriers, workflows and interoperability, feature sets, roadmaps/lifecycles and, importantly, commercials, access and licensing.
This helped us benchmark the tools we’re using, where they sit within a wider delivery mechanism and, to some extent, which tools to keep paying for. But that is one company’s view. What about the collective view of all the authors of the spec? Well, that’s what the spec represents. Each of the ten sections captures our shared key issues and measures for software we want and would happily pay for.
Access and licensing?
We understand the upfront investment required in developing software, marketing, ongoing technical maintenance, support and implementing features as the product matures. We get it. The approaches for accessing products and licensing have evolved significantly in the last five years and are set for even more disruption (more on this below). We don’t
propose to offer a ‘silver bullet’ win-win model that works for everyone. In reality, a combination of models is likely the answer. But to be clear, we are all mature, profitable businesses that want to pay for the tools that provide real value aligned with the Design Software Specification.
We want modern tools in the hands of our users, achieving better efficiency and value. However, our hands are often tied through some of these licensing models.
Those of us who put together this specification can, however, offer our experiences with the models on offer today:
% construction value - Charging by a percentage of construction value moves the decision about which design tools to use to the one ‘paying all the bills’: the client, budget holder or pension investment fund. Do you think they are best placed to make commercial decisions about the design tools the project team should use? How effective, or frankly how time-consuming, do you think it would be for design firms to have to justify each tool they want to use to a pension investment portfolio funding the project, for example? This is in addition to the complexities of how the construction value of a project is calculated and agreed upon, and the ongoing administrative processes needed as the project budget fluctuates and evolves through to completion.
Assigned users - Some might say how ‘good’ we’ve had it with shared or a floating licence pool within our companies (maximum number of concurrent users who can release licences for others when not in use). That’s versus assigned licences by a user, where anyone using the product, no matter how infrequently, requires an assigned licence. Naturally, this is great for investors, as on paper, many customers now need 50% more licences, but in the long term, that’s like loading a gun and aiming at your foot. With those floating licensing models that supported users ‘sharing’ a licence up to a concurrent amount, we effectively gave everyone access to your products. This enabled the organic growth of usage for the product as we found value. We’d buy
more licences as your tools provided more value to more of our designers. In contrast, with assigned licences, if we must buy every designer a dedicated licence, at full cost, for a product that has not yet found its organic fit, we will probably end up restricting access to a few early adopters. That limitation is unlikely to produce positive results for either the software developer or the design firm’s efficiency.
Consumption - Those contributing to this specification are mature businesses needing to forecast and plan operational expenditure against forecast fee earnings from our projects. Predictability and planning are a must for us. Licensing models that require us to gamble, rolling the dice to predict what level of tokens, consumables or vendor currency we need to buy to access tools over the year ahead, are a lose-lose for everyone. We either resent buying too many that expire, or didn’t buy enough (digging into unbudgeted costs). In this model, we can’t see how any design firm would reflect positively on the tool’s value. As with the previous model, the likely result is that the firm removes access to tools to avoid accidental or unplanned consumption, which is not a positive move for increasing usage or for the design firm realising the product’s actual value.
Overusage models - So long as there are accurate, accessible metrics for usage and consumption, and structures to provide warnings/flags regarding usage, this model offers two main benefits: a fairer view of value by ‘only paying for what we use’, and the flexibility for firms to quickly adopt, grow and encourage usage of products (compared to the limitations of fixed licence assignments or limited users or tokens). However, nobody reflects positively on unforeseen or unexpected bills at the end of the month, hence metrics and warning structures are critical to this model.
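A minimal sketch of the kind of transparent metering and early warning this model depends on follows; the plan details and thresholds are illustrative placeholders.

# Plan details and thresholds are illustrative placeholders
PLAN = {"included_hours_per_month": 400, "warn_at_fraction": 0.8}

def check_usage(hours_used):
    limit = PLAN["included_hours_per_month"]
    if hours_used > limit:
        return f"Over plan by {hours_used - limit:.0f} h; overage charges apply"
    if hours_used >= limit * PLAN["warn_at_fraction"]:
        return f"{hours_used / limit:.0%} of plan used; warn administrators"
    return "Within plan"

print(check_usage(350))   # -> '88% of plan used; warn administrators'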
Future evolution of licence models?
Today, all the above models value the software’s consumption through
The ADDD Marketplace is the eCommerce website dedicated to discovering AEC software & services.
Benefits
Software Vendors
Reach your ideal customers. Easy store creation & management.
Increase traction & sales.
Software Users
Diverse AEC software options. Informed decisions from user reviews & descriptions.
2050 MATERIALS
2050 Materials API integrates climate data into design workflows, accessing 150,000+ products, 7,000 materials, and 1M+ data points for sustainable choices.
Site operations software for construction contractors. Insite removes slow, manual, and paper-based processes, enhances site-to-office communication, and tracks progress, quality and safety to ensure timely project delivery.
We are Remap. We are design technology generalists. We build software, develop add-ins for BIM environments and offer digital transformation consultancy. Get in touch - info@remap.works
By tapping into live data from your Revit models, we offer you real-time insights into the health of your projects. Our live data audits save time and resources.
BEAM bridges Rhino, Revit & IFC for seamless BIM workflows. Trusted by global AEC firms, BEAM empowers architects to effortlessly create BIM elements in Rhino and transfer them to Revit or IFC.
Satori is a collaboration platform with a humancentric approach. Empowering organisations to efficiently design, build and operate with digital assets across their Real Estate.
Search amazing software, see our featured products below!
Howie is your AI-Copilot, in-house expert, who never retires, is always up to date, and extracts valuable insights from your data. New Gen Project Management System.
Simplify ISO 19650 compliance with integrated OIR, PIR, EIR, AIR, BEP, TIDP, MIDP, and more. Try automated model verification in our Clever Data Environment (CDE).
empowers architects, developers, real estate agents and municipalities to get quick and accurate access to building regulations through AI.
STRUCK
Access, Data and Licensing (continued)
dedicated seats, usage, or tokens.
In the not-so-distant future, it is highly likely that, through the introduction of further automation, machine learning and AI, our interaction with software (the quantity of time, users and engagement, or the need for as many tools) will be reduced.
As a result of AI, will we all reduce the number of licences, tokens, or products we need? How will the software vendors then position access and licensing models? How will product data and running services be monetised? The authors of this specification aren’t best placed to determine what this might look like but are well-positioned to support software developers in how this works for everyone.
Where are we now?
Some leading AEC tools in use today have shifted towards poor value offerings over the past 5-10 years, where the ‘actual’ cost of using a single design tool has become almost incalculable against the genuine production output of a designer. A best-in-class modeller has a poor value position when it is only available as part of a bundle of 30 tools, especially where the majority are not updated or do not reflect modern software functionality. These spiralling costs
to design firms become extremely hard to justify when these products do not fulfil fundamental design requirements or are not evolving at the pace of the AEC industry.
Many of the licensing models covered above lead to customers having to restrict or remove access to software to control expenditure. For emerging tools finding their place within design and engineering firms, these restrictions throttle the ability to quickly find organic usage and prevent firms from realising efficiency gains.
Running combinations of models, such as dedicated licences and consumption models, has dramatically increased administration for large companies, which constantly need to review and reassign licences to avoid overconsumption and obtain the true value of what has been purchased (in contrast to pool/shared licences with zero admin overhead).
What the industry needs?
Flexible access to software promotes and encourages product growth within a firm. Software accessing and resolving data or geometry across a data framework becomes a valuable feature of the software. However, it should not be a
chargeable function for the customer/user.
For many, especially larger firms, longer-term models better support their ability to budget and plan design software usage.
Providing functionality for administering licences, users, consumables or overusage should be the primary focus of software developers before launching new licence models. While giving customers tools to get the best value from the software may not seem commercially advantageous, it vastly increases trust and comfort around renewals each year, and increases usage and loyalty between the parties.
For example, functionality to highlight users in the ‘wrong’ model, or to advise where usage is peaking and teams are facing limitations that prevent them from working. If software developers can demonstrate real usage (value) through statistics, renewal conversations will be straightforward and focused on achieving more value in the year ahead. We want to pay for what we are using, and transparent usage metrics should not be an extra/chargeable offering to any user, let alone enterprise customers.
150-page report assessing how software is used to deliver all parts of AHMM’s architectural work