Building Information Modelling (BIM) technology for Architecture, Engineering and Construction March / April 2024 >> Vol.130
Autonomous drawings: coming to a CAD/BIM system or cloud service near you soon (cover image generated with Adobe Firefly)
Building performance: real time feedback with Enscape
Sound advice: acoustic performance with Treble
iGPU comes of age: prime time for 3D CAD / BIM

NXT BLD: THE FUTURE OF AEC technology www.nxtbld.com
NXT DEV: THE FUTURE OF AEC software development www.nxtdev.build
25 - 26 June 2024, Queen Elizabeth II Centre / London
Tickets now available
Industry news 6
AEC technologies emerge for Apple Vision Pro, Unreal Engine and Twinmotion get new licensing, Alice uses AI to optimise Primavera P6 schedules, plus lots more
Autodesk to take over VAR payments 13
New changes to the Autodesk business model could be set to diminish the role of the CAD reseller
Workstation news 14
Intel Core Ultra laptop processors, Nvidia Ada Generation RTX GPUs for CAD, plus new workstations from HP and Dell
Cover story: the dawn of auto-drawings 18
Several CAD software firms are making real progress in drawing automation in the race to eliminate one of the AEC sector’s biggest bottlenecks
NXT BLD / DEV 2024 28
AI, automation, digital fabrication, BIM 2.0, data specifications, open source, and lots, lots more at AEC Magazine’s London conferences
Enscape: building performance analysis 30
Enscape is to get a new module, powered by IES technology, that gives instant visual feedback on building performance
Enscape and V-Ray: a collaborative future 32
Chaos has big plans to enhance workflows between Enscape and V-Ray, boost real time collaboration, and more
Smart reality capture 34
A new integrated reality capture solution from Looq uses computer vision, AI and a proprietary handheld camera with GPS, to capture infrastructure at scale
Treble: sound advice 38
New software helps analyse and optimise designs for acoustic performance
TestFit runs free 42
The Texas-based design automation software developer releases a free massing tool for architects
Informed Design 44
Autodesk connects BIM (Revit) with fabrication (Inventor) via the cloud to support modern methods of construction
Scaling-up on-site digital construction 46
Facit Homes brings new hope to housebuilding and the digitisation of fabrication
Prime time for iGPU 48
Laptop processors with integrated GPUs are now powerful enough for 3D CAD. Does this mean a cheaper, slimmer future?
5 www.AECmag.com March / April 2024
Building Information Modelling (BIM) technology for Architecture, Engineering and Construction

FREE SUBSCRIPTIONS Register your details to ensure you get a regular copy register.aecmag.com

editorial
MANAGING EDITOR GREG CORKE greg@x3dmedia.com
CONSULTING EDITOR MARTYN DAY martyn@x3dmedia.com
CONSULTING EDITOR STEPHEN HOLMES stephen@x3dmedia.com

advertising
GROUP MEDIA DIRECTOR TONY BAKSH tony@x3dmedia.com
ADVERTISING MANAGER STEVE KING steve@x3dmedia.com
U.S. SALES & MARKETING DIRECTOR DENISE GREAVES denise@x3dmedia.com

subscriptions
MANAGER ALAN CLEVELAND alan@x3dmedia.com

accounts
CHARLOTTE TAIBI charlotte@x3dmedia.com
FINANCIAL CONTROLLER SAMANTHA TODESCATO-RUTLAND sam@chalfen.com

AEC Magazine is available FREE to qualifying individuals. To ensure you receive your regular copy please register online at www.aecmag.com

about
AEC Magazine is published bi-monthly by X3DMedia Ltd, 19 Leyden Street, London, E1 7LE, UK
T. +44 (0)20 3355 7310 F. +44 (0)20 3355 7319
© 2024 X3DMedia Ltd All rights reserved. Reproduction in whole or part without prior permission from the publisher is prohibited. All trademarks acknowledged. Opinions expressed in articles are those of the author and not of X3DMedia. X3DMedia cannot accept responsibility for errors in articles or advertisements within the magazine.
Resolve brings design review to Apple Vision Pro, blending 3D models with 2D docs
Resolve, the AEC-focused design review solution, is now available for the Apple Vision Pro, the much-hyped mixed reality headset that launched last month.
Resolve is a collaborative VR tool that is well established on the Oculus Quest VR headset. It includes native integrations with Autodesk Construction Cloud (ACC) and Procore, where models are automatically kept up to date in VR without having to export or upload with each design change.
According to Resolve CEO, Angel Say, one of the key benefits of Apple Vision Pro in BIM and AEC workflows is its ability to ‘supercharge multi-dimensional multitasking’. In other words, viewing crisp 2D documents in a 3D spatial environment, thanks to the headset’s incredibly high-resolution display.
“It means you can have your 2D content that is an established part of the [AEC] industry and isn’t going anywhere, and then seamlessly transition into a 3D spatial environment, whether that’s overlaying a model on top of your environment or being fully immersed in a 3D model,” he explains, adding that this workflow has been hard to achieve on other devices.
“For example, you can open up the ACC
or Procore iPad app and navigate it as if it were an iPad floating in front of you. And then at the click of a button, jump into a 3D environment in an application like Resolve, but bring that iPad app with you, and suddenly you have your ACC issues floating in front of you. And you’re running multiple apps at the same time, but in different dimensions. One is 2D, one is 3D, and you can go back and forth however you see fit.
“In the Resolve app on the App Store, you can interact with these 2D PDFs and
it feels really smooth. You can zoom in and out with ‘pinch to zoom’ like you’re used to. You can drag and scroll around the 2D PDF. Of course, you can pull up issues and click on them and see them in your model.”
Users can also leave spatial annotations on designs to track key issues and tasks, sync annotations back to ‘industry tracking tools’, and control the opacity of models so they can see through to the environment around them.
■ www.resolvebim.com
Graphisoft BIMx available for Apple Vision Pro
Graphisoft has released BIMx –BIM Experience for Apple Vision Pro, an interactive app for exploring BIM projects and linked documentation sets created in Archicad and DDScad.
The first version of BIMx on Apple’s visionOS supports full immersion mode in 3D. However, Graphisoft explains that navigation in the virtual model is experimental and will be fine-tuned.
“We’re actively working on solutions to bring you the immersive experience you crave. We have big plans for its evolution, with more features and enhancements scheduled for future updates,” said Graphisoft’s Emoke Csikos.
■ www.graphisoft.com
Nvidia to extend reach of Omniverse with new Cloud APIs
Nvidia is making its Omniverse Cloud industrial digital twin platform available as a series of five application programming interfaces (APIs) to make it easier for software developers to integrate core Omniverse technologies for OpenUSD and RTX into their existing design and automation software applications.
According to Rev Lebaredian, Nvidia VP
of Omniverse and simulation technology, this gives ISVs all the powers of Omniverse: interoperability across tools, physically based real time rendering, and the ability to collaborate across users and devices.
In its simplest form, software developers can embed Omniverse powered viewports into their 3D apps, giving their apps ‘instant’ real-time physically based rendering – pixel
streaming from the cloud. Software developers can also use the API to connect Generative AI tools into their existing apps, as well as a range of workflow tools for OpenUSD data.
The Omniverse Cloud APIs are being adopted by several software developers including Ansys, Hexagon, Microsoft, Siemens, and Trimble.
■ www.nvidia.com/omniverse
Nvidia streams colossal 3D models into Apple Vision Pro
Nvidia has introduced a new service that allows firms to stream interactive Universal Scene Description (OpenUSD) scenes from 3D applications into the Apple Vision Pro mixed reality headset.
The technology makes use of Nvidia’s new Omniverse Clouds APIs (see story left), using a new framework that channels the data through the Nvidia Graphics Delivery Network (GDN), a global network of graphics-optimised data centres.
“Traditional spatial workflows require
developers to decimate their datasets – in essence, to gamify them. This doesn’t work for industrial workflows, where engineering and simulation datasets for products, factories and cities are massive,” said Rev Lebaredian, VP of Omniverse and simulation technology at Nvidia.
“New Omniverse Cloud APIs let developers beam their applications and datasets, with full RTX real-time physically-based rendering, directly into Vision Pro with just an internet connection.”
■ www.nvidia.com
PlanRadar enhanced with reality capture
PlanRadar is to get a new AI-powered ‘SiteView’ feature, designed to enhance the digital construction platform’s documentation and reporting capabilities.
SiteView allows users to capture 360° imagery of a project while walking the site
with a 360° camera attached to a helmet. Images are automatically mapped onto a 2D plan, creating a ‘detailed visual record’ of activity across every stage of the build. Reality capture images are automatically transferred, ready to review in PlanRadar.
■ www.planradar.com/product/siteview
App helps assess biodiversity net gain
Temple, a UK environment, planning and sustainability consultancy, has launched a new application to help AEC firms make decisions on development plans and meet the new biodiversity net gain (BNG) legislation.
The legislation demands that all new construction projects deliver at least 110%
of the biodiversity value found on a site prior to its development.
Temple developed the app using Esri’s GIS technology. Designed to simplify BNG assessments, the software is said to streamline the workflow from field data collection to in-office assessment and provide a real time BNG score.
■ www.tinyurl.com/BNG-assess
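The arithmetic behind the 110% requirement is simple: a scheme must deliver at least the site's pre-development biodiversity value plus a 10% net gain. A minimal sketch of that calculation, assuming simplified placeholder unit values (real BNG assessments use the UK's statutory biodiversity metric, which this does not implement):

```python
# Simplified illustration of the biodiversity net gain (BNG) rule:
# a development must deliver at least 110% of the site's
# pre-development biodiversity value (a 10% net gain).
# Unit figures here are placeholders, not metric-derived scores.

def required_post_development_units(baseline_units: float,
                                    net_gain: float = 0.10) -> float:
    """Minimum biodiversity units a scheme must deliver."""
    return baseline_units * (1.0 + net_gain)

def meets_bng(baseline_units: float, delivered_units: float) -> bool:
    """True if the delivered habitat value satisfies the 10% net gain rule."""
    return delivered_units >= required_post_development_units(baseline_units)

baseline = 80.0  # hypothetical biodiversity units before development
print(round(required_post_development_units(baseline), 2))  # 88.0
print(meets_bng(baseline, delivered_units=90.0))            # True
print(meets_bng(baseline, delivered_units=85.0))            # False
```

In practice the baseline and delivered figures come from habitat surveys scored with the statutory metric, which is the workflow tools like Temple's app aim to streamline.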
Ideate Automation links to ACC
Ideate Automation, a software tool designed to automate tasks in Revit, now integrates with Autodesk Construction Cloud.
With the new ‘no-cost’ Ideate Cloud Connector, project teams can ‘seamlessly integrate’ with Revit models managed in Autodesk Build, Autodesk Docs, or BIM 360, enabling Ideate Automation’s scripting capabilities to run time-intensive BIM tasks in the background.
Examples of tasks that can be started in Ideate Automation manually or scheduled to run at regular intervals include: opening large Revit files, running the Revit ‘audit and compact’ process, exporting data from Revit into PDF, DWG, NWC, and IFC file formats, and publishing files to Autodesk Construction Cloud.
■ www.ideatesoftware.com
Looq launches reality capture platform
Looq AI has launched the Looq platform, a ‘one-stop solution’ for surveyors, contractors, asset owners and engineers to capture infrastructure, in minutes, with ‘survey-grade accuracy’.
The platform is powered by what the company describes as a novel and fundamental computer vision and AI technology. There’s also a hardware component, a proprietary handheld ‘Q’ camera, which combines four high-resolution cameras (one on either side, one on top and one on the front) with ‘survey-grade’ GPS and an AI processor.
■ www.looq.ai
See page 34 for our interview with Looq AI CEO, Dominique Meyer
News
New licensing for Unreal Engine and Twinmotion

Epic Games is changing the way its Unreal Engine, Twinmotion and RealityCapture tools are licensed for firms in the AEC sector and in industries other than game development.
Users at companies that are generating under $1m in annual gross revenue will be able to access the tools for free. However, firms that go over that threshold will be subject to new seat-based enterprise software pricing from late April, with the launch of Unreal Engine 5.4.
The annual cost per seat of $1,850 will include access to Unreal Engine 5.4 (and future versions released during the subscription period) along with Twinmotion for architect-friendly real
time viz and RealityCapture for creating realistic 3D models from photos. Twinmotion and RealityCapture will still be available to purchase individually. Twinmotion seats will be priced at $445 per year, and include access to Twinmotion Cloud, as well as all updates released during the subscription period. Individual seats of RealityCapture will cost $1,250 per year, starting with RealityCapture 1.4.
Epic Games has also announced its goal to ‘fully integrate’ Twinmotion and RealityCapture with Unreal Engine by the end of 2025. The plan is to make them ‘fully accessible’ within the Unreal Editor.
■ www.unrealengine.com
■ www.twinmotion.com
Tekla 2024 structural tools launch
Trimble has introduced the 2024 versions of its Tekla software for constructible BIM, structural engineering, and steel fabrication management.
Tekla Structures 2024, Tekla Structural Designer 2024, Tekla Tedds 2024 and Tekla PowerFab 2024 boast an enhanced user experience and better collaboration through connected workflows, among other new features.
Tekla Structures 2024 offers ‘enhanced interoperability’ between Trimble software, hardware and third-party solutions. With support for open standards such as BCF (BIM Collaboration Format), users can
communicate model-based issues among project collaborators. Support for buildingSMART properties has also been improved, with extended IFC property sets. Automated fabrication drawing cloning has been enhanced too, with a view to benefiting the creation of steel and precast cast unit drawings.
Structural design and analysis software Tekla Structural Designer 2024 has been enhanced with ‘Staged Construction Analysis’, which takes into account that there is a sequence in construction and loading. This ‘fully automated’ process can be applied to the design of both concrete and steel structures.
■ www.tekla.com
Allplan, Frilo & Scia join forces
The Nemetschek Group is to bring together three of its AEC software brands – Allplan, Scia and Frilo – under the Allplan umbrella.
Allplan, a German-headquartered provider of BIM solutions, will be joined by structural engineering software brands Scia, based in Hasselt, Belgium, and Frilo, located in Stuttgart, Germany.
According to Nemetschek, the merger will provide a unique solution for architects and engineers. Allplan offers a single BIM platform from first concept through detailed design and multidisciplinary planning to prefabrication and construction deliverables.
Scia and Frilo deliver software for structural analysis and design, including both the analysis and design of 3D multimaterial structures as well as a ‘component orientated calculation approach’.
■ www.allplan.com
Scia Engineer gets new multicore solver
Scia Engineer 24, the latest release of the structural analysis software from Scia, part of the Nemetschek Group, has launched with a brand-new multi-core solver, which offers faster calculation times.
The new solver also allows for more control over the calculation process, enabling users to monitor ongoing tasks, review results, and interrupt processes as required.
Scia Engineer 24 also marks the release of one complete version, so users don’t need to navigate between different configurations.
■ www.scia.net
Alice uses AI to optimise Primavera P6 schedules

ROUND UP

Carbon Calculator
UK structural and civil engineering consultancy Perega has launched a CO2 accounting tool – The Carbon Calculator – to help its clients assess the whole carbon footprint of their projects, from inception to completion, and make greener material choices for a more sustainable built environment
■ www.perega.co.uk
New CEO for Bentley
Bentley Systems has announced that effective July 1, 2024, Greg Bentley will transition from Bentley Systems’ CEO to executive chair of its board of directors.
Nicholas Cumins, currently COO, will then be promoted to CEO
■ www.bentley.com
RSK invests in GIS
RSK Group, a specialist in sustainable solutions, has extended the use of Esri geospatial technology across its business including environmental impact and biodiversity net gain surveys for the water, renewable energy, property and construction industries
■ www.esri.com
Remap launches
Remap has been launched by two former Hawkins\Brown associates. The built environment consultancy offers a range of services including digital transformation strategy, computational BIM & digital design, 2D / 3D application development and design to construction solutions
■ www.remap.works
Fastest desktop CPU
Intel has launched the 14th Gen Intel Core i9-14900KS processor, which is being billed as the world’s fastest desktop processor. It delivers up to 6.2 GHz max turbo frequency out of the box, which will help accelerate CAD and BIM software
■ www.intel.com
MCAD exchange
BIMDeX has launched a suite of CAD interoperability products designed to help engineers exchange design data between leading mechanical CAD (MCAD) tools and Revit, AutoCAD and IFC. There are separate exporters for PTC Creo, Solidworks, Rhino, Autodesk Inventor and more
■ www.bimdex.com
Alice Technologies has added the ability to import Primavera P6 schedules directly into Alice Core, a new, enhanced version of its AI-powered construction scheduling software.
With Alice Core, contractors and asset owners can upload their baseline P6 or Oracle Primavera Cloud schedule and the software will then simulate scenarios to look for ‘optimised schedules’. Once the best path forward has been identified, it can be exported back to Oracle’s scheduling products.
According to CEO Rene Morkos, it is the most significant change to the software since it was first released in 2015.
“By making it fast and easy to bring information from Oracle’s scheduling products to Alice and back again, we are
enabling Primavera users to build on the investment they’ve made in their existing schedules,” said Morkos.
“We’re excited to see both contractors and asset owners maximise the impact that AI can have on their businesses with Alice.”
With the launch of Alice Core, Alice Technologies has also joined Oracle Partner Network (OPN).
Alice uses AI to analyse a project’s complex building requirements, generate efficient building schedules, and adjust those schedules as needed during construction. According to the company, users can simulate thousands of options in seconds with the software, then test different scenarios to find the optimum solution, leading to time and cost savings and reduced risk on projects.
■ www.alicetechnologies.com/alice-core
Revit add-in addresses sustainable design
Symetri has launched Naviate Zero, an Autodesk Revit add-on for sustainable building design, which aims to address the need for reduced embodied carbon and CO2 emissions within the building industry.
The software focuses on the early stages of a building’s life cycle by helping designers and engineers to calculate emissions, and make informed decisions directly within the Revit platform.
The software builds on Symetri’s strategic partnership with One Click LCA, a global platform for lifecycle assessment, environmental product declaration, and
sustainability. By utilising One Click LCA’s databases, Naviate Zero provides architects and designers with direct access to global building material data.
■ www.naviate.com
Enscape to visualise energy efficiency of building designs
Chaos is working on a new Building Performance Module for its architectural visualisation software Enscape that will enable users to visualise the energy efficiency of buildings while designing in real time. Later this year the company will also launch a new ‘storytelling’ solution, codenamed Project Eclipse, that allows users to ‘rapidly enhance’ their Enscape and V-Ray scenes.
The Enscape Building Performance Module is powered by technology from IES. According to the developers, it will help drastically reduce the amount of time required for architects to achieve optimum energy efficiency for their designs and improve sustainability. Users will be able to visualise aspects such as daylight, energy usage, and comfort analysis. The aim is to help architects see the impact of their early-stage design decisions and to help create more sustainable buildings.
“[The Building Performance Module for Enscape] will place the power of insightful decision making directly in your hands and democratise analysis,” said Phil Miller of Chaos. “Bringing this to Enscape connects the BIM model with performance analysis, and within the real time visualisation mode you already know. Your analysis will dynamically update with your changes, just as your visualisation updates within Enscape.”
■ www.enscape3d.com
Turn to page 30 for more on the Building Performance module and page 32 for more on Project Eclipse and collaborative workflows
Graphisoft shares Archicad benchmarks
Graphisoft has published Archicad benchmark scores comparing the new Apple MacBook Pro with Apple M3 Max silicon to older generation Apple computers, including a Mac Studio with an Apple M1 Max, a Mac mini with Apple M1 and an iMac with an Intel Core i7. According to the company, compared to the Intel Core i7-based iMac, the new M3 Max chip is up to 2.5x faster at opening files, up to 3x faster in section generation and display, up to 3x faster in documentation layout update and display, and up to 2x faster at rendering with the Cineware and Redshift engines.
For the Archicad benchmark tests, the company used three different model sizes: a 343 MB residential project model with 1,014 3D elements, a 530 MB commercial project model with 19,965 3D elements and a 2.26 GB stadium model with 94,944 3D elements.
As one would expect, the performance leap over Apple M1 and Apple M1 Max silicon is not as large, but it is still significant. No comparison is made with Apple M2 silicon. The full benchmark results can be seen at the link below.
■ www.tinyurl.com/archicad-mac
IES tracks building performance
IES has launched a new cloud-based platform designed to utilise energy models throughout a building’s lifecycle. With IES Live, an IES ‘Performance Digital Twin’ can be hosted online and connected with operational building data with a view to enabling continuous tracking and improvement of building performance.
According to the company, IES Live allows sustainability, energy and facilities teams to take control of building operation, reduce energy risk, increase resilience, unlock netzero potential, and deliver healthy and comfortable spaces.
The platform can display near real-time energy and carbon emission performance data from utility meters, BMS systems and IoT sensors, against a predicted ‘ideal’ energy benchmark, and, if available, occupancy data.
■ www.iesve.com/ies-live
Laser scan navigation
Cintoo has improved the way in which users can navigate laser scans on its ‘Visual Twin’ platform using a ‘teleportation camera’.
Previously, scan-to-scan navigation was limited to the scan set-up locations. Now users can teleport by pointing, clicking and setting the desired height.
Combined with TurboMesh, the mesh streaming engine, Cintoo explains that this provides the user with ‘unlimited virtual vantage points’ at the same resolution as the source scan.
Teleportation can also be used to navigate imported BIM or CAD models, to compare to as-built scans.
■ www.cintoo.com
Leading US firms extend investments in Revizto

ROUND UP
Going underground
Israeli startup Exodigo has secured $105m in Series A funding for its non-intrusive subsurface mapping solution for construction and utility firms. The Exodigo system combines advanced sensors, 3D imaging and AI to give a clear picture of underground conditions
■ www.exodigo.com
Newforma Konekt
A bridge between the Newforma Konekt platform for Project Information Management (PIM) and Procore’s contract administration platform is designed to support the ‘efficient review of submittals and RFIs’ by design teams
■ www.newforma.com
Bricklaying robots
Monumental, a startup that makes bricklaying robots, has raised $25m in funding. The money will help the company grow its team of engineers, scale the number of robots it can deploy on sites across Europe and increase the types of blocks and tasks the robots can manage
■ www.monumental.com
HVAC design
BlueGreen Engineering used FineHVAC BIM software from 4MCAD to design the HVAC system for a new Breast Cancer facility in the Westmead Hospital in New South Wales, Australia. The 3D BIM model helped maximise the utilisation of space within the building
■ www.4msa.com/brands/finehvac
Wireless networks
Amrax has partnered with network planning solution provider Ranplan Wireless to help designers use 3D modelling to create efficient wireless networks. Ranplan’s customers can visualise and optimise the design of networks using 3D models captured with an iPhone Pro or iPad Pro
■ www.amrax.ai
BuildData rebrands
BuildData Group is bringing together its brands to operate under Zutec. The BuildData Group name will no longer be used and the Createmaster, Createmaster Information Management, and Resi-Sense brands have been changed to Zutec
■ www.zutec.com
Revizto provides project teams at HED with a single source of truth
AEC collaboration software provider Revizto has secured enterprise agreements with two major US firms — HED, an architecture and engineering firm headquartered in Royal Oak, Michigan, and Hoar Construction, a contractor headquartered in Birmingham, Alabama.
Since 2017, HED has utilised Revizto for design coordination, quality control, and model clash detection. The new agreement will help expand its use of the software across its 400+ active projects in healthcare, education, housing, mixed-use, manufacturing, science, workplaces, and community design, and enable its team to invite an unlimited number of cross-functional users into any project.
“Revizto provides our project teams
with a single source of truth. If they need to review anything from the project details on the sheets to the coordination of the integrated disciplines in the model, they can turn to Revizto,” said Jason Rostar, CM-BIM, HED’s corporate practice technology leader. “We are very excited to start utilising the Revizto clash tools, as well as the new Phone App, which will give us access to critical information when out of the office and on the construction site.”
Hoar Construction has been using Revizto for six years to improve collaboration among its teams. The company will be using Revizto on various projects in the healthcare, higher education, office spaces, entertainment, and mixed-use development market sectors.
■ www.revizto.com
Hexagon joins forces with Nemetschek
Hexagon’s Geosystems division and the Nemetschek Group, which owns leading AEC brands including Graphisoft, Allplan and Vectorworks, have formed a strategic partnership with a view to ‘accelerating digital transformation within the AEC/O industry’.
As a first step, the partnership will look to drive the adoption of digital twins. The two companies aim to provide customers with tools, services and expertise for an ‘end-to-end digital twin workflow’ by joining up-to-date building data from Hexagon’s reality capture solutions with building operations powered by Nemetschek’s dTwin. Hexagon offers ‘end-to-end’ reality capture and Scan2BIM solutions to automatically capture accurate and real-time field data, as well as AI-powered solutions for building analytics and simulations, to generate progress insights. dTwin, Nemetschek’s digital twin platform, delivers data-driven insights and helps customers to manage facilities from design to operations. It is claimed to be the first solution that fuses all data sources of a building in one overarching view.
■ www.hexagon.com
■ www.nemetschek.com
Autodesk to take over payments from customers, diminishing the role of the VAR
Throughout its history, Autodesk’s business model has been constantly evolving. Someone joked on LinkedIn recently that ‘Autodesk’s business model is to get you to the next business model.’
For end users, this continual change has been seen in the form of product bundles, Suites, Collections, the move from perpetual licences to subscription, tokens, the removal of network licensing, mandatory named-user licensing, and so on. The rapid evolution has meant that many firms are operating the same software under different licensing conditions.
Autodesk also regularly changes the business models of the value added resellers (VARs) that sell its software. Over the years it has altered product margins, MDF (co-op market development funds), targets, incentives, promotions or execution. With each new amendment, it’s then down to the VAR to funnel their resources to try and meet the new goals.
For the last thirty plus years this magazine has witnessed this familiar dance between resellers and Autodesk. However, over the last seven years, with the internet and digital fulfilment, it’s become increasingly obvious that resellers have felt the squeeze like never before. Autodesk has been taking an increasing number of clients direct, while margins have been reduced.
Some VARs have responded by developing their own software, increasing their consultancy levels, banding together or acquiring other resellers to benefit from economies of scale. Some went out to get venture capital to create huge resellers that spanned the globe.
Last year in Australia and New Zealand, a region where Autodesk likes to test out changes to its business model, we heard that Autodesk was taking over the order processing of all the reseller deals. While the reseller would still actively manage the customer base and sell software services (SaaS) and their own services, Autodesk would take payment direct.
In February 2024 this was rolled out to the USA, and we hear it will happen in the UK next year. This represents a substantial change to Autodesk’s role within customer sales and will significantly alter the cashflows of VARs — especially the very large ones. It gives Autodesk insight into every deal and every discount, and it lets Autodesk benefit from the positive cash flow of the sales of all its software, paying commissions back to the reseller.
It’s hard to predict the outcome of all of this, but it would indicate that the street price of Autodesk products is set to move closer to the RRP and web prices that are available.
For resellers this may actually be a good thing, in terms of income, as Autodesk dealers competed mainly amongst themselves and discounted to the bone to win business. We’ve seen numbers quoted of $600 million given away in discounts by the channel.
Now that competition can’t be based on price, the large resellers that relied on that cash flow might struggle as the economy of scale is somewhat neutered. It also seems one step closer to the inevitability of Autodesk going fully direct. However, Autodesk’s applications are complex and there will always be a market for training, consultancy services and implementation, which Autodesk will not have the local resources to fulfil.
Autodesk’s three-yearly Enterprise Business Agreements (EBAs) are for customers of a certain size, but the qualifying threshold of seat count / company value has been falling for a while. In the future, EBAs might become the best way to get discounts, but they come with the constraint that they are notoriously hard to get out of. On leaving, all the
software used would need to be repurchased at street price.
Some AEC firms decided that buying through resellers at discount and avoiding EBAs gave better flexibility. We think that these alternative routes will make less business sense as things progress.
While this is being presented as a change in process, it feels like the beginning of a change in business model and route to market. It’s like someone taking over driving a car in stages, as they swap seats. One has to wonder just what this will do to the number of resellers in the long term. At the recent Autodesk ‘One Team’ VAR Conference we hear there were some rather heated conversations.
Customers in the USA have had emails which ask them to set up Autodesk as a vendor on their internal procurement systems before their next purchase. Quotes and services are still handled by the reseller.
Autodesk states the benefits to customers will be a more 'personalised service' (although, surely, this only applies to firms that never see their resellers in the flesh) and a streamlined process with self-service tools to set up payment, subscription terms and renewals.
The other benefit mentioned by Autodesk is ‘Predictable pricing: enjoy consistent pricing, ensuring the best value for your investment’.
If, as we predict, street prices go up with no real route to negotiate, some customers may consider this plain cynical.
13 www.AECmag.com March / April 2024
Autodesk will soon collect direct payment from customers for software like Revit
Intel Core Ultra laptop processors launch with integrated CAD optimised GPUs
Intel has launched its new vPro platform with Intel Core Ultra laptop processors that will form the foundation for mobile workstations from Dell, HP, Lenovo and others.
For users of CAD and BIM software, the big news is the built-in Intel Arc GPU, which is said to offer up to twice the graphics performance of the previous generation, plus advanced features such as AI and hardware ray tracing.
Mobile workstations from major OEMs will also ship with a new dedicated Intel Arc Pro workstation graphics driver which comes with Independent Software Vendor (ISV) certifications and ‘enhanced performance optimisations’ for creative, design and engineering software. Intel is currently working on certifications for several CAD tools including Autodesk Fusion, Autodesk Inventor, Siemens Solid Edge, Siemens NX, Dassault Systèmes Solidworks, Nemetschek Vectorworks, Bentley MicroStation, PTC Creo and more. Certifications for the digital content creation tools Autodesk Maya and 3ds Max are also in progress.
Intel Core Ultra processors also feature a dedicated AI acceleration capability spread across the central processing unit (CPU), graphics processing unit (GPU) and the new neural processing unit (NPU). In some AI workflows, the GPU still delivers the best performance, but the NPU is more power efficient. According to Intel, the new Intel Core Ultra 7 165H uses up to 36% less power than the previous generation Intel Core i7-1365U in certain video conferencing operations such as background blur in Zoom, leading to longer battery life.
Intel Core Ultra comprises two product families: the Core Ultra H-Series and Core Ultra U-Series. The H-Series features up to sixteen cores, comprising Performance cores (P-Cores), Efficient cores (E-Cores) and new Low Power Efficient (LPE) cores, for very lightweight tasks.
The top end Intel Core Ultra 9 185H processor features 6P, 8E and 2LP cores, and has a max P-Core frequency of 5.1 GHz. However, with a base power of 45W and a max turbo power of 115W, the chip looks best suited to high-end mobile workstations. For highly portable laptops expect to see a more limited choice of processors up to the Intel Core Ultra 7 165H. This CPU has the same number of cores, but a slightly lower max P-Core frequency of 5.0 GHz and a lower base power of 28W.
Meanwhile, the Intel Core Ultra U-Series uses less energy and comes with a base power of 15W and a max power of 57W. It achieves this by reducing the number of P-Cores to two, while retaining the same number of E and LP cores. In single threaded CAD workflows, performance shouldn't be far off that of the H-Series. However, users can expect significantly lower multi-threaded performance. The U-Series also has less powerful Intel graphics. The top-end Intel Core Ultra 7 165U has a max P-Core frequency of 4.9 GHz.
■ www.intel.com
What AEC Magazine thinks
From the independent CPU benchmark scores we have seen, we don't expect Intel Core Ultra to offer a significant performance benefit over Intel's previous generation. Performance in single threaded CAD software might be on par, but highly multithreaded software, such as ray trace rendering, may get a boost thanks in part to the two additional LPE cores.
However, it's not in CPU performance that Intel's new laptop processors are likely to garner most interest among architects and engineers. With the significantly more powerful Intel Arc GPU in the Intel Core Ultra H-Series and a new dedicated Intel Arc Pro workstation graphics driver, the new chip could deliver sufficient performance for some 3D CAD workflows, so users do not need to buy a mobile workstation with a discrete GPU, such as the new Nvidia RTX 500 Ada Generation.
Of course, for professional apps like 3D CAD, performance can mean very little without driver stability, and Intel's investment in its discrete desktop and mobile Arc Pro graphics cards over the last 18 months should certainly help here. However, Intel also faces stiff competition from AMD and its Ryzen Pro 7000 / 8000 Series processors, which also feature an integrated GPU and a much more mature pro graphics driver.
The other notable benefit of Intel Core Ultra is the power efficient NPU, which can be used to offload AI processing tasks that previously would need to be done on the GPU or CPU. With a growing need for AI processing in laptops, including noise suppression and background blurring in video conferencing, the chip's AI capabilities should prove more important moving forward.
Workstation news
HP ZBook G11 mobile workstations unveiled
HP has unveiled the G11 editions of its ZBook mobile workstation family (Power, Fury, Studio and Firefly), featuring a broad range of processors including Intel Core Ultra 5, 7 & 9 or ‘next-generation’ AMD Ryzen Pro with dedicated NPUs. All the new HP ZBooks have 16-inch displays (the Firefly G11 also has a 14-inch option) and come with a choice of Nvidia RTX Ada Generation Laptop GPUs.
The HP ZBook Power G11 features a new premium 16-inch design that is slightly deeper than the G10 edition. It's available with Intel Core Ultra 9 or AMD Ryzen 9 Pro processors and comes with a choice of laptop GPUs, including the new Nvidia RTX 500 (see page 17) and RTX 1000 Ada, up to the RTX 3000 Ada. The AMD version only goes up to the Nvidia RTX 2000, but can also rely on AMD Radeon Graphics built into the CPU, which features a pro graphics driver. The laptop supports up to 64 GB SODIMM DDR5 memory and up to 2 TB NVMe PCIe 4 x 4 SSD. It starts at 2.04 kg and is 22.9mm thick.
The HP ZBook Fury G11 is billed as the most powerful ZBook, with a choice of ‘desktop class’ Intel Core HX processors, including 13th Gen Intel Core (with vPro) or 14th Gen Intel Core (non vPro). It can be configured with the top-end Nvidia RTX 5000 Ada Laptop GPU. It supports up to 128 GB SODIMM DDR5 memory and up to 16 TB NVMe PCIe 4 x 4 SSD. It starts at 2.4 kg and 27.7mm.
The HP ZBook Studio G11 is HP's premium thin and light mobile workstation, starting at 18.3mm and 1.73kg. It comes with a choice of Intel Core Ultra processors, up to Nvidia RTX 3000 Ada Laptop GPU, up to 64 GB SODIMM DDR5 memory and up to 4 TB PCIe 4 SSD.
The HP ZBook Firefly G11 is at the budget end of the range and comes in 16-inch and 14-inch form factors. It's available with Intel Core Ultra 7 or AMD Ryzen 9 Pro 8945HS processors and up to Nvidia RTX A500 Laptop GPU. Like the HP ZBook Power G11, it can also rely on the AMD Radeon Graphics built into the CPU, which should be powerful enough for mainstream 3D CAD workflows.
The 14” HP Firefly G11 A (AMD) starts at 1.4kg, the 14” HP Firefly G11 starts at 1.45kg and the 16” HP Firefly G11 starts at 1.76kg. All three models come in at 19.8mm.
■ www.hp.com/z
Dell claims smallest footprint 16-inch mobile workstation
Dell has launched five mobile workstations built around the new Intel Core Ultra processor. This includes the 14-inch Precision 3490 and Precision 5490, the 15.6-inch Precision 3590 and Precision 3591, and the Precision 5690, which it claims to be the world’s smallest footprint 16-inch mobile workstation.
All five laptops feature the new Intel Core Ultra laptop processors, which combine Central Processing Unit (CPU), Graphics Processing Unit (GPU) and Neural Processing Unit (NPU) in a single piece of silicon (see story, left).
There’s a choice of 15W, 28W or 45W models depending on the machine. Some models can be configured without a discrete GPU and still be adequate for 3D CAD workflows, taking advantage of the much-improved Intel Arc graphics and Arc Pro driver with certifications for many leading CAD tools.
The Precision 3490 and Precision 3590 look best suited to entry-level 3D CAD, with a choice of processors up to the 28W
Intel Core Ultra 7 165H and up to Nvidia RTX 500 Ada (4 GB) graphics. The 3490 starts at 1.40 kg and the 3590 at 1.62 kg.
The Precision 3591 offers more powerful processors up to the 45W Intel Core Ultra 9 185H vPro and Nvidia RTX 2000 Ada (8 GB) graphics, which takes it into the realms of entry-level visualisation. It starts at 1.79 kg.
All three Precision 3000 Series laptops feature displays with a maximum resolution of 1,920 x 1,080 (FHD).
The Precision 5490, which Dell claims to be the world's smallest and most powerful 14-inch mobile workstation, offers more powerful graphics options up to the Nvidia RTX 3000 Ada (8 GB). It comes with a choice of displays up to a QHD+ (2,560 x 1,600) resolution, 60Hz, anti-reflection, touch, 100% sRGB, 500 nits, wide-viewing angle, PremierColor panel with active pen support. It starts at 1.48 kg.
Meanwhile, the Precision 5690 goes all the way up to the Nvidia RTX 5000 Ada (16 GB) graphics for more demanding visualisation workflows. The Precision 5690 is the only new model to offer a 4K display, with an OLED touch panel, 3,840 x 2,400, 60Hz, 400 nits WLED, Adobe 100% min and DCI-P3 100%.
Nvidia RTX 2000 Ada GPU for compact desktop workstations launches
Nvidia has added the Nvidia RTX 2000 Ada Generation (16 GB) to its family of desktop workstation GPUs based on the Ada Lovelace architecture. The $625 add-in board launched last month at Dassault Systèmes 3D Experience World in Dallas, Texas – a significant location given that product designers and engineers are a key target audience (along with architects).
The Nvidia RTX 2000 Ada Generation is a low-profile dual slot board, designed to fit in small form factor and ultra-compact workstations like the HP Z2 SFF / Mini and Lenovo ThinkStation P3 Ultra. It comes with both a half-height and full-height ATX bracket, so it can be used in full-sized towers as well.
The Nvidia RTX 2000 Ada Generation comprises 2,816 CUDA cores delivering
12 TFLOPs of peak single-precision performance, 88 Tensor (AI) cores delivering 192 TFLOPs of peak performance and 22 RT (ray tracing) cores delivering 28 TFLOPs of peak performance.
The Nvidia RTX 2000 Ada Generation (16 GB) will replace the Ampere-based Nvidia RTX A2000 (12GB) and provide a much more price competitive option than the $1,250 Nvidia RTX 4000 SFF Ada Generation (20 GB), which launched last year.
The Nvidia RTX 2000 Ada looks identical to both GPUs and shares many of the same characteristics — four mini DisplayPort 1.4a connectors and a 70W max board power (so it doesn’t need an external 6-pin connector). However, unlike the Nvidia RTX 4000 SFF Ada, it does not support Nvidia Quadro Sync II or 3D stereo.
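Those headline figures hang together with a standard rule of thumb: peak FP32 throughput is roughly twice the CUDA core count multiplied by the boost clock, since each core can retire one fused multiply-add (two floating-point operations) per clock. A quick sanity check (function names our own) recovers the boost clock implied by Nvidia's quoted numbers:

```python
# Rule of thumb: peak FP32 FLOPs ≈ cores * 2 (FMA = 2 ops/clock) * clock (Hz).
# We solve it both ways for the RTX 2000 Ada's quoted 2,816 cores / 12 TFLOPs.

def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak single-precision TFLOPs, assuming one FMA per core per clock."""
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

def implied_boost_clock_ghz(cuda_cores: int, quoted_tflops: float) -> float:
    """Boost clock implied by a quoted peak-TFLOPs figure."""
    return quoted_tflops * 1000.0 / (cuda_cores * 2)

clock = implied_boost_clock_ghz(2816, 12.0)
print(f"Implied boost clock: {clock:.2f} GHz")  # ≈ 2.13 GHz
```

Working backwards from 2,816 cores and 12 TFLOPs gives a boost clock of roughly 2.1 GHz, which is plausible for a 70W board.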
■ www.nvidia.com
What AEC Magazine thinks
The Nvidia RTX 2000 Ada Generation is an important release for Nvidia as it hits the price/performance sweet spot for architects, engineers and product designers who want to take their design workflows beyond CAD/BIM and into the realms of visualisation.
Prior to the launch, users had to pay a significant premium to get a low profile Ada Generation board (Nvidia RTX 4000 SFF Ada), or go for the older Nvidia RTX A2000, which has less memory and does not support DLSS 3.0, which can make a big difference to real time performance in tools such as D5 Render, Chaos Vantage, Autodesk VRED and Nvidia Omniverse.
Target workflows for the Nvidia RTX 2000 Ada Generation include 3D CAD, BIM (Solidworks, Siemens NX, Revit, etc.) and, in particular, design-centric visualisation and VR. Compared to the Nvidia RTX A2000, Nvidia reports superior performance in rendering (1.5 times better in KeyShot, 1.5 times better in V-Ray and 1.18 times better in Solidworks Visualize) and in VR (1.4 times better in Enscape and 1.3 times better in KeyShot).
Nvidia also highlights much bigger rendering performance gains over 2019's Nvidia Quadro P2200, a card that many users will likely be making an upgrade from. The Nvidia RTX 2000 Ada Generation is up to four times faster in Solidworks Visualize. This big leap is probably because the 5 GB Quadro P2200 has no dedicated ray tracing cores, and possibly also because it's under-resourced in terms of memory.
As the race for AI continues, Nvidia is also keen to highlight the Nvidia RTX 2000 Ada Generation's AI credentials. According to Nvidia, the GPU doubles the AI throughput compared to the previous generation, thanks to fourth Gen Tensor cores. With text to image software Stable Diffusion, which can be used for architectural concept design, Nvidia reports 1.6 times better performance compared to the Nvidia RTX A2000.
Nvidia says a lot of work has been done to ensure that all of its RTX Ada Generation GPUs are able to run AI inferencing on the desktop, but admits that the RTX 2000 Ada won't be a ‘powerhouse for AI training’.
But what about the low end of the pro GPU market? For now, CAD users must still rely on the Nvidia T400 or T1000, which lack the dedicated Tensor cores for AI or RT cores for ray tracing. As CAD applications start to introduce ray tracing into the viewport and, especially, as AI becomes more important, we would not be surprised if Nvidia introduces entry-level Ada Generation RTX desktop cards soon.
Dell unveils 14th Gen Intel Core workstations
Dell has updated its entry-level desktop workstation line-up with the launch of the Precision 3280 CFF (compact form factor) and Precision 3680 Tower. Both workstations feature 14th Gen Intel Core Processors and come with a choice of Nvidia RTX GPUs.
The Dell Precision 3280 CFF is billed as the world’s smallest workstation that supports Tensor core GPUs, referring to the optional Nvidia RTX 2000 Ada Generation or Nvidia RTX 4000 SFF Ada Generation. Both GPUs are well suited to CAD and visualisation workflows in applications including Enscape, Twinmotion and V-Ray. The machine can also be configured with CAD-focused GPUs, including the Nvidia
T1000 and AMD Radeon Pro W6400.
The compact chassis is designed for flexibility. It can be set on a desk, mounted on the back of a monitor, or kept in the datacentre or server room, for access over a 1:1 connection. Seven units can be stored in a 5U rack space.
The Dell Precision 3280 CFF can support a range of 14th Gen Intel Core CPUs up to 65W (which can be driven at 80W), but not the top-end 125W models.
The Dell Precision 3680 Tower is a more traditional desktop workstation offering more expandability, and higher spec CPUs. It comes in at 373 x 173 x 420mm. It supports a wide range of 14th Gen Intel Core CPUs, up to the 125W Intel Core i9-14900K vPro. According to Dell, this can be run up to 253W for extended
periods and, rather than having a limited Turbo experience of 15 to 30 seconds, the Turbo is now unlimited. This sustained performance is in part due to a new ‘Premium Cooling Solution’ featuring an air shroud and enhanced side panel venting, which is designed to keep the system running cool and quiet.
■ www.dell.com
The compact Dell Precision 3280 CFF can be mounted behind a display
Ray tracing and AI come to entry-level Nvidia Ada laptop GPUs
Nvidia has added new entry-level models to its line-up of Ada Lovelace pro laptop GPUs. The Nvidia RTX 500 and 1000 Ada Generation will be available soon in highly portable mobile workstations from Dell, HP, Lenovo and MSI. They will join the Nvidia RTX 2000, 3000, 3500, 4000 and 5000 Ada Generation laptop GPUs which launched last year.
The Nvidia RTX 500 and 1000 Ada will replace the Ampere-based RTX A500 and A1000. On paper, there’s a significant generation-on-generation improvement in single precision floating-point performance (TFLOPs), although the new GPUs
have a higher TGP (total graphics power) so some of this may be because the GPUs can draw more power. There’s a much bigger leap in AI performance thanks to fourth generation Tensor cores.
AI is central to Nvidia's messaging around the new chips. The company points out that next generation mobile workstations with Ada Generation GPUs will include both a neural processing unit (NPU), a component of the CPU [think Intel Core Ultra], and an Nvidia RTX GPU, which includes Tensor Cores for AI processing. The company states that the NPU helps offload light AI tasks, while the GPU provides additional AI performance for more demanding day-to-day AI workflows.
Nvidia claims the higher level of AI acceleration delivered by the GPU is essential for tackling a wide range of AI-based tasks, such as video conferencing with high-quality AI effects, streaming videos with AI upscaling or working faster with generative AI and content creation.
According to the company, the new Nvidia RTX 500 GPU delivers up to 14x the generative AI performance for models like Stable Diffusion, up to 3x faster photo editing with AI and up to 10x the graphics performance for 3D rendering compared with a CPU-only configuration.
■ www.nvidia.com
What AEC Magazine thinks
It's an interesting time for entry-level mobile workstations. Nvidia is facing increased competition from AMD and Intel, who are introducing CPUs with increasingly powerful integrated GPUs that can deliver good performance for mainstream 3D CAD. It means there is less need for a discrete Nvidia GPU in certain workflows, potentially saving money and power.
For example, the AMD Ryzen Pro 7000 Series, at the heart of the HP ZBook Firefly G10 A and Lenovo ThinkPad P14s, can deliver good performance in Solidworks, Revit and other CAD/BIM tools. New Intel Core Ultra laptop processors with Arc Pro graphics offer a similar single-processor value proposition (see page 48 for more on this).
Of course, with a discrete GPU, Nvidia can scale up performance, and not just in terms of core 3D graphics or hardware ray tracing. Nvidia's Tensor Cores can significantly boost AI processing. At the moment, most of the benefits for AEC firms are either generic (text-to-image or background blur on video calls) or focus on photo editing or visualisation. Many viz tools include AI de-noising or Nvidia DLSS 3, which boosts 3D performance by generating additional high-quality frames.
As more AEC software firms embrace AI, applications are likely to grow, although with a growing trend for browser-based BIM tools and centralisation of data, much of this processing could be handled by GPUs in the cloud or in servers.
The productivity promise of auto-drawings
In a world swimming in AI-related hype, several CAD software firms are making serious progress in the race to eliminate workflow bottlenecks using new technologies. Martyn Day looks at the field of drawing automation and identifies some of its frontrunners
Throughout their history, desktop CAD systems have been sold on the promise of delivering one overriding benefit: improved productivity when compared to manual drawing.
That makes sense, since productivity improvements are always sought by firms looking to save money and to increase their competitive edge. In these highly digitised times, the deployment of technology has become a key differentiator in the AEC world.
While it took twenty years for most AEC firms to move from drawing boards to 2D CAD, many like to think that the adoption of 3D CAD and BIM has been faster. In fact, that move has also taken the best part of two decades. And many of those firms using BIM tools have still not progressed as far as they might, remaining resolutely 2D-centric. On the other hand, who can blame them, when the primary deliverable of most contracts – and the most frequently cited target in legal wrangles – are drawings?
BIM was sold to 2D CAD users as a way to model their buildings and get automatic sections and elevations as a base for drawings, with the added benefit of coordinated documentation updates when model designs change.
In fact, what has happened is that the number of drawings produced has mushroomed. Many BIM seats are not involved in design at all, but instead focus on documenting the relatively poor automated output of 3D models.
A cynic might applaud the CAD software firms for increasing the costs associated with a documentation seat and driving appetite for complex document management systems to handle greater volumes of drawings. After all, many firms take the output from BIM tools such as Revit and perform refinements to drawing output in AutoCAD, the king of the 2D drawing world. However, this undermines the benefit of keeping both models and drawings in the BIM environment, breaking the links needed to coordinate updates to sections and elevations.
As documentation mushrooms, and clients increasingly demand BIM deliverables (even if they only do so because that’s what their consultant advises), the result is a heftier workload that consumes available resources and delivers a serious hit to the bottom line.
Many design IT managers have told me in confidence that they wish that drawings would simply ‘go away’. They would prefer the model to be the deliverable from which all data is extracted by contractors and project managers.
But here, they are at odds with the industry’s foremost institutions, such as the Royal Institute of British Architects (RIBA) and the American Institute of Architects (AIA), not to mention complex legal frameworks that are still largely predicated on drawings.
Here to stay
Another question I hear regularly is, “Why can’t AEC be more like manufacturing?” The misconception here is that once manufacturing designs move into CAD, the importance of drawings is greatly diminished.
But it might shock readers to hear that many manufacturing sectors – and in particular, automotive and aerospace – are just as burdened by the need to produce manufacturing drawings to accompany their 3D models, typically for reasons relating to fabrication and legal compliance. And this is the case even where parts are 3D printed directly from a 3D CAD model.
With that in mind, it’s safe to say that drawings are simply not going to ‘go away’. They will continue to be essential deliverables, although uses may change and their relevance in some areas may decline.
This is not to say, however, that there is nothing to be done about this situation. In fact, modern mechanical CAD (MCAD) tools do a much better job of automatically creating drawings and reducing repetitive tasks than today's BIM tools, even for very complex parts and assemblies. That said, even with this progress, it's estimated that engineers spend between 25% and 40% of their time refining drawings.
This is frustrating, because in modelling a building, a car or an aircraft, most design and engineering decisions have already been made. That makes it hard to justify excessive amounts of time spent creating 2D drawings from a model, a process that adds no real value to a project.
And as complexity increases, the case for drawings starts to fall apart. Take, for example, Frank Gehry, who couldn't get any of his buildings made until he began using Dassault Systèmes Catia and McNeel Rhino. The drawings were so hard to understand that fabricators would bump up costs in order to accommodate the impact of inevitable misunderstandings. When Gehry's practice switched to modelling buildings in 3D CAD and started sending Catia models to fabricators, he saw a corresponding drop in quotes and bids were suddenly aligned to within 1% of each other. As a result, his buildings were viable – but Gehry still produces 2D section drawings in AutoCAD in order to detail more standard aspects of designs. In other words, models offer clarity in situations of complexity, but for most buildings, simple rectangles are hardly likely to bust budgets.
Auto-drawings on demand
When I first met Greg Schleusner, HOK's director of design technology, at a Bricsys (BricsCAD) meeting in Stockholm, Sweden in 2019, we quickly got talking about automated drawings. He explained to me that this was something the industry desperately needed, but that nobody seemed interested in developing the technology.
The case he put forward was clear. At that time, HOK was spending around $18 million per year on creating and managing drawings and construction docs (inclusive of staff costs, software and so on). Schleusner reckoned that, if the firm could automate 60% of the work associated with construction documents, or maybe 50% of the work associated with drawings, it would cut those costs in half. Even partial automation had the potential to deliver significant cost benefits.
While researching the topic, Schleusner discovered a Korean open-source project conducted for a building authority that focused on taking in IFC data and creating floor plan drawings for fire code review (www.tinyurl.com/KBim-D-Generator). Labels, text, drawing grids and dimensions were all added by the programme when the automated 2D drawings were viewed. He was impressed by the quality of the results. Everything displayed was dynamically created on the fly. It didn't exist in the dataset.
“I think, at least from my perspective, the problem is we import all this other stuff into Revit. That is so much of our design effort, just to waste time to make drawings that we must make. So not only are the drawings themselves costly, but we also have this interoperability problem that needs to be solved. We have to send model data between design tools to make drawings. An open auto-drawing tool would allow design teams to stay in Rhino,” said Schleusner.
“What do you need to make drawings? Do you need an accurate model? If we solve drawing automation, we are going to have to make better models. If we make better models – hey, we have better models! We are going to have to improve what we model to get a better drawing. Right now, we have so much drawing work, that we all end up ‘faking stuff’ in drawings, just to get them out the door. I actually think auto-drawings is a very complementary idea to making better models.”
“BIM 2.0's killer feature is not going to be cloud; it's going to be automation. Auto-drawings won't just improve the speed of drawing production. They will ultimately mean fewer skilled people being tied up in documentation.”
Since 2019, Schleusner has been bending the ears of leaders at many of the biggest CAD vendors, trying to spark their interest in auto-drawings. Now, with the advent of automation and AI, those vendors are scrambling to identify where best to apply new technologies. Since drawings represent ‘low-hanging fruit’, interest in auto-drawings has exploded.
Most of the early auto-drawing solutions will begin by offering configuration-based output (drawing templates). But Schleusner is keen to see work begin further upstream, at the project level. This would involve understanding what drawing sets are required by project managers and project architects, not just automating ‘the drawing’ but also automating project document sets. This would require the equivalent of HTML and CSS for drawing content, settings and how it is all put together.
“If the industry can solve this auto-drawing problem as a service, it could shatter the market into one thousand solutions, which is perfect. I don't think you'll need monolithic applications, because drawings are external and it's an assembly process.”
In passing, Schleusner has also lamented that PDF is the ultimate industry deliverable, as the historic constraints of standard drawing sizes and scales (1:100/200 and so on) limit the digital canvas. He explains that one of the reasons the industry makes so many drawings is because of the physical limitations of displaying text legibly within paper documents. “Couldn't we just send one bloody big PDF drawing, which contractors never print out? If you can't see something, then zoom in! While certain trades really liked this idea, it's true that some want to print things out, which is a limitation. But if we can't get the digital delivery of drawings right, how are we going to ever do the digital delivery of models?”
Greg Schleusner will be speaking at AEC Magazine's NXT BLD and NXT DEV conferences on 25 / 26 June 2024 at London's Queen Elizabeth II Centre (www.nxtbld.com)
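Schleusner's notion of an "HTML and CSS for drawings" can be made concrete with a toy sketch: content (which floors, which views, which annotations) is declared separately from style (scale, line weights), and a generator expands the two into a sheet list. Everything here (the schema, the field names, the sheet-numbering convention) is invented for illustration; no vendor format works like this.

```python
# A hypothetical, minimal separation of drawing *content* from drawing *style*,
# expanded into concrete sheet/view tasks. Schema invented for illustration.

SHEET_SPEC = {
    "floors": ["00", "01", "02"],
    "views_per_floor": ["plan", "rcp"],          # rcp = reflected ceiling plan
    "annotations": ["room_tags", "dimensions"],  # applied to every view
}

STYLE = {
    "plan": {"scale": "1:100", "line_weight_mm": 0.25},
    "rcp":  {"scale": "1:100", "line_weight_mm": 0.18},
}

def expand_sheet_set(spec: dict, style: dict) -> list[dict]:
    """Expand a declarative spec into one drawing task per floor/view pair."""
    tasks = []
    for floor in spec["floors"]:
        for view in spec["views_per_floor"]:
            tasks.append({
                "sheet": f"A-{view.upper()}-{floor}",
                "floor": floor,
                "view": view,
                "annotations": list(spec["annotations"]),
                **style[view],  # merge styling into the task
            })
    return tasks

for task in expand_sheet_set(SHEET_SPEC, STYLE):
    print(task["sheet"], task["scale"])
```

The point of the split is the same one Schleusner makes: the project-level question ("which sheets does this project need?") lives in the spec, while presentation lives in a reusable style, so either can change without touching the other.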
The Gräbert view
Gräbert is based in Berlin and has been a key independent DWG software developer since 1994. It has armed many of the best-known CAD players with OEM DWG capabilities but is also a solutions developer in its own right. DWG tools that fall under its ARES Trinity brand (www.tinyurl.com/ares-trinity) work on desktop, mobile and cloud, where they compete head on with Autodesk AutoCAD and LT. You will find Gräbert tech in Snaptrude, Draftsight (Dassault Systèmes), Trimble, Solidworks, Onshape – all serious players in the AEC and MCAD worlds, which require good DWG capabilities and want to leverage Gräbert’s cloud-based DWG Editor in particular.
Gräbert has been developing automated drawing technology for a number of years. The company is now at the stage where it has put code into the hands of some clients to test out. Thanks to the company’s broad reach in the CAD community, you can expect to see automated drawing features appear from competing CAD firms, aiming at Autodesk AutoCAD / LT and Revit. Some, such as Draftsight, will choose to embed these capabilities in their own CAD programmes. Others may opt to make autodrawings a cloud service, where users just drag and drop their IFC or RVT into the service and get back drawings for all the floors of their projects.
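The core of such a drag-and-drop service is unglamorous plumbing: parse the model, group elements by storey, and emit one plan per floor. The sketch below fakes the parsing step, with plain tuples standing in for wall data that a real implementation would pull from an IFC file (for example with a library such as IfcOpenShell); only the grouping step is shown.

```python
# Group wall segments by storey: one auto-generated floor plan per floor.
# Tuples of (storey_name, x1, y1, x2, y2) in metres stand in for parsed IFC.

from collections import defaultdict

walls = [
    ("Level 0", 0.0, 0.0, 10.0, 0.0),
    ("Level 0", 10.0, 0.0, 10.0, 6.0),
    ("Level 1", 0.0, 0.0, 10.0, 0.0),
]

def plans_by_storey(walls):
    """Bucket wall segments by storey, yielding the content of each plan view."""
    plans = defaultdict(list)
    for storey, x1, y1, x2, y2 in walls:
        plans[storey].append(((x1, y1), (x2, y2)))
    return dict(plans)

plans = plans_by_storey(walls)
for storey, segments in sorted(plans.items()):
    print(f"{storey}: {len(segments)} wall segment(s)")
```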
Gräbert’s technology does not currently use AI, but is procedural and based on template configurations. In my dealings with Gräbert, I have to say that the company is unusual, in that it doesn’t oversell its products. In fact, it’s more likely to downplay its capabilities and is not all about pushing the Gräbert brand. In many ways, this reminds me of McNeel, the developer of Rhino.
I recently caught up with CTO Dr Robert Gräbert to discuss the company’s
auto-drawing development. He explained that with templates and rules, it’s possible to get quite far in auto-drawing creation without AI. AI approaches look good for investors, he said, but template views offering a variety of layout styles are already available in MCAD applications such as Autodesk Fusion.
And, on the subject of Autodesk, he foresees Autodesk adding similar template-based drawing layout technology, either in Revit or on the cloud as a service.
“We think drawings are not going to go away. You’re not going to get to a future where you’re just going to exchange models, but you don’t want to spend more time working on drawings,” explained Dr Gräbert. “The fact is that BIM modellers have done a poor job of producing documentation. Users are frustrated that the drawings themselves don’t contain a lot of information. They’re just ‘the output’, they don’t capture anything.”
He continues: “There was this hope that if we moved to exchange 3D models, plus metadata, we were going to get better collaboration within industry. It’s a big vision and it’s attractive, but I think what we’ve seen is the reality that it’s very difficult to manipulate these models outside of dedicated authoring tools like Revit or others.”
According to Dr Gräbert, every BIM tool today gives the user the ability to create content, view it, and deliver section views, plan views and vertical sections. It gives them the geometry from the model that they’re going to use to create each view – and that’s already 90% of drawing automation.
“All we are talking about today is, ‘We know you need fifteen sheets, and you need a view per floor. And we know that you’re probably going to want to tag your rooms. And probably, you want to have some dimensions aligned to it.’ We’re really talking about automating annotation and view orchestration.”
‘‘ BIM modellers have done a poor job of producing documentation. Users are frustrated that the drawings themselves don’t contain a lot of information. They’re just ‘the output’, they don’t capture anything
Most software vendors have no interest in exchanging models in a neutral format, he believes. In other words, they want users to adopt their own format. “And legally we’ve never got to the point where the model is accepted as the deliverable. It’s an additional deliverable, potentially,” he adds.
Dr Robert Gräbert, CTO Gräbert ’’
“The biggest proponents of BIM are large owners, because they can reap the benefits of all their constituent subcontractors doing this together. Clients now say, ‘I want to see a 3D BIM model’, but what hasn’t happened is that this model is not taken from design, through construction and into operation. That never really materialised in a meaningful way.”
Gräbert has decided to differentiate between what it can automate and what it should automate. This has been a big part of its work in this area over the past year – and the focus is very much on continual improvement. “It’s still not perfect, but we’re working very closely with Snaptrude and others. We can ingest an IFC or Revit file, and we can output the PDFs. Now we have put that behind a web service. It’s still usable as a desktop product: if you are an ARES user, you can bring in IFC or RVT, and we will auto-generate floor plans with sections, according to what we think is appropriate. We have solved simple problems, like how to figure out where to put all the labels and everything else. We just implemented an approach that makes sure that labels never overlap, and then you can tweak and modify after.”
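Gräbert hasn’t disclosed how its label engine works, but the general idea of overlap-free placement is well established. Here is a minimal, purely illustrative sketch (all names are my own, not Gräbert’s): for each label anchor, try candidate offsets in priority order and keep the first position that doesn’t collide with labels already placed.

```python
# Hypothetical sketch of overlap-free label placement (not Gräbert's code).
# Each label anchors to a point; we try candidate offsets in priority order
# and keep the first rectangle that doesn't collide with placed labels.

def overlaps(a, b):
    """Axis-aligned rectangle intersection test; rect = (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_labels(anchors, w=10, h=4):
    """anchors: list of (x, y) points. Returns one rect per label."""
    # Candidate offsets: right, above, left, below, then pushed further out.
    candidates = [(2, 0), (0, 5), (-12, 0), (0, -5), (14, 0), (0, 10)]
    placed = []
    for x, y in anchors:
        for dx, dy in candidates:
            rect = (x + dx, y + dy, w, h)
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
        else:
            # No free candidate: park it far out for the user to tweak later.
            placed.append((x + 26, y, w, h))
    return placed
```

Real implementations weigh many more constraints (leader lines, view extents, drawing standards), but the “place, test, retry, then let the user tweak” loop matches what Gräbert describes.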
The web service is a kind of design automation API, he explained. It will create a job transform order for any template, request that the model be processed, and place the results in a project Slack channel. “So, you get an updated PDF output. That’s probably not a real use case, as somebody would download it and want to do tweaks. But the output is already very good. It’s just a long road to address the fundamental building blocks to get full automation,” he said.
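Gräbert has not published this API, so the shape of a “job transform order” can only be guessed at. As a hedged illustration of the workflow he describes – submit a model plus a template, get PDFs and a Slack notification back – every field name below is hypothetical:

```python
# Purely illustrative payload for a design-automation drawing service.
# None of these field names come from Gräbert; they just sketch the idea.
import json

def build_drawing_job(model_url, template, notify_channel):
    """Assemble a hypothetical job order: model in, per-floor PDFs out."""
    return {
        "input": {"model": model_url, "formats": ["IFC", "RVT"]},
        "template": template,                    # layout/annotation rules
        "output": {"format": "PDF", "per_floor": True},
        "notify": {"slack_channel": notify_channel},
    }

job = build_drawing_job("https://example.com/tower.ifc",
                        "office-floorplans-v2", "#project-drawings")
payload = json.dumps(job)  # would be POSTed to the service endpoint
```

The point is the division of labour: the client only names a model, a template and a destination; everything else – view creation, annotation, sheet layout – happens server-side.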
21 www.AECmag.com March / April 2024

Cover story

Gräbert is now experimenting with how to make the process more customisable. Users would configure a local job, and the cloud service would process it. For now, the cloud features are still in the early days of development and require a monitoring system before they can become a commercial-grade offering. Gräbert also doesn’t have a price tag for this service yet. This is currently under evaluation, along with scoping out the kind of firms that might find this useful, such as AutoCAD, Revit and ARES users.
Pricing in the auto-drawings market could become a very interesting battleground. If drawings become 50% automated, firms like HOK are not going to hand over half of their documentation budget for this kind of automation. Software firms pushed out of per-seat subscriptions might push for value-based billing, perhaps a percentage of project value, but that approach is deeply unpopular.
As Dr Gräbert mused during our conversation: “There’s going to be some resistance in the market to a ‘value’ based fee here, because customers want to pay a fixed price for this. Maybe a price per project could be something that software vendors could offer, but this is a reason why big groups of CAD users are meeting up, because they don’t want to pay vendors a portion of their revenue. They want to pay a fixed fee, ideally, once and never again – and if they have to pay annually, so be it, but that’s all.”
Powered by Gräbert
I also talked with Jonathan Asher, Catia global sales director at Dassault Systèmes. He confirmed that the Gräbert technology is being used in his company’s MCAD software Solidworks and that its DraftSight programme is built on top of Gräbert’s ARES technology.
Auto-drawings were announced at Dassault Systèmes’ 3D Experience World in February 2024, a technology that will be available to Solidworks users through the Dassault Systèmes 3D Experience cloud platform. The focus is very much on MCAD users, but obviously DraftSight (www.draftsight.com) will get the technology into the AutoCAD and AEC markets. Nobody is sure how much it will cost.
‘‘ We are going to have to improve what we model to get a better drawing. Right now, we have so much drawing work, that we all end up ‘faking stuff’ in drawings, just to get them out the door
Greg Schleusner, director of design technology, HOK ’’
Unlike the many startups arriving fresh to the AEC market, Gräbert has no venture funding; its long-time stability, and its income from licensing its DWG technology and selling ARES, mean it doesn’t need it. As such, it completely lacks the ‘big story’ pitch that attracts investors. Nor does it face shareholder pressure to deliver huge returns. It works with firms it likes and solves the problems its customers raise. So, if all this takes some time, Gräbert can just keep going.
Dr. Robert Gräbert will be speaking at AEC Magazine’s NXT DEV conference on 26 June 2024 at London’s Queen Elizabeth II Centre (www.nxtdev.build)
Meanwhile, Snaptrude (www.snaptrude.com) represents the next generation of cloud-based BIM tools and, while the company has been mostly working on the 3D part of the RVT file format, by licensing Gräbert’s DWG technology, it will get a cloud-capable 2D DWG engine, without having to reinvent the wheel.
Snaptrude has been experimenting with the auto-drawing capability, which I saw running auto-dimensions in January 2024. It’s still early days, but once Snaptrude has figured out its play, it’s likely the company will be able to quickly enable it for every user.
Trimble Drawing is also based on Gräbert ARES. As the company is part of this OEM family, I am pretty sure that Trimble will also be offering auto-drawings soon.
Powered by AI
Eventually, I suspect AI will feature in most auto-drawing solutions. To start with, there will be procedural AI, providing configurations and so on. From there, we will see more companies attempting to use AI to learn drawing layouts and perform automation functions, such as auto dimension, auto label, auto title
block, auto table, auto grid and so on.
To figure out where all this might lead, it’s useful to first look at Swapp. Based in Israel, Swapp’s initial application of AI to Revit promised us a miracle. What it proposed was to take users from 2D sketches to fully detailed BIM, with all the drawings involved, in the time it took them to go and have lunch.
Along the way, Swapp would learn from past projects and then automate Revit detail models – a proposition that favours standard, rectangular, predictable buildings, such as student blocks, hospitals and offices.
That was the message delivered by Swapp’s chief science officer Adi Shavit at AEC Magazine’s NXT BLD 2023 (www.nxtdev.build/speakers/adi-shavit). More recently, I caught up with Shavit to discuss auto-drawings. As he explained, the company has pivoted its software engineering focus to deliver the auto-drawing component as a bespoke solution, as well as expanding to Canada and the UK.
Shavit explained how, in talking with clients over the past year, the number one feature that they wanted to know more about was auto-drawings, and not so much about the magic at work in turning 2D sketches into fully detailed Revit models.
However, to understand how to make a drawing, Swapp needs the model data. “All of our processing is done on our system in the cloud. We have a plug-in for Revit, which does the export and the interoperability part, but nothing really happens on the desktop. It talks to our system and sends us the model, then we generate the construction documents from that data and bring it back into Revit drawings,” Shavit explained.
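Swapp’s plug-in and service APIs are not public, so the round trip Shavit describes can only be sketched in the abstract. In this hedged outline, the function and callable names are hypothetical stand-ins for the four stages he lists: export from Revit, upload, cloud generation, re-import.

```python
# Hedged sketch of Swapp's described round trip (all names hypothetical).
# Nothing substantive runs on the desktop except the export and re-import.

def generate_documents(export_model, upload, download, import_drawings):
    """Orchestrate: export from Revit, process in the cloud, re-import."""
    model = export_model()            # plug-in serialises the Revit model
    job_id = upload(model)            # cloud side generates the documents
    drawings = download(job_id)       # fetch the construction documents
    return import_drawings(drawings)  # back into Revit as drawing sheets

# Stub run demonstrating the flow with placeholder callables:
sheets = generate_documents(
    lambda: {"walls": 12},                        # pretend export
    lambda model: "job-001",                      # pretend upload
    lambda job_id: ["A-101 plan", "A-102 plan"],  # pretend generation
    lambda drawings: drawings,                    # pretend re-import
)
```

The structure makes the business point concrete: the heavy lifting lives behind the upload/download boundary, which is why Swapp can promise constant delivery times regardless of the client’s desktop setup.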
“We are working with architecture companies, and we essentially say, ‘You can think of Swapp like you outsource drawing production to a team in Belarus or in India, but our delivery times for drawings are constant.’ Meanwhile, our customers are saying that they are invoicing their customers sooner, by one month or two months. And for us, that’s the name of the game. We’re charging based on project size.”
When it comes to rules/template-based automation, Shavit told me that Swapp tackles the output standards problem from the opposite direction, as every office has its own standards, and even within firms, teams have different standards. Swapp treats the standardisation of output more broadly and at much higher resolution. There is no uniform Swapp standard. It is derived from past projects, to generate drawings that are compliant with the specific customer’s way of doing things. This can start with Swapp looking at a client’s past projects (both the BIM and the drawings), or it can be trained on projects currently in the works.
Shavit readily admitted that the documentation that is generated is not 100% complete. There are always special cases, special fire requirements for certain parts of the building and other external requirements. Documents will still need to be run past a senior architect for liability reasons. But he claimed the company is trying to perform between 80% and 95% of the most tedious, time-consuming work, enabling skilled staff to focus their efforts on other aspects of projects.
At its heart, Swapp is a consultative, bespoke auto-drawing solution that uses AI. Company executives are keen to stress that every client’s data and AI models are siloed, so one firm’s projects never feed into another firm’s results. From this, it sounds as if Swapp has clients already using its technology, even though it’s hard to tell what they do from the company’s website.
It will be interesting to see how Swapp’s output quality, speed and ‘completeness’ compares to Autodesk’s own in-Revit auto-drawing features (more on this ahead) when they eventually ship.
Adi Shavit will be speaking at AEC Magazine’s NXT DEV conference on 26 June 2024 at London’s Queen Elizabeth II Centre (www.nxtdev.build)
Established runners and riders
At Bentley Systems’ Year in Infrastructure 2023 event, chief technology officer Julien Moutte talked at some length about the possibilities of combining AI and digital twin technologies in the infrastructure sector. But in a side meeting, he also discussed Bentley’s interest in applying AI to design, introducing the concept of an AI ‘co-pilot’ and applying the technology to create automated drawings.
“What we’re looking at right now is how can we automate the production of drawings for site engineering first, because there’s a lot to be done there? And how can AI models be trained to look at past designs and past drawings and try to understand style, requirements, layout etcetera, and try to automate some of those drawings?” he said.
As mentioned earlier, and as will come up again in this article, the core issue is having all the information upfront to feed the auto-drawing system. If you have good, accurate models, the process is much easier. Where the input is low-quality models, the output will be low-quality drawings – and that represents a threat to the whole idea of automated drawings.
Said Moutte: “Producing drawings for any kind of design is a challenging task, because you need to have a lot more data first to learn each kind of drawing that you will need to produce for all the different disciplines. You need to understand each domain’s requirements, because a structural engineer, an architect, a mechanical engineer, an electrician have different needs from drawings. Also, to start with, you need to have a model with the necessary level of detail to produce those drawings.”
At the time of writing, we have no known delivery date for Bentley’s AI auto-drawings capabilities, but the company seems to have been working on it for quite a while. Bentley is watching emerging BIM and AI start-ups closely and is once more actively investigating the broader AEC market, looking beyond its infrastructure flagship products and its fascination with digital twins.
Julien Moutte will be speaking at AEC Magazine’s NXT BLD and NXT DEV conferences on 25 / 26 June 2024 at London’s Queen Elizabeth II Centre (www.nxtbld.com)
When it comes to Graphisoft (www.graphisoft.com), the developer of Archicad, the leading BIM brand within Munich-based Nemetschek, a lot remains to be seen. The parent company is a notorious bunch of secret squirrels when it comes to R&D. I have been told that the company is indeed working on automated drawings using AI, which would indicate a cloud-based approach. But there are also rumours of the company doing the inverse, taking 2D and making 3D models, which would be a neat trick and would open up possibilities when it comes to retrofitting older buildings, so long as surveys and drawings match.
Since Graphisoft is the number two global BIM tool, auto-drawings will be necessary as a defence mechanism for its Archicad user base. But then Nemetschek, with so many BIM riches, also has AllPlan and Vectorworks to worry about and ensure they also get that capability. Again, no timeline, just a few acorns.
At Autodesk, home of Revit, AutoCAD and LT, meanwhile, an arms race that centres on auto-drawings is probably not a primary concern when you have a huge number of desktop applications that need moving to the cloud.
In November 2023, the company added a strategy for automating documentation in the ‘Where Are We Going’ section of its architecture roadmap, which categorises work as ‘in progress’, ‘next’, ‘on radar’ and ‘launched’.
The aim is to get rid of repetitive and mundane documentation work, but this is predicated on work that needs to be done to improve the modelling capabilities of Revit, to create a higher-quality model to drive the auto-documentation process. This means increasing the level of detail and accuracy and delivering better organised metadata. In the ‘on radar’ category, Autodesk has listed Automated Documentation Workflows since July 2023.
The question is, where will this capability reside? Will it be found within Revit or, as is seemingly the way, will it be presented as a cloud service, maybe paid for with tokens?
If it’s not available ‘in the box’ as part of Revit, then customers might as well use any of the other automated drawing tools and services that are due to hit the market. While Revit development is in rear-guard mode, with no next generation planned and Autodesk Forma being developed on the cloud, auto-drawings need to happen within the app if Autodesk is to protect the user base from fragmenting and going elsewhere for drawings.
Will auto-drawings be linked to the model and will they update with changes, or will they have to be rerun in a transactional way? It also matters what kind of percentage of drawings get automated. Not enough, and Revit is an expensive tool to use for finishing off a drawing that could just as well be created in LT. And if there’s no associativity with the model, will other auto-drawings offerings do a better job of this?
Personally, I am unsure whether Autodesk will deliver this procedurally or with AI. Given that Autodesk AI is a major focus of corporate marketing at the company, I suspect it will use that technology. If you have to select an output from a configurator, it probably won’t be AI. If the technology learns from scans of past drawings, that would be AI and quite exciting, as it would learn your preferred layout style.
A new breed of disruptor
We are living through one of the more interesting times in software development for AEC. The tech stacks that firms have developed and the tools that professionals use are all in a state of play – even if, to those using them, it seems like absolutely nothing has yet happened.
Historically, we are used to seeing seismic shifts as hardware and software change, as seen in the shifts from minicomputers to PCs, and from UNIX to DOS to Windows.
In each of these platform changes, software companies have risen and fallen. The big bet over ten years ago was the cloud, but efforts made in MCAD by Autodesk (Fusion) and Onshape (former Solidworks team) to ‘kill’ market leader Solidworks haven’t worked out the way they were planned. Now, all the MCAD players offer, in some form or other, ‘cloud’ versions (and there is an AEC Magazine article coming on that soon), but desktop versions continue to rule.
While we undoubtedly need new code streams for BIM, as provided by Snaptrude, Arcol, Qonic, Hypar, Augmenta and so on, there are technologies that would once have been essential parts of a desktop application that will be entirely delivered as services on the cloud. And these services will aim to punch out entire sections of today’s AEC design-to-documentation workflows through automation, whether that be procedural or AI.
While this is interesting news for AEC businesses, and a potential issue for an industry wedded to charging billable hours, it’s also a big headache for AEC software developers, especially those incumbents that have built vast, multi-billion-dollar empires on selling tools through utilisation.

If drawings become increasingly automated, how many AutoCADs or AutoCAD LTs will firms actually need? If customers are no longer reliant on their BIM tools for documentation, why tie up an expensive BIM seat just to provide drawings (whether they’re automated or not)?

One vendor described this as ‘a moment before the storm’, where we may well be approaching peak seats sold in the industry, because everything developed from now on will be about automation and accelerating design-to-documentation, reducing AEC firms’ reliance on the utilisation costs that came with the current generation of BIM 1.0 tools.

‘‘ We are working with architecture companies, and we essentially say, ‘You can think of Swapp like you outsource drawing production to a team in Belarus or in India, but our delivery times for drawings are constant’
Adi Shavit, chief science officer, Swapp ’’

Taking Autodesk as an example, despite its portfolio containing well over 30 products, there are really only four commonly downloaded tools in AEC: AutoCAD, Revit, 3ds Max and Navisworks. Any software firm that manages to deliver good quality, reliable auto-drawings from a BIM model could seriously impact the need for so many seats of AutoCAD and Revit. We have already seen what AI can do for automatic rendering of models (see the amazing work of Tim Fu at www.timfu.com). That really only leaves Navisworks – and to me, this is a product that is half stuck on the desktop and half in the cloud, and it faces plenty of competition.

In short, auto-drawings represent a clear and present danger to the software status quo. It’s no wonder that both Autodesk and Nemetschek have R&D projects underway to develop and incorporate auto-drawing capabilities in their BIM tools, in the hope of protecting themselves against new market entrants.

However, while this may make current users happy, especially when it comes to productivity gains, there could well come a point where excellent auto-drawings are available for pennies/cents per drawing via a connected service, meaning expensive BIM seats for documentation make no sense. It may even be possible to stay in Rhino or to use more capable modelling tools than the previous generation of BIM tools.

Conclusion

Auto-drawings are not a matter of if, but when. It’s also a question of establishing which offerings are any good. There are so many firms now researching this area that auto-drawings are going to be available in pretty much every tool out there, both old and new. But there will doubtless be differences between them, in terms of capability, cost and how they are hosted.

Gräbert will be the main technology provider for its range of licensees: DS Solidworks, DraftSight, Snaptrude, Onshape, and Trimble.
Bentley, Nemetschek and Autodesk all appear to be adopting the AI path in their R&D for their own products, though I expect Bentley to offer a cloud service to all. Meanwhile, Swapp is taking more of a consultative approach, creating an auto-drawings system tailored to your firm’s particular needs. Plus, there are others, such as the developer of BricsCAD, which has also been working on something, but for now, company executives remain a bit quiet about its progress.
As far as timelines go, Gräbert has code with developers already, and Swapp has demonstrated some capabilities from its cloud AI and has customers now. However, I think we are looking at two to three years before we have a choice of mature, tried-and-tested auto-drawings technologies on the market, either in your BIM weapon of choice, as an online service, or as a bespoke customisation.
BIM 2.0’s killer feature is not going to be cloud; it’s going to be automation. Auto-drawings won’t just improve the speed of drawing production. They will ultimately mean fewer skilled people being tied up in documentation. If used well, this technology will also mean fewer licences of expensive CAD and BIM software needed in each firm. However, I am sure there will be a new business model from software developers eager to ensure that revenue isn’t lost in return for the productivity gains delivered. The good news for customers is that fierce competition means there will be plenty of options.
More auto-drawings at NXT BLD & NXT DEV
If you are interested in auto-drawings and the associated products currently in development, please join us at our NXT BLD and NXT DEV conferences at London’s Queen Elizabeth II Centre on 25 / 26 June 2024. You will have the opportunity to meet many of the people quoted in this article and take part in a discussion on what the industry wants from these new capabilities. www.nxtbld.com | www.nxtdev.build
An end-to-end visualization pipeline.
Introducing the Chaos Bridge designer-to-artist unified workflow.
Seamlessly transfer Enscape projects into V-Ray, saving time while maintaining creative and design decisions. What begins in CAD can now be fine-tuned with DCC tools and rendered in V-Ray.
Connecting early design and final production
Streamline compatibility and boost collaboration between architects, designers, and visualization experts throughout the entire archviz workflow.
Hardware limits erased
No hardware limitations. Render with CPU, GPU, and Hybrid CPU + GPU mode. Or effortlessly offload tasks to Chaos Cloud rendering.
The speed of real-time & the accuracy of photorealism
Make real-time design decisions with Enscape, switch to V-Ray for photorealistic images — and break down boundaries between CAD and DCC.
Unlock a world of assets
Make use of over 5,000 assets via the Chaos Cosmos Library. Customize the colors and materials of V-Ray 3D assets.
enscape3d.com chaos.com
Achieving sustainable building solutions at scale
By Marta Bouchard, AEC Sustainability Strategy, Autodesk
Fifteen years ago, I thought I’d achieved my professional dream – one that united my design education with my environmental values – providing analysis and consulting to AEC professionals to create high-performance buildings and reach their sustainability goals.
But working on one building project at a time wasn’t creating enough impact. It was satisfying but slow, and our industry needs to act fast, at scale, to mitigate climate change. To have an impact, we must transform our entire system and empower every team member to leverage the technology and tools that champion more sustainable design.
So, I joined Autodesk to work with product teams to scale sustainable outcomes for our AEC customers.
We see evidence of climate change in our planet’s ever-worsening floods, droughts, heat domes and blizzards. It threatens communities and is becoming part of the decision-making process for building design and renovations. The built environment is responsible for roughly 40 percent of the world’s annual greenhouse gas emissions. We must mitigate emissions of our existing built environment while also planning and building more efficient, climate-resilient structures for two billion more people in the next 30 years. To do so, we need to disrupt business as usual and design, renovate, and build smarter.
Leveraging and democratizing technology is critical to scaling sustainable solutions. Innovative workflows powered by cloud-connected tools that include AI enable us to design for sustainability from the first glimmer of a project idea all the way through the operation phase.
Autodesk Forma, for example, offers real-time AI-enabled analysis of how early-stage design decisions will impact a building’s performance. Its interoperability and extensibility allow you to connect to detailed design and analysis tools downstream, keeping key design data synced along the way. We are not far from a future where every project member who touches a digital tool plays a role in ensuring sustainable outcomes, driven by strategies such as resilient design planning, decarbonized design and operational efficiency.

Making the most of what already exists

For a modern take on adaptive reuse with Autodesk Forma we can look to the new 311 Third Transformative, an office renovation project by and for Lake|Flato, a San Antonio-based architecture firm. Forty years ago, David Lake and Ted Flato founded their architecture firm in a 1920 building originally designed as a car dealership. By 2019, Lake|Flato had taken over the entire building and outgrown it. Rather than demolish it, or build new, Lake|Flato took its own advice: reduce carbon emissions by reusing what exists.

‘‘ We’ve reached a time when our customers are compelled to create more sustainable projects, and our governments are demanding they curb climate change ’’
The embodied carbon associated with a renovation and reuse project is typically 50% to 75% less than new construction. The International Energy Agency tells us that to reach net zero emissions by 2050, not only will end-of-life buildings need to be replaced, but nearly 20% of existing building stock will need to be retrofitted and ready to use a green energy supply.
“You can always find ways to celebrate historic buildings and make them into something really beautiful and unique,” said Lake|Flato Design Performance Manager Kate Sector. “It felt really important for us to be able to demonstrate that to our clients.”
Just looking at the Lake|Flato project inspires delight. Sustainable design principles underpinned the renovation of both the building and site, realized by applying analysis tools to optimize the design process. The project preserves a piece of architectural history while embracing connections to the outdoors and responding to workplace trends of hybrid work. What was once a parking garage is now a four-season outdoor garden courtyard and collaboration space, with deconstructed wood from the garage reused throughout the design.
Lake|Flato used a laser scan to create a precise 3D model of the existing building. That model helped calculate how many linear feet of wood could be reused. And when historic preservation requirements meant some additional windows could not be included, Lake|Flato loaded the model into a mixed-reality headset, proving to its team that there would still be plenty of daylight and views with many of the original windows.
Tools like Autodesk Forma helped the designers analyse the occupant thermal comfort around the project from the earliest stages of the design process. Its predictive analysis features can simulate conditions and provide design insights into the dynamics between a building and its surroundings in near real-time, while designing.
Forma helps designers adapt to environmental factors such as daylight potential, wind patterns, and microclimate effects. Using Forma to analyse their office renovation, Lake|Flato was able to create a functional and comfortable outdoor courtyard with shade structures, ceiling fans, and trees. Employees now meet and lunch outside even when temperatures hit 100°F during Texas summers.
As the design process becomes more detailed, Revit enables further sustainable design evaluation, including operational energy assessment and predictive analytics that indicate the embodied and operational carbon impacts of design decisions. Built into Revit are EnergyPlus and OpenStudio from the US Department of Energy (DOE) and National Renewable Energy Laboratory (NREL), both of which assist with building energy modelling and analysis. Those capabilities and the third-party Revit plug-in Tally helped with the project’s ILFI Zero Carbon certification goal.
Sustainability gains traction
For years, sustainability in the built environment inched forward with a series of carrots and sticks. Building owners and investors discovered incentives to design and build more sustainably. Their reasons ranged from saving money, to caring about the environment, to preserving a legacy for future generations, to earning a building label like LEED. Governments instituted ever-tightening regulations for the same reasons. These incentives and mandates kept building on each other, creating a momentum toward sustainability that’s becoming our new normal.
Sponsored content

We’ve reached a time when our customers are compelled to create more sustainable projects, and our governments are demanding they curb climate change. I believe a better future is in reach. For example, if all the G7 countries and China adopted sustainable strategies, we could cut greenhouse gas emissions from the materials used in residential buildings by 80 percent in 2050.
Technology gives us the tools to achieve sustainable design and construction at scale. It gives us the ability to predict, simulate, and benefit from data-driven insights. It helps us compare and toss out bad ideas and assess trade-offs between good ideas. It lets us harness the expertise of every person on our project team, fostering an integrated design process and enabling everyone to work with shared data and ideas. It lets us start with the end in mind – the project’s environmental impact. Through technology-powered collaboration, we can deliver better outcomes for our planet and future generations.
(Left) 311 Third Transformation, San Antonio, Texas, Lake|Flato.
(Below) Lake|Flato used a laser scan to create a precise 3D model of the existing building, then used Autodesk Forma and Revit for design and analysis.
(Above) Sustainable architecture firm Lake|Flato’s adaptive reuse project transformed a 100-year-old building into new, design-forward headquarters. Image Courtesy Robert G. Gomez.
Image Courtesy Robert G. Gomez.
NXT BLD 2024
Learn more about AI, automation, digital fabrication, BIM 2.0, data specifications, open source software, open formats, and lots, lots more
Throughout the year, the process of finding and talking to innovators in practice, together with software developers and investors gives us a deep insight into the current state of design and construction and what kind of trends are going to evolve into workflow-changing technologies. AEC Magazine’s NXT BLD is the event where you can expect to be invigorated, inspired and informed by people and firms who are pushing the boundaries.
This year, the main themes of our dual stream conference focus on AI, automation, large-scale digital fabrication, BIM 2.0, data specifications, open source software, open formats, mixed reality, rapid reality modelling, the convergence of infrastructure and GIS and accelerated design.
The speaker line-up can be found at www.nxtbld.com and will be added to over the coming weeks, together with a full programme. In the meantime, here are some highlights.
Dale Sinclair, Head of Innovation, WSP
At AECOM, Sinclair developed an interest in his architectural team being able to ‘speak’ the same language as the offsite fabricators they were working with. This led to an evaluation of Autodesk Inventor and digital fabrication technology. In his new role at WSP, he has delivered a process for design teams to produce buildings using software primarily aimed at manufacturers. Sinclair will talk us through his unique methodology for driving project delivery using offsite manufacturing.
Chantal Matar, architect and artist
Matar uses cutting-edge technology for generative design. Working with AI tools like Midjourney and neural radiance fields (NeRFs), she explores future cities and new possibilities. Her inspirational work has been driven by technology but still retains beautiful organic, kinetic forms that look sculpted and other-worldly.
Bruce Bell, founder, Facit Homes
For years, Bell has been profitably designing and digitally fabricating individual residences for his clients using Revit and on-site digital fabrication. With the offsite fabrication world in turmoil and construction firms hitting an all-time high of insolvencies, Bell has been devising a way to scale up his technology to build housing estates. Learn more about this journey on page 46.
Greg Schleusner, director of design technology, innovation, HOK
Schleusner returns to provide an update on the development of an independent data backbone for the AEC industry, created for projects using open standards. We believe there will be some demos. If we’re lucky, and he can find more cat emojis, he might also enlighten us on his drive to get the industry to adopt automated drawings – a potentially huge productivity benefit for everyone.
Marc Goldman, AEC industry leader, Esri
GIS, BIM, AI, reality capture, digital twins and the metaverse are colliding in digital space. While GIS data is converging with BIM project data, additional technology layers are in development to extract as much real-world information as possible from video, still images, professional survey photos, and LiDAR to build digital twins of reality. The metaverse is coming!
Antonio Gonzalez Vegas
Vegas was one of the driving forces behind IFC.js, an atomised and streamable IFC technology through which BIM data can be openly shared at the component level. He has since gone on to establish ‘That Open Company’, which is attempting to build a completely open source, open format, cloud-based BIM modeller. From what we have seen so far, it looks extremely quick and has a beautiful interface. We hope the first build is completed in time for the show. No pressure!
NXT BLD 2024
Tuesday 25 June 2024
Queen Elizabeth II Centre
Westminster, London www.nxtbld.com
28 www.AECmag.com March / April 2024
Chantal Matar will explain how she uses generative design and AI tools like Midjourney and neural radiance fields (NeRFs) to explore future cities and new possibilities in architecture
NXT DEV 2024
NXT BLD is followed by NXT DEV (Next Development), an opportunity for AEC firms, software developers and financiers to help shape the future of AEC software
This is NXT DEV’s second year. With so many new software startups focusing on the AEC industry, finally looking to deliver next-generation tools, we decided to create a day where we could help shape the direction of these formative tools. The event brings together AEC design directors, practice managers and internal heads of research from practice, together with software developers, venture capitalists, and enterprise investors.
It’s an opportunity for the industry to learn from and talk with innovative software developers, and see new software products at the formative stages of their development. It’s also a chance to tell developers and those who fund the next generation of tools where the current holes and bottlenecks are in today’s AEC tech stacks.
In the past we have seen tsunamis of software development in one particular area, like conceptual design, producing scores of different solutions, while failing to address broader issues which need solving. By giving the software industry an insight into the areas of opportunity, we hope to expand and broaden the range of tools that get funded across modern digital workflows.
The day is arranged around topic areas, with presentations and group discussion on key technologies that are going to impact design, project management, collaboration and digital fabrication. In addition to the bits and bytes, NXT DEV delves into the business models behind software procurement by AEC firms, as well as how start-up companies get funding.
NXT DEV is a unique global event. The success of last year’s conference has generated huge interest from key players in the industry, including leading software firms, venture capital firms, and industry analysts who in the last year have been turned on to the opportunities in next-generation AEC tools.
NXT DEV offers a great opportunity to meet the key people who are defining next-generation workflows and products in the AEC space. For software firms, NXT DEV provides industry exposure and the opportunity to meet the people responsible for strategy, and to be seen by those in the mergers and acquisitions market.
What to expect
Below we highlight some of the key topics up for discussion at NXT DEV 2024
Artificial Intelligence (AI)
No self-respecting technology conference can be without a dose of artificial intelligence. The reality is we have some way to go before AI makes a true impact beyond ChatGPT and image creation. We will have developers and AEC firms discuss the issues that arise around AI in practice.
Openness
This is something that’s on the lips of nearly every software developer and major practice. The AEC industry needs data to flow like water. We need more capable open standards to ensure AEC information always belongs to the customer. As the world moves from files to databases, and from desktop to cloud, we can’t get locked into proprietary formats again, which undermine the collaborative nature of the industry.
Automation
Big changes are coming to workflows as automation tools, whether procedural or powered by AI, start to deliver big benefits in productivity by removing humdrum tasks. For example, on page 18 we cover the imminent rise of automatic drawings and documentation. How will this change workflows, and what will be the impact on software, people and the bottom line?
Pricing, licensing and business models
The industry has moved from perpetual licensing, to perpetual licensing with a maintenance fee, to subscription. What’s next? This is the opportunity for the consumers of software to tell the software industry what business models they like and dislike, and to explain what works and what doesn’t. As automation comes to workflows, providing big productivity gains, how should software developers charge their customers, when software priced on utilisation becomes meaningless?
GIS / reality capture and meta world
While we have seen the worlds of GIS and BIM combine, AI and rapid reality capture technologies, such as SLAM and photogrammetry, look set to create a digital twin or metaverse of reality in the not too distant future.
With Google and Cesium now automatically producing city models, what are the opportunities for this meta reality in the professional AEC space?
NXT DEV will explore a number of other hot topics and there will also be a tonne of new start-up companies to see.
NXT DEV 2024
Wednesday 26 June 2024
Queen Elizabeth II Centre Westminster, London
www.nxtdev.build
Get your tickets now Tickets for NXT BLD and NXT DEV are now available. They include full access to the conference and exhibition, refreshments, lunch and drinks at the networking reception. Discounted tickets for both events are also available www.tinyurl.com/BLD-DEV-24
Enscape: building performance analysis
Chaos recently gave a glimpse into the future of its real-time visualisation software Enscape, with a new module designed to give instant visual feedback on building performance. Greg Corke caught up with the company to find out more
Enscape by Chaos has become a firm favourite with many architectural practices. The real time viz tool transformed architectural visualisation by making it an integral part of the design process. It allowed architects to go from BIM viewport to visually-rich real time environment – on desktop or in VR – at the push of a button. Any changes made to geometry, materials, or lighting inside Revit, Archicad, Vectorworks, Rhino or SketchUp are reflected in Enscape almost instantly.
By getting highly realistic visual feedback in real time, architects can make informed decisions at every stage of the design process. It can help give answers to many questions. What will the façade look like if clad in aluminium? Will the glazing provide enough light for the interior? Would the building benefit from a larger atrium?
Chaos is now adding a new dimension to Enscape with a view to ‘democratising sustainable design’. It is applying the same principles of real time visual feedback to building performance analysis, to dramatically reduce the time it takes for architects to optimise energy efficiency in their designs.
Enscape’s forthcoming Building Performance Module will allow architects to visualise things like daylight, energy usage, comfort analysis and more, as they are designing. The idea is to help architects see the impact of every design decision from the very early stages.
“Enscape revolutionised the use of real-time visualisation,” said Kam Star, chief product officer at Chaos. “We want to do the same with building performance, with energy use, with thermal comfort, with daylight analysis – eventually carbon footprint and HVAC – all of those things – in real time. So, as you change the size of the window on the south facing side, you [instantly] see the impact.”
As a visualisation company, Chaos doesn’t have in-house expertise in building performance analysis, so it’s partnering with one of the industry’s most respected software providers, IES. Chaos describes the analysis engine being used as ‘trusted and certified’.
To make building performance analysis widely accessible and ‘architect-friendly’, ease of use is critical, as Petr Mitev, vice president, solutions for designers at Chaos, explains, “We’re working very closely with IES to basically strip out as much as possible from the necessary inputs, in order to still get something actionable and valuable to the architect, without them having to take on the complexity that a building engineer would take on.”
“We want people to be able to trust what they’re seeing,” adds Star. “Of course, we don’t expect you to be able to get the certification straight out of Enscape – there will be a further step – but you will get to within striking distance, and certainly have that real time feedback based on your decisions.”
A single source of truth
Enscape is different to many other visualisation tools in that it maintains a ‘single source of truth’ within the CAD or BIM tool. Materials, geometry, and lighting are all controlled in Revit, Archicad and others, while Enscape updates automatically.
Chaos is taking exactly the same approach with the Building Performance Module. “We want to make sure that we’re not creating or separating these sources of information,” says Mitev.
He admits that in some cases an additional layer of detail may need to be added within Enscape. “Let’s say you’re working on a renovation, what year was it built in? Obviously, Revit doesn’t know, so then we have to put that in on our side. But, for the most part, we want to maintain a single source of truth for our users.”
The timing is certainly perfect, as Star explains, “We know that there are changes coming in legislation across various parts of the world that will make it [building performance analysis] a statutory requirement.
“We know this will become something that you just have to have – it’s no longer a nice to have – and we want people to be able to make better design decisions based on those.”
A simulated future?
Enscape’s Building Performance Module marks an interesting new direction for a product that redefined architectural visualisation by putting simple-to-use tools into the hands of architects. The question is, can it do the same for building performance analysis?
Of course, the AEC industry already has a plethora of tools that can be used for this task. These tend to fit into one of two camps – standalone concept design tools that offer real time building performance feedback on simple models, or more powerful applications that either work inside BIM authoring tools, offer a live link to them, or get data out via plug-ins or the Green Building XML (gbXML) schema. cove.tools, Digital Blue Foam, IES Virtual Environment (VE), Pollination and Sefaira are some examples.
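For readers unfamiliar with it, gbXML is simply structured XML that describes building geometry, spaces and properties, which is why it works as a neutral hand-off between BIM authoring tools and analysis engines. The sketch below is illustrative only – the elements shown are a simplified subset and the result is not a complete, schema-valid gbXML document:

```python
import xml.etree.ElementTree as ET

# Build a minimal, gbXML-style document: a campus containing one
# building with one named space. Real exports carry full surface
# geometry, constructions and schedules on top of this skeleton.
root = ET.Element("gbXML", {"xmlns": "http://www.gbxml.org/schema",
                            "temperatureUnit": "C", "lengthUnit": "Meters"})
campus = ET.SubElement(root, "Campus", {"id": "campus-1"})
building = ET.SubElement(campus, "Building", {"id": "bldg-1",
                                              "buildingType": "Office"})
space = ET.SubElement(building, "Space", {"id": "space-1"})
ET.SubElement(space, "Name").text = "Open plan office"
ET.SubElement(space, "Area").text = "450.0"  # floor area, square metres

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

An analysis tool on the receiving end parses this tree back into spaces and surfaces, which is the file-based workflow the live-linked approach described above aims to bypass.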
As far as we know, there’s nothing else on the market that looks to connect detailed BIM models with building performance analysis in such a seamless way, as Chaos describes. When changes are made in Revit, Archicad and others, Enscape will dynamically update, in exactly the same way it does for visualisation. The idea is that architects can get instant feedback at all stages of the design process. And because it uses the BIM model as a single source of truth, it should provide a flexible pathway into more detailed analysis.
Of course there are still many unknowns about the forthcoming product. How useful will the analysis data be? How easy will it be for architects to interpret? How much will it cost? Will it provide a seamless workflow into more detailed analysis?
This new fork in the road doesn’t have to end with building performance analysis. We asked Chaos if, in future, Enscape could handle other forms of analysis, such as structural, wind or acoustic. In other words, could it become a multipurpose window into a BIM model, giving instant visual feedback on many different aspects of a design?
Mitev replied: “Our strategy has always been to be a tool that enables design decisions, not design artefacts. Without committing to something concrete, we really want to look at things that help people make better decisions faster.
“And if, down the road, what we hear from the user community means structural analysis or, I don’t know, whatever else it might be, then that’s something we need to consider very seriously if it’s critical to the design decision-making process.”
Enscape’s Building Performance Module is due to enter beta/preview mode in the second half of 2024.
■ www.enscape3d.com
Enscape & V-Ray: a collaborative future
Greg Corke caught up with Chaos to learn more about its plans for enhancing workflows between Enscape and V-Ray, embracing real time collaboration, and giving architects powerful new story-telling tools through Project Eclipse.
Two years ago, Chaos, the developer of V-Ray, merged with Enscape to create a single entity to cover all aspects of visualisation from real time to photorealism.
V-Ray and Enscape remain entirely separate products, with extremely passionate sets of users. The message is still clear: Enscape is for design decisions while V-Ray is for the highest tier of visual quality.
One of the key aims for the new look Chaos was to bring both applications closer together, to deliver a seamless end-to-end workflow for architectural visualisation.
Autumn 2022 delivered the first fruits of this integration, with a bridge between Enscape and V-Ray. The idea is that users can work in the best tool for the job without having to recreate each asset from scratch.
Within a single BIM environment, users could start in Enscape and then render in V-Ray at the click of a button with materials mapped accordingly. Then at the tail end of 2023, a V-Ray Scene (.vrscene) could be exported from Enscape so a viz artist could instantly pick up in a tool like 3ds Max, where an architect left off in Revit.
Petr Mitev, vice president, solutions for designers at Chaos, admits that this is just the beginning. The workflow can still be improved in many ways. One way is to add bi-directionality, as he explains, “If you’re iterating throughout the design process, it could be that the 3D artist makes more progress and then the architect has to then reconcile that.”
Drawing from his personal experience as an architect, Mitev explains how a viz artist might add additional lights in V-Ray because interior renderings benefit greatly from being well lit. The problem, he says, is that once you’ve shown the client that rendered space and they like it, the architect now has to bring that same lighting back into Enscape.
Another way to improve the workflow is through what Mitev describes as ‘quality of life’ improvements. For example, sometimes a V-Ray Scene export is too large, or a certain asset or material doesn’t translate correctly. “We want to improve the handling of specific Enscape materials,” he says. “If you’ve ever seen our grass or our water, sometimes when users make the switch to V-Ray in that connection, they’re kind of surprised that it looks different.”
While better translation will help in the short term, the longer-term goal is to serve up all materials – for Enscape and V-Ray – through the Chaos Cosmos content library.
“We want to get into a pattern where a certain model or a certain texture, whatever it might be, has different levels of detail,” says Mitev. “So, if you’re in Enscape, and you have this material, which is suitable for Enscape [real time] performance, but the moment you want to take it to V-Ray, then Cosmos can deliver a much higher resolution thing [asset].”
Embracing collaboration
Like most real time visualisation tools, Enscape is well known for being GPU hungry. To deliver a fluid real time experience it requires a powerful graphics card, especially at high resolutions and with enhanced realism, including ray tracing.
Of course, not everyone has access to a high-end GPU, so this limits the reach of Enscape’s real-time rendering, both within an architectural practice and with other stakeholders, including clients.
Enscape is addressing this on two fronts: first, by optimising rendering so it is accessible on lower-powered devices; and second, by using pixel streaming, where the graphics processing is done on a local server or in the Chaos Cloud using high-performance GPUs for the best visual quality and interactive experience.
Of course, driving this development is a greater need for collaboration. As Kam Star, chief product officer at Chaos, explains, it’s becoming increasingly important to experience the scene in real time with others.
“The real value here is collaborative design review, and to be able to do it in a way that brings everybody together, whether you’re the client or the engineer or the architect, or even construction personnel on site,” he says.
“Being able to mark up, being able to handhold somebody, walk them through something, capture comments based on a particular location or a particular view and be able to converse around that.”
Chaos currently has some collaborative tools within the Chaos software suite. Chaos Cloud Collaboration, which comes from the V-Ray side of the business, allows teams to upload and mark up images and panoramas. As Star explains, that will provide the foundation for real time collaboration in Enscape.
“[Chaos Cloud Collaboration] already has all of the kind of tools that you need in order to set who the team is, what kind of access they’ve got, tracking, all those kinds of things. We’re leveraging that in our streaming solution, but also within Enscape over the course of the year.”
Mitev adds that the next stage of development is to hook up the collaborative annotation feature to Enscape and make it sync bidirectionally to Chaos Cloud. “Chaos Cloud can really become our collaborative platform, whether it’s 2D or whether it’s 3D, it doesn’t matter, but that should be the single point of truth,” he says.
In recent years, VR has proven to be a very powerful platform for design review. We asked Chaos if its plans for collaboration in Enscape will extend to immersive environments. “That’s the direction,” says Star. “With VR we recognise that some of the interfaces will be slightly different to what we’ve got right now, because you need them to be different. But the model we’re building, internally, we call it build once, use everywhere.
“If the customer has created something, they should be able to experience it on any device and be able to interact with it, without having to worry too much about, ‘oh no, I’m on Windows or Mac now. Oh no, I
only have my Android tablet today or Apple Vision Pro’. It’s just about being able to use and interact with things without [hardware] getting in the way.”
Story time
Chaos is working on a new ‘story-telling’ solution, codenamed Project Eclipse, that will allow architects and visualisers to ‘rapidly enhance’ scenes started in Enscape or V-Ray.
The standalone tool is designed to work seamlessly with exports from all Chaos rendering integrations and is fully compatible with the Chaos Cosmos content library.
Star describes Project Eclipse as a fully ray traced 3D assembly and animation application. “It’s aimed at architectural visualisers who want an easy tool to create really beautiful photorealistic animations,” he says, adding that you can enhance scenes with crowds, vehicles, traffic, and custom behaviours.
The software can also be used to present multiple design options to clients. “Here’s this option, and here’s that option, and let’s walk around it in real time,” he says. “Can I see it if the wall is here or there? Can I see it if the kitchen is moved? Can I see it at night? Can I see it with this colour, or can I see it with this material? You set up different options. You can set up the different views and you can then transition between them.”
Star points out that Project Eclipse supports workflows unlike those of competing products, insofar as it preserves the visualisation effort done in the 3D creation tool. “It combines the exports from anywhere Enscape or V-Ray resides, preserving the work done with them – so what you see in Enscape or V-Ray, where you create, is what you get in Eclipse,” he says. “It gives you the ability to assemble scenes from multiple tools.” He adds that it will soon support neutral exchange formats such as OBJ, FBX, DAE and 3DS, and eventually USD and MaterialX.
One of the aims of Project Eclipse is to enable architects to do the type of work that previously would have to be done by a visualisation specialist in a DCC application like Autodesk 3ds Max or a real time tool like Unreal Engine.
“You don’t need to redo your materials. You don’t need to break the link between, let’s say, Revit or SketchUp or Rhino. You continue to work in the way that you want, and these things are kept in sync. It’s really just simplifying the workflow. You don’t have to go and redo anything if you did it once, you create an option for ‘A’, then an option for ‘B’,” he says.
Project Eclipse is an internal code name. The as-yet unnamed product will launch later this year.
■ www.enscape3d.com
Project Eclipse will allow architects and visualisers to ‘rapidly enhance’ scenes started in Enscape or V-Ray
Looq AI: smart reality capture
A new integrated reality capture solution uses computer vision, AI and a proprietary handheld camera with GPS, to capture infrastructure at scale
Over the last ten years the AEC industry has been inundated with a plethora of reality capture technologies, from affordable laser scanners and SLAM devices to aerial and terrestrial photogrammetry – even iPhones with LiDAR.
Last month saw the introduction of a novel device and platform from Looq AI, a startup that claims to offer a ‘one-stop solution’ for surveyors, engineers, contractors, and asset owners to capture infrastructure, in minutes, with ‘survey-grade accuracy’.
Looq’s initial focus is on the utility sector, but there are also plans to expand into other industries. By making the technology incredibly easy to use, and fast at capturing real world assets, the aim is to make survey ‘integrated and continuous’ across engineering and construction.
The Looq platform is powered by what the company describes as a novel and fundamental computer vision and AI technology. The device is a proprietary handheld ‘Q’ camera, which combines four high-resolution cameras with ‘survey-grade’ GPS and an AI processor.
Data from the camera is uploaded to the cloud and automatically processed with Looq’s AI-based image-to-model software to create a high precision, sub-centimetre, and geo-referenced intelligent 3D ‘digital twin’, along with orthographic maps and AI semantic information.
Through the platform, project teams can then interact with the ‘digital twins’ and AI-generated information to complete a multitude of engineering, surveying, analysis, and coordination tasks. There’s a web-based application for visualisation, analysis, and collaboration. Data can also be exported to a variety of formats.
Greg Corke caught up with Dominique Meyer, PhD., CEO at Looq AI, to delve deeper into the technology.
Greg Corke: One of the key things about Looq that piqued my interest is that your AI technology includes ‘algorithmic processing that extracts critical assets, semantic and geometric features’. We’d love to understand more about that.
Dominique Meyer: At Looq, we’ve built a fundamental AI and computer vision technology that allows us to very quickly compute geometrically correct geometry with photometric accuracy just from images. The big question is, how do you guarantee that geometric accuracy and quality? And that is exactly where our AI comes in. We have intelligence in terms of what the scene sees, what those objects are. That allows us to extract different feature classes, feature assets, and feature intelligences to both guarantee that geometric accuracy and enable downstream applications.
Greg Corke: So, you’re recognising features, you’re recognising that a telegraph pole is a telegraph pole, and by recognising the objects, that can help define how your AI processes the photogrammetry, to try and extract the right kind of object data from it?
Dominique Meyer: Exactly. It’s a very unique way of getting us to that capability. It goes beyond just saying, ‘hey, there’s a pole here’. It goes as far as saying, ‘hey, this pole now has a set of features that may indicate that its material is ‘x’, or that its state of structural health is indicative based on a number of heuristics that are AI deterministic. Is it geometric tilting, is it colourisation from, for example, rotting or rusting?’ All these characteristics can feed into what we call asset intelligence, which we’re very excited by.
Greg Corke: Can you give more examples of the objects that it can recognise, or is trying to recognise at the moment?
Dominique Meyer: Today, we support a multitude of object classes. In standard urban and rural environments, those include streetlights, power poles, power lines, different types of curbs, ground vegetation, buildings, assets, any kinds of vehicles and people (which we automatically remove), all the way into very specific engineering use cases.
We’re very focused on supporting distribution networks, EV charging assets and utility environments, and that’s where we’re looking at conductors, insulators, fuses and all those things. And those provide real value to the engineering companies that we support.
Greg Corke: Because your system is aware of the geometric shapes of these objects, when it’s interpreting data for something like a curb, would it know that there’s a straight line and maybe a curved edge, so it would process the data in that way, rather than generating a complex mesh?
Dominique Meyer: We apply a level of intelligence in how we do statistical filtering and statistical robustness in our algorithms. And those go beyond just ‘object fitting’, beyond just saying, ‘hey, you know, we’re going to make up that there is a curb here’. We derive every single bit of our output data from intelligence in those algorithms, using very fundamental statistical methods and AI to get there.
Our output is a combination of different assets. A point cloud is one of them; statistical and object-level models are another. But we really want to make sure that the different use cases are supported.
I think the curb you touched on is a very important one, specifically for any topographic surveys of urban environments that are being designed, because certain environments you’re looking at need to be ADA compliant [US standards for accessible design], so tolerances on a millimetre scale matter: accuracy, geometric consistency, top of curb, bottom of curb, back of curb. All those things are important to the survey world, and we care about upholding those accuracies across all of our asset classes.
Greg Corke: Continuing the curb example, would the object comprise planar surfaces defined by four points or would it be a highly complex mesh, which would essentially represent the same thing, but obviously with a much more detailed data set?
Dominique Meyer: I’m going to say we support both. And the reason for that is because certain applications require fine levels of detail, and that means that if you’re going from the pour of a concrete edge of a curb into a plate, sometimes there’s a four- or five-millimetre jump. That cannot be modelled as a planar surface because you would not meet those geometric accuracies. But there are many, many use cases where you don’t need millimetre accuracy. And that’s where we do support a lower-level interpolation as well, that’s really enabling certain downstream applications, to make sure that there is integration into, for example, Civil 3D, and other pieces of software that are absolutely necessary in the industry.
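Meyer’s choice between a simple planar fit and a dense mesh can be made concrete with a toy example. The sketch below is our illustration, not Looq’s algorithm: it least-squares fits a plane z = ax + by + c to a synthetic, noisy ‘top of curb’ patch – the kind of statistical fitting that lets three coefficients stand in for thousands of points when millimetre detail isn’t needed:

```python
import random

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points,
    solving the 3x3 normal equations by Gaussian elimination."""
    m = [[0.0] * 3 for _ in range(3)]  # accumulates A^T A
    v = [0.0] * 3                      # accumulates A^T z
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for j in range(col, 3):
                m[r][j] -= f * m[col][j]
            v[r] -= f * v[col]
    # Back substitution
    coeffs = [0.0] * 3
    for r in (2, 1, 0):
        s = v[r] - sum(m[r][j] * coeffs[j] for j in range(r + 1, 3))
        coeffs[r] = s / m[r][r]
    return coeffs

# Synthetic curb-top patch: gently tilted plane at 150 mm height,
# plus 2 mm Gaussian measurement noise
random.seed(1)
truth = (0.02, -0.01, 0.15)
pts = []
for _ in range(200):
    x, y = random.uniform(0.0, 2.0), random.uniform(0.0, 0.3)
    z = truth[0] * x + truth[1] * y + truth[2] + random.gauss(0.0, 0.002)
    pts.append((x, y, z))
a, b, c = fit_plane(pts)
print(f"fitted plane: a={a:.4f} b={b:.4f} c={c:.4f}")
```

The fitted coefficients land within a fraction of a millimetre of the true plane, which is why a planar model suffices for many deliverables – while a genuine four- or five-millimetre step, as Meyer notes, still demands the dense representation.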
Dominique Meyer: Exactly. One workflow within our platform is to create topographic survey reports. Another is exporting and integrating it into a 3D surface, for example, into some of the Autodesk tools, and that’s where we automatically do some down sampling to make sure we support integration and the use cases where you don’t need that full high density.
Greg Corke: In general, how does accuracy compare to things like laser scanning, iPhone LiDAR, handheld SLAM and other photogrammetry technologies?
Dominique Meyer: Accuracy is a very important one in our industry. We care about supporting relative accuracy, so point to point distance measurements, and we care about supporting global accuracies.
‘‘ We have developed a way to combine GPS accuracy, the GPS localisation, with incredibly robust geometric priors of our computer vision algorithm to get those accuracies that have previously just not been able to be attained at large scales ’’
Today, we see a lot of players working on increasing that resolution and capability of handling big data, and that’s one thing I think Looq is focused on, how do we foster that adoption towards Big Data, while also making sure [we enable] certain workflows that don’t necessarily support, terabytes and terabytes of geometric data?
Greg Corke: So, when the data is in your platform and analysed, you would always store the highest resolution model in there, and then you could stream out different level of detail models, as required, for each use case.
In our case, taking the photogrammetry and computer vision-only route, without supporting lasers in our core operating pipeline, has meant that we had to rethink how we guarantee those accuracies at scale.
Much of your audience may be familiar with the difficulties of large-scale photogrammetry and alignments. Often you see doubling, often you see noise characteristics. All of those have been addressed fundamentally at Looq and that’s how we guarantee that the data quality we put out is really up there.
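The automatic down-sampling Meyer mentions for lower-density exports can be pictured with a standard voxel-grid filter. This is a minimal, generic sketch of that technique, not Looq's actual (proprietary) pipeline:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Collapse a dense point cloud to one representative point per voxel.

    points: (N, 3) array of XYZ coordinates; voxel_size in the same units.
    Returns the centroid of the points falling in each occupied voxel.
    """
    # Assign every point to an integer voxel index
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average them
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True
    )
    inverse = inverse.reshape(-1)          # guard against shape differences
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

Run at, say, a 5 cm voxel size, this turns millions of raw points into a far lighter surface-ready set of the kind a Civil 3D export needs.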
If you look at iPhone LiDAR systems, they’ve made huge advancements.
I think the cell phone industry has pushed the frontiers on what’s possible in terms of 2D and 3D sensing or imaging at a price point that is incredible. But you also have to remember that with a phone-based camera and LiDAR system you will have a limited field of view. And with Looq we are focusing on capturing a lot of information quickly.
If you look at an EV charging site that has been built, and you’re doing a pass down the road and come back the other side because of traffic danger or risk, within two, three minutes, you’ve got it captured.
With an iPhone you’ve got to capture around, you’ve got to scan up and down, go into different areas, and that creates a burden on the capturer. You need a very rigorous technique, and training for your field technicians to be able to get that. At Looq, we’re focused on making data capture simple while guaranteeing the data quality and integrity.
Going a little bit into the other LiDAR SLAM systems, this is where we’ve sometimes seen issues with global drift, and that’s an inherently difficult thing to solve.
When you’re aligning point clouds over distances, or continuously sampling with those LiDAR systems, you’ve got to have some geometric or motion estimation that allows you to align that data.
With Looq’s computer vision algorithms, we’re able to use incredibly robust geometric priors and matching constraints that allow us to reconstruct a space so that it is locally and globally accurate, and that’s how we really hit those higher accuracies in the system and carry the trust of the industry.
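Meyer's "geometric priors and matching constraints" are proprietary, but the textbook building block behind registering one scan against another is least-squares rigid alignment (the Kabsch algorithm). A minimal sketch, assuming paired corresponding points are already known:

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid transform (Kabsch algorithm) mapping source
    points onto corresponding target points. Both arrays are (N, 3),
    with rows paired by correspondence."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t                                     # target ≈ R @ source + t
```

Full SLAM and photogrammetry pipelines wrap this kind of step in loop closure and global optimisation; the drift Meyer describes is what accumulates when those per-pair alignments are chained without strong priors.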
Greg Corke: Can you explain more about your capture device? You’ve got a GPS sensor and four cameras, but where are they to give you a much wider field of view?
Dominique Meyer: We have four cameras: one on the front, one on top, and one each on the left and right. That gives us near-360 coverage, ensuring that you can see down below for ground capture, for topographic survey, and then also overhead for any overhead utilities and assets.
Greg Corke: And everything is being geolocated with the survey grade GPS receiver?
Dominique Meyer: What we have developed is a way to combine GPS accuracy, the GPS localisation, with incredibly robust geometric priors of our computer vision algorithm to get those accuracies that have previously just not been able to be attained at large scales.
Greg Corke: So, you would literally just turn up on site, and you could start scanning and it would be geo located accurately, without having to set up survey control points? It does all that through the GPS, where you are, the orientation, everything, and then the data is created and referenced relative to that?
Dominique Meyer: Exactly. We do have the opportunity for very specific use cases - and I want to list two here - to be able to import external control.
A lot of assets, specifically in city topographic surveys, require alignments to city monuments. Those are local coordinate systems set up by the city or utility to make sure that the deliverables are all consistently in that same coordinate system at a certain epoch. That is one use case where we do allow our customers to bring in that survey control, those survey control points, to be able to run a ground control alignment. Another one, of course, is in areas where you have limited GPS quality.
35 www.AECmag.com March / April 2024
Greg Corke: I’ve noticed that there’s an AI processor on the device and then there’s more AI processing in the cloud. What does the on-device AI processor do?
Dominique Meyer: We’re doing hundreds of megapixels per second. And what that means is that you’re capturing a lot of data. And as you want to bring that data back to the cloud, you’ve got to, first of all, compress it and you’ve got to also decide how to best capture it. And that’s where our AI processor comes in. It identifies, based on your motion, based on the scene you’re capturing, how and when to best capture the data to ensure the best possible outcome in terms of the reconstruction, and then of course, the data efficiency.
As you know, being familiar with a lot of LiDAR systems, you often spend hours and hours capturing data, leaving you with hundreds and hundreds of gigabytes of point clouds. Manipulating and operating those in the cloud or locally is difficult. At Looq, what we’ve built is actually a proprietary way of putting the data together into a format that is incredibly data efficient. And that means that pushing the data into the cloud becomes easy, and handling those files for processing becomes a lot easier. That’s a very deliberate way of thinking about how we make that workflow and that upload process even easier, and all of it is powered by our on-camera AI processor.
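The on-camera selection logic Meyer describes is proprietary. As a rough illustration of the underlying idea - store a frame only when the view has changed enough - here is a hypothetical motion-gated keyframe filter (all names and thresholds are illustrative):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # position, metres
    y: float
    z: float
    yaw: float    # heading, radians

def select_keyframes(poses, min_move=0.5, min_turn=math.radians(15)):
    """Keep a frame only when the camera has moved or turned enough since
    the last kept frame - a simple stand-in for smarter on-device logic."""
    if not poses:
        return []
    kept = [0]
    for i, p in enumerate(poses[1:], start=1):
        last = poses[kept[-1]]
        dist = math.dist((p.x, p.y, p.z), (last.x, last.y, last.z))
        turn = abs(p.yaw - last.yaw)
        if dist >= min_move or turn >= min_turn:
            kept.append(i)
    return kept  # indices of frames worth storing/uploading
```

A walking-pace capture produces many near-identical frames; gating on motion like this is one common way a device can cut raw data volume before compression even starts.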
Greg Corke: So, if you had multiple photos, which were all the same, or of a similar orientation, it wouldn’t upload all of those photos, it would only upload the ones that are relevant for the reconstruction?
Dominique Meyer: Exactly. That’s the basis for data efficiency.
Greg Corke: How big are the datasets that are uploaded to the cloud?
Dominique Meyer: We’re talking gigabytes, where you might have seen tens of gigabytes, hundreds of gigabytes in the LiDAR world.
Greg Corke: While still on site, how do you ensure you have good coverage, that you have captured all the data you need, so you don’t have to return later and fill in the gaps?
Dominique Meyer: That’s part of our ‘claim to fame’: getting someone new to using Looq up and running can be done in 5-10 minutes. So, being able to operate a simple capture system, powered by complex technology, means that now we can reduce that burden of integration and adoption and training for a lot of these engineering and survey companies.
The feedback we do provide on the system allows for a very quick way of saying ‘hey, the capture data path is good, the data quality we expect is good, the cameras were all working, and the data was successfully stored’. Those feedback mechanisms ensure that all our customers happily capture at scale without having to worry that ‘hey, I didn’t see that, or I didn’t capture that, or we are missing data because there was a system crash’. All these things are taken care of by the very, very robust underlying architecture of our technology.
You have a live preview of the images being captured [displayed on the included iPhone], so you can see what the camera sees, pretty much. You can also see where you’ve covered on a GPS map.
Greg Corke: Once the data is in the Looq cloud and processed it can then be accessed and viewed through a standard web browser. As we’re talking about huge gigabyte datasets, is data streamed on demand and cached locally?
Dominique Meyer: It is all streaming-based. There is dynamic loading and caching in a web framework that allows fast and easy navigation. It also means that anyone with a computer and a web browser can access that data. And that’s an important part - not needing 128 GB of RAM on your machine.
We want to make it accessible. And that’s why we prioritise that streaming architecture to make sure that anyone and everyone in an organisation can succeed with that tool.
Greg Corke: Are you working to integrate with other platforms, to offer seamless data transfer with something like Autodesk Construction Cloud or any of the Bentley platforms?
Dominique Meyer: Absolutely. We’re working on a number of integrations currently. We’re excited to be building those out and sharing when the time is right.
Greg Corke: Finally, Looq currently has a big focus on the electric utility sector. You’ve talked publicly about oil and gas, industrial automation, civil engineering, and real estate going forward. Is that a case of training the AI on the typical objects within those industries, in order to benefit most from the technology?
Dominique Meyer: In part, yes. There’s the AI component, but there’s also the workflow component, and there’s also the business strategy behind it. Today, we’re very excited by helping engineering companies in the electric utility space succeed at transitioning the grid into a renewable grid, into a sustainable grid. And that’s really where our initial focus is.
A lot of the downstream applications in the CAD, engineering and operations aspects can benefit from Looq, and that’s where, after expanding past the electric utilities, we see great opportunities in different areas.
Oil and gas is a sector where traditionally there has been incredible demand for very high accuracy systems on the laser side, so fitting in there I think is super exciting. But then also going more into civil engineering and real estate, which have a lot of structural components that we will be able to apply our AI capabilities to, to enhance engineering workflows.
A lot of the work in, for example, concrete estimation and structural integrity, right, some of it can be driven through image-based analytics, and that’s where Looq really can very likely provide a lot of value in the future. And we’re excited about going in those directions as we grow the company and support the markets.
■ www.looq.ai
Treble: sound advice
Icelandic developer Treble has created a suite of tools designed to analyse and optimise designs for acoustic performance, based on the founder’s own research into wave-based sound simulation analysis
While BIM adoption has primarily been driven by the need for coordinated documentation output, the AEC industry has always hoped to derive other model-centric benefits from this approach. In particular, the analysis, simulation and optimisation of designs has been seen as a significant potential source of competitive advantage.
With this in mind, more advanced 3D practitioners with considerable in-house programming resources have created their own suites of analysis applications. These firms include Foster + Partners, for example, as well as ARD, which presented on this subject at AEC Magazine’s NXT BLD conference last year (www.nxtbld.com). Those companies that have taken this route insist that the insights they derive from such tools are especially useful in the early stages of a design.
For other firms, many new conceptual tools now come with built-in analysis capabilities that focus on environmental conditions such as daylight, solar energy and wind. These tools, which include Autodesk Forma, provide quick feedback. The insights they provide may not be definitive, or offer specialist indemnity, but they are giving more firms access to early-stage, performance-based design.
Acoustic analysis, however, is an area where available tools are either highly specialist, developed in-house by very large firms, or both.
Treble aims to change that tune. Last year, the Reykjavik-based company came to market with a more accessible acoustic simulation suite that can generate, from BIM data, interactive, real-time, immersive audio-visual experiences. The software is browser-based and uses a proprietary wave-based simulation technology, which offers new levels of accuracy, according to company executives. Geometry can be imported from SketchUp Pro, Rhino and Revit, via plug-ins. The application identifies problems like reverberation, and assesses speech intelligibility, privacy, privacy distance and distraction distance.
Recently, AEC Magazine got the opportunity to chat with Treble co-founder Dr Finnur Pind, and hear more about how this self-confessed ‘sound nerd’ turned his attention to the AEC industry.
Sound city
Reykjavik is an exceptionally musical city. It boasts lots of local bands and plenty of places to hear live music, not least at its annual Airwaves musical festival. Every November, live performances take over seemingly every square inch of the city, even its shop windows.
Dr Pind started his acoustic journey within the city’s lively music scene and studied engineering because he wanted to know how guitar amplifiers and other audio equipment work. This drew him further into the field of acoustics engineering, culminating in a doctorate in sound simulation technology. As part of his PhD research, he developed new algorithms capable of simulating sound faster and more precisely than previous approaches. And that, in turn, led to the founding of Treble.
“Our research was always kind of very practically oriented. We wanted to solve real-world problems, especially for the building industry, where people are always struggling with dealing with acoustics,” says Pind.
“As my PhD was concluding, me and my friend Jasper [Pedersen], who’s also an acoustic simulation guy, thought we could do something with this. We decided to apply for a grant, which we got, so we thought we had better start a company!”
Treble is currently fundraising for a Series A investment round, having already gone through a pre-seed and seed round and raised $10 million so far ($3 million in grants and $7 million in equity). Investors include the European Innovation Council and NOVA, the venture arm of building materials giant Saint-Gobain.
As Dr Pind explains, the Treble application takes a 3D model of an environment, into which the user can place sound sources, such as a human talking, or a loudspeaker. The tool takes into consideration the materials used in the model and then simulates how the source will sound in real life. It might be used for designing and optimising that space, or designing and optimising a sound system, from the small (a speaker phone, say) to the vast (a concert hall system). In fact, he adds, many audio tech companies use Treble to train or improve adaptive audio technologies that adjust to the spaces in which they are placed.
There are two core markets for Treble, as Pind sees it: users who want acoustic analysis of a building or a particular sound system, and tech customers who require simulation technology to synthesise a million different environments, in order to create data to improve their audio algorithms. “While these two applications seem quite disconnected, from our technology perspective, it’s all very related,” he says.
At the start of its development, Treble concentrated on serving the sound specialist community, but as the company grows, it is developing tools for architects working on spaces that might include a typical meeting room, an office layout, a call centre, or a classroom.
A key aim for the application development was to make Treble simple to use and able to work seamlessly with other design tools. Going from BIM to Treble, performing an analysis and then getting the feedback can be done in almost one click. Treble extracts the 3D model and its materials, and then simulates how this will sound. Results are delivered as either coefficients, graphs or colour maps. Users can venture into the real-life space and listen to the predicted sound on headphones. They can flip between different design variations and materials and hear how this impacts sound quality.
“We are most proud of this simulation engine that we built, which is very precise. We’ve benchmarked it against many, many real-world scenarios and it’s so precise that it’s not just useful for building design, but also for product development and data generation, where you really need real-world fidelity. Our technology doesn’t use the typical ray casting approach. Instead, we solve the maths equations that describe sound propagation, capturing the wave nature of sound.”
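Treble's 3D solver is proprietary, but the distinction Pind draws can be illustrated with the textbook one-dimensional case: instead of tracing rays, a wave-based method discretises the wave equation itself, so reflection and interference emerge naturally. A minimal finite-difference sketch:

```python
import numpy as np

def simulate_1d_wave(n=200, steps=400, c=343.0, dx=0.05):
    """Explicit finite-difference solve of the 1D wave equation
    u_tt = c^2 u_xx with fixed (fully reflecting) ends. Unlike ray
    casting, this captures the wave nature of sound directly."""
    dt = 0.9 * dx / c                  # satisfy the CFL stability condition
    r2 = (c * dt / dx) ** 2
    u_prev = np.zeros(n)
    u = np.zeros(n)
    u[n // 2] = 1.0                    # initial pressure pulse mid-domain
    u_prev[:] = u                      # start at rest (zero velocity)
    for _ in range(steps):
        u_next = np.zeros(n)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next          # endpoints stay 0: reflecting walls
    return u
```

Real room-acoustics solvers work in 3D, with frequency-dependent absorbing boundaries rather than perfect reflectors, which is exactly where the computational cost - and Treble's claimed algorithmic advances - come in.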
Technology choices
So why did Pind choose to support SketchUp, Rhino and Revit? He explained, “SketchUp is used a lot by acoustic engineers, acoustic specialists. That seems to be a common tool for them to prepare models for simulation. And then Rhino too, is so powerful, and used in early designs, which is a good place to think about acoustic performance. For generative design, we have an SDK/API, perfect for Rhino/Grasshopper to work with for analytical loops. We have done some internal tests with that kind of process, but the API is still relatively new. Some of our customers are testing this out. Revit for BIM modelling, of course, because it’s just so widely used.”
From the outset, he decided Treble should be browser-based and deliver its experience within the browser. This way, it’s possible to create virtual listening experiences, and then send them over the web. The person on the receiving end just opens it in their browser and can listen to the simulated acoustics from a set position within the model. This capability is powered by gaming engine technology.
Because these experiences can be shared via the web and delivered to headphones, you can have as many people as you want listening. Alternatively, the experience could be delivered to a speaker. While for now, these are screen-based experiences, virtual reality options are on the roadmap, leveraging Unity and Unreal.
The company has already been working with companies that are fiercely protective when it comes to data security, such as automotive manufacturers, and Treble complies with the highest cloud security certifications. It’s considered safe enough to persuade some of the biggest tech firms to upload some of their most sensitive designs - but if a big client wanted to deploy locally on its own server farm, for example, Pind says that could be considered.
Until now, Treble has supported the analysis of just one space at a time, with users having to pick representative spaces out of the building they wish to analyse. But that’s about to change with the imminent launch of what Pind calls ‘multi-space import’, where they can input a whole building and get a quick analysis of basically every space inside it. If problematic areas are found, users might zoom in and iterate the design to solve the acoustic issue.
The software doesn’t yet offer advice on rectifying a design, but it clearly indicates where problems lie. It’s then the job of the designer to adjust the materials, the space, or the furnishings to then explore the impact on sound of these changes.
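The classic back-of-envelope version of this materials-and-reverberation relationship is Sabine's formula, RT60 = 0.161 V / A, where V is the room volume and A the total absorption. Treble's wave solver goes far beyond this, but the formula shows why swapping a surface material changes what you hear (the material coefficients below are typical published values, used here for illustration):

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / A, where A is total
    absorption in metric sabins (surface area x absorption coefficient).

    surfaces: iterable of (area_m2, absorption_coefficient) pairs.
    """
    a_total = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / a_total

# A 6 x 5 x 3 m meeting room: hard ceiling and walls, carpeted floor
room = sabine_rt60(
    volume_m3=90,
    surfaces=[
        (30, 0.02),   # ceiling, painted concrete
        (30, 0.30),   # floor, carpet
        (66, 0.05),   # walls, plasterboard
    ],
)
# room ≈ 1.12 seconds - replacing a wall with acoustic panels
# (coefficient ~0.85) would drop this dramatically
```

Adjusting a single (area, coefficient) pair and re-running is the hand-calculation analogue of the material-swapping iteration described above.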
The core analysis isn’t AI-driven, either - but the company is looking to utilise some kind of machine learning to improve processes and efficiency.
Outside of the AEC world, Treble is already active in the world of ‘synthetic data generation’, where simulation is used to generate data, which is then used to train machine learning models. For Treble, this means providing analysed spaces for all kinds of modern audio equipment manufacturers. The resulting data sets feed machine learning algorithms that look to detect speech, remove extraneous noise, or improve acoustics in that space. For manufacturers, getting their hands on these training datasets can be a headache.
Treble, by contrast, can easily generate 10,000 meeting rooms, containing all sorts of surface materials, furnishing and occupants. According to Pind, this is becoming one of Treble’s biggest use cases.
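As a toy illustration of this kind of synthetic data generation - Treble's real parameter space of materials, furnishing and occupants is far richer - one might sample randomised room descriptions like so (all names and ranges here are hypothetical):

```python
import random

def random_meeting_room(rng: random.Random) -> dict:
    """One randomised room description - a stand-in for the far richer
    scenes a production acoustic-data generator would enumerate."""
    return {
        "dims_m": (
            round(rng.uniform(3.0, 10.0), 2),   # length
            round(rng.uniform(3.0, 8.0), 2),    # width
            round(rng.uniform(2.4, 4.0), 2),    # height
        ),
        "floor": rng.choice(["carpet", "concrete", "wood"]),
        "walls": rng.choice(["plasterboard", "glass", "acoustic_panel"]),
        "occupants": rng.randint(0, 12),
    }

# Seeded generators keep the synthetic dataset reproducible
rooms = [random_meeting_room(random.Random(seed)) for seed in range(10_000)]
```

Each generated room would then be pushed through the acoustic solver, and the resulting impulse responses or audio become training data for speech-detection and noise-removal models.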
Hearing is believing
Treble is typical of the way that many new software companies now come to market. Their solutions are written as services in the cloud, with plug-in and API access to relevant third-party tools.
But where it differs from other analysis tools that focus on early-stage design is that while many of these use nice colour maps to deliver indicative results, these may still mean little to a non-specialist. Treble’s answer is to deliver results as listening experiences, giving designers and clients the ability to hear the sounds associated with a proposed design. Hearing is believing, after all, and Treble looks like a useful tool to have in the BIM analysis armoury.
It’s available for a 30-day free trial that offers access to all features. From there, one seat for small and occasional users is priced at €149/month, with a token system deployed to pay for processing. A one-seat unlimited use bundle is €299/month. For five or more seats, you’ll need an Enterprise bundle, which requires a call with the company to discuss needs. Discounts are available for yearly payment and special pricing is available for academics. Access to the Treble SDK is currently subject to a waiting list.
■ www.treble.tech
Discover the latest technology to help you on the journey towards digitalisation Network with your peers to share ideas and experiences Learn from expert speakers and improve the way you work
REGISTER FOR FREE www.digitalconstructionweek.com Organised by @DigiConWeek Digital Construction Week @DigiConWeek INNOVATION IN THE BUILT ENVIRONMENT
digitally enabled
to
HEADLINE PARTNERS GOLD SPONSORS
Join innovators from across AECO to debate, discuss and share ideas to help build a more
industry. Register
attend DCW
TestFit runs free
Texas-based feasibility developer TestFit has been impressing architects and developers since 2016 with its real-time design automation tool. Now, in a shift of focus, it’s returning to design with a free massing tool, writes Martyn Day
TestFit’s first magic trick was laying out apartment units and complex parking lots in a matter of seconds. Its primary developer, architect Clifton Harness, was fed up with late nights and overtime spent manually working on such tasks. With the help of college friend Ryan Griege, he founded a company – TestFit – with the aim of putting automation tools into the hands of architects that were just as sick as he was of some of the humdrum work associated with residential projects.
As it turned out, architects were slow to adopt the tools, mostly due to reasons of cost. But property developers took to TestFit like ducks to a water feature. It is this customer base that has guided development of the tools. (For readers unfamiliar with the scale and speed of TestFit, it’s worth following Harness on LinkedIn, where he regularly posts videos of his application achieving seemingly miraculous things.)
At its heart, TestFit is a massive parametric solving engine. It’s designed to solve for hundreds of preferred site, design, and cost conditions, resulting in a single optimised design option.
Because what it does seems like a magic trick, TestFit is commonly assumed to be based on AI. In fact, it’s mostly not AI at all. But even without that ‘secret sauce’, customers generally achieve two to three times as many design iterations per project, the company boasts. And site planning times are around four to ten times faster.
To a European mindset, the kind of multi-family housing on a massive scale that typifies the projects underway at TestFit’s US-based customers may seem alien, but the company still aspires to conquer new markets, and it plans to do so with a broader portfolio of tools.
Up until recently, TestFit’s core Site Solver tool was the only product it sold. It’s a desktop-based application that really makes use of the power of local hardware. Everything is geared around speed. It may not work in actual real time, but it works in as close to real time as is computationally possible.
Many of its competitors are web-based and rely on the ‘fire and forget’ principle, where results are returned after some waiting time. TestFit, by contrast, is about getting one solution, as quickly as possible, that accurately reflects all the input conditions. Historically, that has meant that TestFit’s graphics rendering may have been a little simple or flat, but recently, quality and detail have increased considerably, and without impacting performance.
Spacemaker, which operated in the same feasibility space, was acquired by Autodesk in 2020 and re-emerged as the cloud-based Forma in 2023. While TestFit initially got the cold shoulder from Autodesk, it was eventually invited into the tent and began offering a TestFit plug-in at Forma’s launch. This win for TestFit also introduced its team to the limits of cloud-based applications.
Urban planner
Since the introduction of Forma, the market for conceptual and feasibility tools has become a hotbed of development. Many of those tools are free, or at least bundled into wider software suites, making them practically free to subscribers.
To stake its claim to this space and give the market a taste of what its software can achieve, TestFit has decided to create a ‘free’ version called Urban Planner. This is a desktop-based application that can be downloaded from TestFit’s website.
Urban Planner first brings in site maps or satellite views to start work. Users then create site boundaries and quickly model mixed-use developments with 2D/3D regions in real time. TestFit has included its impressive road creation and editing tool, which is dynamic and generates intersections automatically. Zoning regulations can be defined and applied, and angle setbacks applied to forms. TestFit will then tell you if your massing model meets the code criteria specified.
On the evaluation front, Urban Planner includes quantity take-off, deal editing and deal tabulation, which can be saved. There are direct integrations with Revit and Enscape, and files can be exported to DXF (for CAD software), SKP (for SketchUp), glTF (for saving 3D model views), CSV (for spreadsheets) and PDF. It supports saving to cloud-based storage.
What’s missing? The famous parking layout tool is there, but only works in low-detail mode for custom parking stalls, vertical circulation and building core. In terms of building typologies, it will display multi-family, high/low density, core-based buildings, gardens and industrial, but again, at low detail. It doesn’t support unit mixes or custom unit types. For all those features, the full Site Solver is required.
So why would the company develop and give away Urban Planner for free? In response to that question, TestFit CEO Clifton Harness shoots from the hip. Revit’s primary reason for existence, he reckons, is to address documentation. But as far as conceptual design, he doesn’t believe it has been “particularly impactful”.
“Revit never asks, ‘What’s the minimum amount of information that you need to make an apartment building?’,” he points out. “Revit can model everything and do everything, but you don’t really need that when you have ‘commodity architecture’. We are a feasibility company, so we ended up creating a very parametric environment. We wanted to allow a broader range of customers to benefit from TestFit, to input geometry, roads, site splits, et cetera. If you’re an architect, you want that information to be there, but you don’t want to model it. At this point, we had most of what’s in SketchUp as a massing tool already built, plus all these other really powerful parametric real-time tools.”
Harness says that Snaptrude, Skema and Arcol – along with a long list of other new companies – are currently developing features that he and his business partner built in their mid-twenties, “because we had to build them anyways, to get to where we wanted”. But since these companies are now headed towards TestFit’s space, “we wanted to put out a TestFit product for free that we’re not ashamed of – a free tool, but with better features than all the others,” he explains.
“This is just the first release. We have a whole year of updates planned for it to make it even better.”
Due to its history of focusing on the US market, the TestFit team has been busy adding some more euro-centric basics, like support for the Euro, metric measurements, and so on. As TestFit wins more customers in Europe, its product should be able to accommodate more European use cases and reflect the variables required in European massing and site planning.
Conclusion
The reality is that creating a BIM tool is a marathon. In the early stages of development, massing tools can be useful to show off their relevance and potential to would-be customers. I get the distinct impression that Harness has been taken by surprise by the extent to which a whole group of BIM start-ups are seemingly focused on the industry’s early massing needs.
For TestFit, then, Urban Planner is a flag in the sand, designed to chase off these start-ups – but I really don’t see any of them even contemplating providing the kind of detailed feasibility analysis for which TestFit is known. Few would-be competitors are on the desktop, either. As for their cloud applications demonstrating comparable dynamic, real-time performance - that’s for the birds.
Meanwhile, the team at TestFit is now working on generative design. It’s purposely not referring to this as ‘AI’. Instead, it’s a goals-based solver. The user sets the goals and, in real time, TestFit will hit “any target you want”, according to Harness.
In other words, it will provide a solution and populate thousands of other solutions, based on key performance indicators (KPIs) specified by the user. Once they have that single solve, the user can manipulate site constraints and TestFit will resolve continuously based on the changes. It will be fascinating to see how fast TestFit’s real-time AI will be.
■ www.testfit.io
Clifton Harness will be talking about the benefits of automation at AEC Magazine’s NXT BLD and NXT DEV conferences on 25 and 26 June 2024 at London’s Queen Elizabeth II Centre.
■ www.nxtbld.com
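TestFit's solver is proprietary and far faster, but the "set targets, let the solver adjust parameters" loop Harness describes for goals-based solving can be sketched with a toy finite-difference descent (all names, KPIs and rates here are illustrative, not TestFit's method):

```python
def solve_to_targets(params, kpis, targets, lr=0.0005, iters=300):
    """Toy goals-based solve: nudge numeric design parameters downhill on
    the squared error between computed KPIs and their target values.

    params: dict of parameter name -> starting value
    kpis:   function mapping a params dict to a dict of KPI values
    targets: dict of KPI name -> desired value
    """
    def loss(p):
        vals = kpis(p)
        return sum((vals[k] - t) ** 2 for k, t in targets.items())

    params = dict(params)
    eps = 1e-4
    for _ in range(iters):
        for name in params:
            # Finite-difference estimate of the loss gradient
            bumped = dict(params, **{name: params[name] + eps})
            grad = (loss(bumped) - loss(params)) / eps
            params[name] -= lr * grad
    return params
```

For example, with a single KPI `units = floors * 8` and a target of 80 units, the loop walks a starting value of 5 floors towards 10. A production solver would handle discrete choices, hard constraints and many competing KPIs at interactive rates, which is exactly the claim being made above.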
Autodesk Informed Design: a bridge between BIM and MCAD
Connecting fabrication software to BIM software that is designed primarily to create documentation is not as easy as you might think. Autodesk, with software in both camps, has made many attempts before.
Under a new initiative the company is now connecting Revit and Inventor via the cloud. Martyn Day reports
If you go by the sheer number of offsite housing fabrication bankruptcies, closures and turmoil, the purveyors of MMC (Modern Methods of Construction) have had a pretty shocking past eight years. Given that the UK has a chronic shortage of housing and construction labour, and a record number of building contractors going out of business, it’s astonishing that the one process designed to overcome these constraints - building offsite - has so utterly imploded, burning hundreds of millions of dollars in investment along the way. The amount of capital required always seems to be underestimated and the factory space acquired before the workflows or technologies have been fully worked out.
Reflecting the fractured and fragmented problems that face the industry, BIM and mechanical CAD (MCAD) grew up in different parts of the city. BIM tools like Revit build models to produce drawings, which then get handed on to contractors to be built manually. MCAD creates solid models of parts contained in assemblies, from which G-code is generated to drive various fabrication machines, cutting metal or wood, or for output to 3D printers.
Both types of tools were developed independently of each other and paid little heed to the file formats or user-bases of each discipline. That is until manufacturers of fabricated components in buildings, such as MEP, wanted to be driven by 3D models, and offsite housing fabricators wanted to build from data created in BIM tools.
While BIM software is 3D, it does not have the same level of detail as fabrication-level assembly data in MCAD and herein lies the chasm to cross.
To get data out of Revit and into Inventor, Autodesk’s leading MCAD tool, Autodesk initially added SAT file capabilities to Revit. (N.B. SAT is a 3D model format used by ACIS-based MCAD modelling software). Then RFA (Revit family) export was added to Inventor.
However, the workflow to create and use RFAs was a six- or seven-stage process: open > simplify geometry > use specific tools to create geometry with special connectors > position the UCS > add the metadata and Omniclass tags > export and import into Revit.
These file-based transactions were an inhibitor. Then Autodesk invested in doing the reverse and in Revit 2021 data could be exported as a reference to Inventor using the AnyCAD technology.
In the 2022 release, Inventor got RVT export and, in 2023, the concept of Data Exchanges meant portions of the files could be shared, but not the whole RVT project. Exhausted yet?
One can’t say Autodesk didn’t put the effort into trying to tie Revit and Inventor together, and it seemed to be gaining traction in the small but growing industrialised construction sector. However, since the 2023 releases there seems to have been a pause in this development and a rethink of the industrialised construction strategy.
A new approach
In February 2024, Autodesk announced ‘Informed Design’, a new take on connecting Revit with Inventor. This is more a combination of workflow strategy and technology. At its heart, Revit and Inventor connect via the cloud but, prior to starting a project, work is required to create pre-defined building products in each application, giving the impression of a bi-directional digital thread.
The upfront work requires the users to componentise the manufacturable elements of a building and create those in Inventor, producing a library or family of customisable parts or assemblies using AEC-specific templates.
These are then loaded into Autodesk Construction Cloud (ACC) and made available as Revit families. These Inventor AEC families support parametric editing and materials, and generate corresponding BoMs (bills of materials).
With this active substitution connection, Revit designers build not only the architectural model but simultaneously create a model of fabricable 1:1 components in Inventor.
While the chasm remains between Revit and Inventor, Autodesk has introduced a kind of fabrication modelling by proxy. As the predefined Inventor parts and assemblies include the constraints of what can be manufactured, the architect is forced to design within the boundaries of fabrication reality. This is an intriguing solution to a historical problem.
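The idea of designing within fabrication constraints can be sketched in a few lines of code. This is purely illustrative and not Autodesk’s API: the `PanelProduct` class, its dimensional limits and the 50 mm grid are invented assumptions for the example.

```python
# Illustrative sketch (not Autodesk's API): a pre-defined building product
# whose parameters are clamped to what the factory can actually make,
# so the designer can only request buildable sizes.
from dataclasses import dataclass


@dataclass(frozen=True)
class PanelProduct:
    """A hypothetical manufacturable wall panel definition."""
    min_width_mm: int = 600
    max_width_mm: int = 3600
    step_mm: int = 50  # widths must fall on a 50 mm manufacturing grid

    def resolve_width(self, requested_mm: float) -> int:
        """Snap a designer's requested width to the nearest buildable size."""
        clamped = max(self.min_width_mm, min(self.max_width_mm, requested_mm))
        return round(clamped / self.step_mm) * self.step_mm


panel = PanelProduct()
print(panel.resolve_width(3875))  # beyond the maximum -> 3600
print(panel.resolve_width(1234))  # snapped to the 50 mm grid -> 1250
```

In this toy version, an out-of-range request is silently corrected; a real product definition would more likely flag the clash back to the designer.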
The downside of this approach is that there is significant work to be done upfront in the fabrication definition of every fabricable component or system. Pre-work is required in Inventor, which is arse about face to the way the process works today.
For very organised multi-discipline firms I can see this burden paying dividends, especially as the library of components defined in Inventor grows. However, in the typically fragmented AEC space, the level of pre-coordination and investment would certainly be a challenge for many.
44 www.AECmag.com March / April 2024
Conclusion
Autodesk and the industry are on a long and painful industrialised construction journey. There is no doubt that there is convergence and there will be increasing pressure to cohabit the same design space and fabricate with the most modern methods.
As it stands, the software applications that are used to do this were never intended to work together, and so workarounds have to be sought. Lobbing the data over the wall between applications was the first attempt. But applying brute force to data that doesn’t sit comfortably outside its original environment, and that comes with overheads, is essentially a form of design puppetry - operating two systems through proxy substitution. Informed Design is a more intellectual approach.
One has to wonder if just designing in Inventor is the way to go. Dale Sinclair, head of digital innovation at WSP has been an advocate of the Inventor route since his days at AECOM, where Inventor allowed him to speak the language of the offsite fabricators and model at 1:1.
Sinclair will be speaking on this strategy at AEC Magazine’s NXT BLD conference on 25 June 2024 at London’s Queen Elizabeth II Centre (www.nxtbld.com).
There are other options rooted in MCAD, such as Dassault Systèmes Catia, which has been used on high-end projects by ZHA, SHoP and Gehry. But traditional BIM and MCAD tools, even from the same vendor, seem to mix about as well as oil and water.
Autodesk’s Informed Design in some ways respects that, but I don’t think this will be the industry’s final attempt at integrating the two worlds. While it’s obvious that different disciplines need different tools, if there is a fresh reimagining of CAD, does there still need to be a different format for each? For now, that’s what we are dealing with. The future may see a more unified database with less need for parlour tricks.
The current sorry state of offsite construction is unfortunately not going to help drive this. At some point, someone with deep pockets is going to make this work. Labour shortages will not improve, and hundreds of thousands of quality houses need to be quickly erected to increasingly demanding sustainability constraints. Come to NXT BLD (www.nxtbld.com) to hear from Bruce Bell of Facit Homes (www.facit-homes.com), who’s got a plan to upscale residential on-site fabrication. You can also read more about Bell’s work on page 46.
Feature
Scaling-up digital construction
As an industry, offsite modular is pretty much collapsing in the UK. There’s a serious scarcity of skilled labour, and the traditional construction industry racked up more insolvencies last year than any other sector. So what hope does the UK have of digitising fabrication and building the homes we need? Martyn Day speaks with Bruce Bell of Facit Homes
The first time I wrote about Facit Homes was back in 2013, when the company had just been highlighted on Channel 4’s Grand Designs as a design-build architectural firm. What set it apart from other design-build firms I knew about was that the house that featured in the TV programme was designed in BIM using Revit, but was digitally fabricated on site, using a CNC router housed in a shipping container. This CNC device cut out box sections from 2,440mm x 1,220mm timber sheets filled with masses of insulation.
More than ten years on, Facit Homes co-founder Bruce Bell is still designing and fabricating digitally. He has a profitable practice and has so far built housing collectively worth over £30 million, with build times typically twice as fast as traditional methods.
At the time of our first conversation, Bell was not convinced that factory-produced houses were viable, for reasons of standardisation, repetition, and boredom. The economics didn’t stack up either, he argued, with no one size fitting all when it comes to buildings, plus the costs of transportation, which might be up to 20% of the overall price.
“There is a direct correlation between factory fabrication and repetition, because you can’t have factories sitting idle due to the overheads, and as soon as you have a factory, you need turnover. In order to have turnover, you need standardisation, and you end up producing the same thing over and over again,” he told me.
That said, neither of us could have known at that time that factory fabrication would become such a disaster in both the UK and USA — even when producing the same thing repeatedly.
A family of parts
Today, Facit Homes is still deeply reliant on Revit as its BIM weapon of choice, although it has expanded its repertoire to include generative tools such as Rhino Grasshopper. Bell has created a distinct family of parts, and these are key to the fabrication of a chassis design for all homes created by Facit. With his special CNC code linked to Revit, Bell produces G-code at his London office, which is then sent to an on-site CNC router. Box sections are made with machined slots, and these vary from day to day depending on the weather, to account for the moisture content of the timber.
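As a rough illustration of what a model-to-machine link involves, the sketch below emits toy G-code for cutting a panel outline from a standard 2,440mm x 1,220mm sheet. Facit’s actual Revit-linked CNC code is proprietary; the function name, feed rate and toolpath here are invented, and a real pipeline would also handle slots, tooling offsets and the moisture compensation mentioned above.

```python
# Illustrative only: a toy G-code emitter for cutting a rectangular panel
# outline from a timber sheet. All values here are invented for the example.
def panel_outline_gcode(width_mm: float, height_mm: float,
                        feed_mm_min: int = 3000) -> list[str]:
    """Emit G-code moves tracing the panel's rectangular outline."""
    return [
        "G21",                                        # units: millimetres
        "G90",                                        # absolute positioning
        "G0 X0 Y0",                                   # rapid move to origin
        f"G1 X{width_mm:.1f} Y0 F{feed_mm_min}",      # cut bottom edge
        f"G1 X{width_mm:.1f} Y{height_mm:.1f}",       # cut right edge
        f"G1 X0 Y{height_mm:.1f}",                    # cut top edge
        "G1 X0 Y0",                                   # cut back to origin
        "M2",                                         # end of program
    ]


# A full 2,440mm x 1,220mm sheet, as used by Facit
for line in panel_outline_gcode(2440, 1220):
    print(line)
```

The point of the sketch is the shape of the workflow: geometry leaves the design model as nothing more exotic than a list of coordinated cutter moves.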
Since I last visited Facit’s offices, the company has built its own specification database. This has been fleshed out and expanded with every project the company has undertaken and includes components, suppliers, models – everything you need when specifying a building.
The net result of having this linked to Revit is that, by the time any BIM model is complete and fits the client’s specification, Facit Homes knows to within 1% how much the building will cost to fabricate and to complete fit-and-finish. It also has a full bill of materials with quantities and a cost breakdown for the project. While the company relies on a chassis-based design, variations in designs show that this is not a limiting factor.
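A specification database linked to a model enables this kind of cost certainty because pricing becomes a mechanical roll-up rather than an estimate. The sketch below shows the principle only: the component names, suppliers and prices are invented, and Facit’s actual database is far richer.

```python
# Hypothetical sketch of a bill-of-materials cost roll-up from a
# specification database. All names and prices are invented.
spec_db = {
    "floor_cassette": {"unit_cost": 420.0, "supplier": "TimberCo"},
    "wall_panel":     {"unit_cost": 310.0, "supplier": "TimberCo"},
    "window_unit":    {"unit_cost": 650.0, "supplier": "GlazeLtd"},
}

# Quantities extracted from the completed BIM model
model_quantities = {"floor_cassette": 24, "wall_panel": 58, "window_unit": 12}


def project_cost(quantities: dict[str, int]) -> float:
    """Sum unit cost x quantity for every component in the model."""
    return sum(spec_db[name]["unit_cost"] * qty
               for name, qty in quantities.items())


total = project_cost(model_quantities)
print(f"Estimated build cost: £{total:,.2f}")  # £35,860.00
```

Because every quantity comes straight from the model and every price from the database, the total moves in lockstep with design changes, which is what makes a within-1% figure plausible.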
From house to houses
Bell has spoken at AEC Magazine’s NXT BLD event several times on the topic of digital construction, and he has always seemed like one hand clapping - because, unlike everyone else, he’s not been trying to build buildings in factories.
The time has now come for Facit to scale up its operation to offer services to developers – not just in planning sites, but also designing all housing, as well as construction projects.
Bell explains that, in his conversations with developers in the UK, today’s financially precarious construction firms can cost them millions if they collapse mid-project. They suffer from labour shortages, which lead to delays and missed project timelines. So, for Facit Homes to scale up, it’s not so much a case of a bandwidth problem in design, because Revit combined with generative tools provides significant productivity benefits. Instead, the bottleneck for Facit lies in the on-site fabrication method, because one shipping container system may be ample for building a single residence, but when tens or scores of houses are planned, a different approach is necessary.
After a few years of mulling over the problem, Bell has engaged a fantastic robotics fabrication firm in the UK called Tharsus to devise a new, shippable digital fabrication unit. For those not in the know, Tharsus has designed and built pick-and-place robots for Ocado’s grocery delivery warehouses.
Bell originally envisaged four shipping containers’ worth of machinery in each unit. With the help of the team at Tharsus, however, this looks to have shrunk down during the design process to two containers.
As the design is nearly finalised, all that remains now is to put the first unit to work on its first contract. Without giving too much away, the first thing the new machine will fabricate is the temporary wooden structure that will keep the site’s assembly area dry. Set-up time is estimated at one week.
Its cutter can produce a much higher throughput of CNC cut sheets than the previous tool, and it also has a built-in printer, which prints the actual drawing on the sheet together with a unique QR code. Once complete, the sheets are autostacked for assembly.
The designated assembly area, meanwhile, uses video recognition and mixed reality technologies to assist the team in measuring and assembling sections before they are installed on the chassis.
Bell reckons that each machine is capable of building some £20 million worth of housing each year, or approximately 80 homes. The idea is obviously to have a number of these robots fabricated, so that Facit Homes can tackle bigger projects and win bigger customers, both nationally and internationally.
Bruce Bell is a mix of technologist, architect, and canny-but-cautious entrepreneur. The expansion and development of Facit Homes comes only after years of Bell analysing the market and figuring out how Facit might best address the opportunity without running the risk of a spectacular crash-and-burn scenario. The company’s biggest overhead has been the design and procurement of the new high-throughput fabrication systems. Bell will now be hoping to reap the rewards of that move.
■ www.facit-homes.com
Bell will be speaking about this exciting project at AEC Magazine’s NXT BLD conference on 25 June (www.nxtbld.com). By then, he will hopefully be able to show off prototypes for the fabrication machine and go into more depth on how digital fabrication of homes can be achieved without owning a factory and in such a way that, in the lean times, the costs don’t kill contractors.
Hardware
CPUs with integrated GPUs: prime time for CAD & BIM
AMD’s Ryzen Pro laptop processors come with powerful integrated graphics and pro graphics
drivers that are ideal for CAD & BIM. With HP and Lenovo now offering mobile workstations with these chips, it’s great news for architects and engineers
For years it’s been the accepted norm that if you use professional 3D CAD or BIM software, you need a workstation with a separate CPU (Central Processing Unit) and GPU (Graphics Processing Unit).
Discrete / dedicated pro GPUs (e.g. AMD Radeon Pro and Nvidia Quadro / RTX) not only provide that all-important graphics horsepower but are backed up by pro graphics drivers that are optimised and certified for hundreds of professional apps.
Both AMD and Nvidia have years of experience ironing out those niggly little issues that can arise in CAD / BIM tools, particularly those that rely on the OpenGL graphics API. They also help software developers add new features that bring performance and fidelity to the CAD viewport. This includes Order Independent Transparency (OIT), which renders transparent objects more quickly and accurately, and Vertex Buffer Objects (VBOs), which push more of the graphics processing onto the GPU to reduce the CPU bottleneck.
Historically, CPUs with integrated GPUs — where both processors are built into the same silicon — have not been considered a serious solution for 3D CAD or BIM — at least by this magazine.
The processors have almost exclusively been manufactured by Intel and have only really provided enough graphics horsepower to smoothly navigate relatively small 3D models.
The integrated GPUs have also had a limited feature set, and Intel has not demonstrated anywhere near the same level of commitment to pro driver optimisation and software certification as AMD and Nvidia have.
That’s all in the past though. New generation processors from both Intel and AMD are very different, and the integrated GPU should no longer be considered a second-class citizen for 3D CAD & BIM.
AMD is leading the charge. Last year the company introduced the AMD Ryzen Pro 7000 Series, a new family of laptop processors with integrated GPUs that can realistically compete with entry-level discrete GPUs.
What’s more, it has backed this up with the same pro graphics driver that AMD uses for its discrete AMD Radeon Pro GPUs.
In summer 2023, the chip family was made available in mobile workstations from two of the major OEMs: the HP ZBook Firefly G10 A and Lenovo ThinkPad P14s. At the time of writing, HP has just announced the HP ZBook Firefly G11 A, with the slightly improved AMD Ryzen Pro 8000 Series processor.
Intel has also made big strides in pro laptop processors with integrated GPUs. Much of this is down to its decision to take pro graphics seriously by launching a family of discrete Intel Arc Pro GPUs in 2022. The knowledge it has gained in graphics hardware and pro graphics drivers is now starting to trickle through to its Intel Core Ultra laptop processors with integrated Intel Arc GPU. Mobile workstations with these chips have just been announced and should start to ship soon.
Why should you care?
There are a few reasons why a single processor can be better than two.
The first is price. Buying one processor is cheaper, so this means the mobile workstation costs less. That’s the theory, at least, but it’s not always the case. In the US, for example, the HP ZBook Firefly G10 A (AMD version) is currently considerably cheaper than the HP ZBook Firefly G10 with Intel Core CPU and Nvidia RTX GPU. However, in the UK both machines cost around the same.
The second is size. Two processors and associated cooling need more space. In theory this means that a system purpose-built for a single processor can be smaller, thinner and lighter. However, there are currently no purpose-built single processor mobile workstations. Both the HP ZBook Firefly G10 A and Lenovo ThinkPad P14s (AMD) have Intel / Nvidia variants that use the same chassis, although the AMD versions are slightly lighter.
The third is memory, although this can be both a positive and a negative.
First let’s explain how GPU memory works in systems with and without discrete GPUs.
Integrated GPUs use the same system memory as the CPU, although a certain amount is ring fenced for the GPU in the laptop’s BIOS. Discrete GPUs, on the other hand, have their own dedicated on-board memory.
The benefit comes when you run out of dedicated GPU memory — perhaps when working with a very large CAD model.
By Greg Corke
With an integrated GPU, it’s not usually a big problem. The GPU can easily borrow more from system memory if it’s available, and temporarily allocates it as shared memory. (N.B. This is something AMD has refined over years of developing similar integrated processors in the Xbox and PlayStation gaming consoles). Because the memory is in the same physical location, it can be accessed very quickly, so real time performance in a 3D CAD tool will only drop by a few frames per second.
When a discrete GPU runs out of memory, it can have a big impact on 3D performance. Frame rates can drop dramatically, often making it very hard to re-position a 3D model in the viewport. While a discrete GPU can also ‘borrow’ some memory from system memory, it must access it over the PCIe bus, which is much slower.
The downside of integrated graphics in terms of memory is that it uses up some of your system memory. However, system memory can always be upgraded (GPU memory can’t), and some 3D CAD software vendors are also exploring ways to make their applications more memory efficient when running on a laptop with an integrated GPU. Traditionally, the exact same graphics data needs to be stored in two places, on both the CPU and the GPU, but with an integrated GPU you could hold a single copy, meaning less memory is used.
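The contrast between integrated and discrete GPUs overflowing their memory can be captured in a toy model. This is purely illustrative: the `effective_fps` function and its 20% / 80% overflow penalties are invented assumptions, not measured figures from AMD or Nvidia.

```python
# A simplified, illustrative model of GPU memory fallback. The relative
# penalties below are invented assumptions, not measured benchmark data.
def effective_fps(base_fps: float, model_gb: float, dedicated_gb: float,
                  integrated: bool) -> float:
    """Estimate frame rate once a model overflows dedicated GPU memory."""
    if model_gb <= dedicated_gb:
        return base_fps  # everything fits; no penalty at all
    overflow_ratio = (model_gb - dedicated_gb) / model_gb
    # Integrated GPUs borrow fast system RAM close by; discrete GPUs must
    # fetch every borrowed byte over the much slower PCIe bus.
    penalty = 0.2 if integrated else 0.8
    return base_fps * (1 - penalty * overflow_ratio)


# A 5.8 GB model against a 4 GB allocation, as in the motorbike test below
print(effective_fps(31, model_gb=5.8, dedicated_gb=4.0, integrated=True))
print(effective_fps(31, model_gb=5.8, dedicated_gb=4.0, integrated=False))
```

The shape of the result is the point: the same overflow that barely dents an integrated GPU’s frame rate can cripple a discrete one.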
The fourth is multi-core processing. Taking the HP ZBook Firefly G10 as an example, the AMD edition comes with a choice of 35-watt AMD Ryzen Pro processors with up to 8 CPU cores and 16 threads. However, when configured with an Nvidia RTX A500 GPU, the Intel edition is limited to a choice of 15-watt Intel Core ‘U Series’ processors with 2 Performance Cores, 8 Efficient Cores, and 12 threads.
On paper this means the AMD edition should perform better in highly multithreaded workflows such as ray trace rendering. However, there could also be an advantage in lightly threaded workflows. Even though CAD and BIM software is largely single threaded, other processes also need access to fast cores, including Windows, the graphics driver and storage, so there could be some contention.
It’s only when the HP ZBook Firefly Intel edition is configured without a discrete Nvidia RTX A500 GPU that you can get the more powerful 28-watt Intel Core ‘P Series’ processor with 6 Performance Cores, 8 Efficient Cores, and 20 threads.
The right level of performance
At this magazine, we often get asked: which GPU is best for CAD? To answer this question, it is important to understand three things.
First, that the 3D graphics demands of CAD and BIM software are relatively small, compared to real time visualisation tools like Enscape and Twinmotion.
Second, that 3D performance in CAD and BIM software is often bottlenecked by the frequency of the CPU, so a massively powerful GPU might not increase performance by much, or even at all.
Third, that it is essential not to simply consider benchmark scores as numbers on charts, where bigger is always better. Instead of being seduced by how much faster one GPU is compared to another, you should be asking how much performance you really need in order to work comfortably with the 3D CAD models you create. Spinning a huge CAD assembly at 500 frames per second (FPS) benefits no one. Most users will be able to reposition their models quickly and accurately at around 20 FPS.
Testing in the real world
To put all of this into practice, we tested a variety of 3D CAD assemblies from the popular mechanical CAD (MCAD) tool Solidworks 2024, using the HP ZBook Firefly G10 A as a test machine.
The mobile workstation was kitted out with a top-end AMD Ryzen Pro 7940HS processor with integrated Radeon 780M graphics and Radeon Pro graphics driver, along with 32 GB of DDR5 memory.
The HP system has a setting in the BIOS that allows the user to allocate a certain amount of system memory specifically for graphics or ‘video’. You can choose between 256 MB, 512 MB or 4 GB. HP refers to the 4 GB setting as ‘gaming optimised’, somewhat obscurely for a pro laptop in our opinion.
AMD Ryzen Pro: two processors, one piece of silicon
By default, video memory is set to 512 MB, but this is really meant for users of ‘office productivity’ apps. With CAD models typically needing gigabytes, rather than megabytes of GPU memory, we did our initial testing with the 4 GB ‘gaming optimised’ profile.
First up was a 295-component computer assembly which used 1.9 GB of dedicated memory, 0.5 GB of shared memory and 2.4 GB of total memory. At Full HD (1,920 x 1,080) resolution, we got a whopping 190 frames per second (FPS) in shaded with edges mode and 103 FPS with RealView enabled, an enhanced viewport render mode that adds realistic materials, environment reflections and floor shadows. Performance dropped to 107 FPS and 63 FPS respectively when plugging in an external 4K monitor, but still gave a very fluid and responsive viewport.
Next up was a complex 2,000-component motorbike assembly, which upped the memory demands to 3.9 GB (dedicated), 1.9 GB (shared) and 5.8 GB (total). Even though GPU memory usage went over the 4 GB that was allocated, the laptop still performed well.
In shaded with edges mode we got 31 FPS at FHD and 27 FPS at 4K, although this went down to 16 FPS at FHD and 14 FPS at 4K with RealView enabled. While not silky smooth, it still gave us an adequate modelling experience.
Even with a colossal 8,000+ component MaunaKea Spectroscopic Explorer telescope model (3.8 GB dedicated, 3.0 GB shared, and 6.8 GB total), the laptop managed a workable 10 FPS in shaded with edges mode at FHD.
Out of interest, we repeated the tests with the 512 MB video memory profile. On average, frame rates only dropped by around 17-20%. This re-confirms the point made earlier that when an integrated GPU runs out of dedicated memory, it’s not a major problem. At least, that’s the case with AMD processors. We have not tested Intel Core Ultra.
Cool operator
The HP ZBook Firefly G10 A comes with a 65W power supply, so only draws a maximum of 65W at any one time, which is shared across the entire system, including processor, memory, storage, display, WiFi, etc.
In general, we found the laptop was very cool in operation and didn’t throttle even under heavy load. We ran the Solidworks SPECapc benchmark on repeat for several hours, which stresses both CPU and GPU, and while the fan noise was noticeable, it wasn’t annoyingly loud.
Of course, it’s essential that the machine has good ventilation, so it can expel warm air out of the large vent on the underside. In short, do not rest it on your lap unless doing simple office tasks.
Power share
The HP ZBook Firefly G10 A dynamically allocates power to the AMD Ryzen Pro processor, depending on the task at hand. When running the Solidworks SPECapc graphics benchmark, for example, power usage on the GPU peaks at 50 watts. With the Cinebench 2024 CPU rendering benchmark, the CPU peaks at 51 watts.
With a total power budget of 65 watts, it doesn’t take a genius to work out that when both processors are being hammered at the same time, they must each consume less power. As neither can run to its full potential, this results in a drop in performance.
For example, when rendering in Cinebench and running the GPU-intensive Solidworks SPECapc benchmark at the same time, graphics performance fell by around 50% and CPU performance by about 13%.
Of course, this behaviour isn’t exclusive to processors with integrated GPUs. Mobile workstations with separate CPUs and GPUs must also play within fixed power constraints. For the Intel-based HP ZBook Firefly G10 it’s also 65W.
If you frequently multi-task, and 3D performance (in particular) becomes inadequate, then it’s probably worth considering a mobile workstation with a more powerful discrete GPU.
Conclusion
A few years ago, it was unthinkable that a mobile workstation with a processor with integrated graphics could be powerful enough for demanding 3D CAD workflows. But with its latest generation of Ryzen Pro processors and pro graphics drivers, that’s exactly what AMD has enabled. The HP ZBook Firefly G10 A looks to be a great option for CAD on the go, as does the Lenovo ThinkPad P14s (on paper, at least).
Of course, CAD software is also changing, and in the coming years expect to see 3D viewports that are much more graphics hungry. But while real time ray tracing at the click of a button might be an attractive proposition, it will require a significantly more powerful GPU than is currently available in an integrated processor.
Perhaps the most interesting thing about integrated graphics is what the future holds. Today, mobile workstations with integrated GPUs share the same chassis as their Intel / Nvidia counterparts, but if HP, Lenovo or indeed Dell, deem enough demand to be out there, could this lead to a new breed of mobile workstation with chassis that are smaller, thinner, and lighter? Might this type of processor also give rise to a new generation of micro fixed workstations, which not only take up less desk space, but offer incredible density in the datacentre?
It’s also exciting to imagine where this technology might go. Currently, the balance between performance and memory is about perfect for 3D CAD, but the processors are not powerful enough for real time visualisation, especially for the typically large models that are produced.
But as new generation processors are developed with even better integrated graphics, the ability to fall back on shared memory to address much larger datasets, could make the technology attractive to a much wider audience.
The HP ZBook Firefly G10 A features the AMD Ryzen Pro 7000 Series processor with integrated Radeon GPU (image credit: Snaptrude)