AEC September / October 2024



Building Information Modelling (BIM) technology for Architecture, Engineering and Construction

Geo whiz

Bentley Systems acquires Cesium to deliver open digital twins in context

Darwinism in AEC

Adapt and survive with expert systems

AI is hard (to do well)

The challenges of implementing GenAI

Snaptrude advances

BIM 2.0 tool evolving at pace

The Future of Software is Open

At Bentley, we believe that data and AI are powerful tools that can transform infrastructure design, construction, and operations. Software must be open and interoperable so data, processes, and ideas can flow freely across your ecosystem and the infrastructure lifecycle. That’s why we support open standards and an open platform for infrastructure digital twins.

Leverage your data to its fullest potential. Learn more at bentley.com.

editorial

MANAGING EDITOR

GREG CORKE greg@x3dmedia.com

CONSULTING EDITOR

MARTYN DAY martyn@x3dmedia.com

CONSULTING EDITOR

STEPHEN HOLMES stephen@x3dmedia.com

advertising

GROUP MEDIA DIRECTOR

TONY BAKSH tony@x3dmedia.com

ADVERTISING MANAGER

STEVE KING steve@x3dmedia.com

U.S. SALES & MARKETING DIRECTOR

DENISE GREAVES denise@x3dmedia.com

subscriptions MANAGER

ALAN CLEVELAND alan@x3dmedia.com

accounts

CHARLOTTE TAIBI charlotte@x3dmedia.com

FINANCIAL CONTROLLER

SAMANTHA TODESCATO-RUTLAND sam@chalfen.com

AEC Magazine is available FREE to qualifying individuals. To ensure you receive your regular copy please register online at www.aecmag.com

about

AEC Magazine is published bi-monthly by X3DMedia Ltd 19 Leyden Street London, E1 7LE UK

T. +44 (0)20 3355 7310

F. +44 (0)20 3355 7319

© 2024 X3DMedia Ltd

Industry news 6

Trimble launches Reality Capture platform, Laiout enhances automated floor planning tool, Resolve brings 2D construction data into VR, plus lots more

AI in AEC news 12

Howie unveils Copilot for architects, Zenerate launches design automation tool, Finch enhances AI design software, CrXaI links image generator to BIM, plus more

Graphisoft in the era of Artificial Intelligence 14

How does the developer of Archicad plan to put AI to work on behalf of its customers?

Geo whiz: Bentley acquires Cesium 16

We explore the surprise deal that promises to bring the worlds of digital twins and geospatial closer together

AI is hard (to do well) 20


Generative AI (GenAI) is extremely promising, but achieving tangible results is more complex than the hype suggests

Darwinism in AEC technology 28

To adapt and survive, the AEC industry should be focusing on knowledge-based expert design systems

Graphisoft strategy 32

In the shift from BIM to BIM 2.0, big changes are underway at Graphisoft

Snaptrude advances 34

The cloud-based BIM 2.0 software fleshes out its features in pursuit of victory over the current desktop BIM tools

Vectorworks futures 38

CEO Biplab Sarkar talks new features, moving from file to cloud databases, autodrawings, AR, openness in BIM, and AI

Pricing, licensing and business models 40

The rapid evolution in the way AEC software companies charge for licences and shepherd their users to boost revenue

The investment issue 44

With Autodesk dealing with an activist investor problem, what could be the knock-on effect for customers?

Content Catalog 47

Autodesk has integrated Unifi’s solution for managing and accessing design content into its cloud stack

How to choose a remoting protocol 49

Advice for delivering performant remote workstation deployments

Bentley Systems acquires 3D geospatial specialist and open data vanguard Cesium

Bentley Systems has acquired Cesium, the developer of an open platform for creating 3D geospatial applications.

Cesium develops 3D Tiles, an open standard for streaming and rendering large-scale 3D geospatial data and Cesium ion, a cloud-based platform for hosting, processing, and streaming 3D geospatial data.
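To give a flavour of how 3D Tiles supports streaming at scale: the format organises a massive scene into a spatial hierarchy of tiles described by a tileset.json manifest, so a viewer only fetches the level of detail it needs. A minimal, hand-rolled sketch of such a manifest (illustrative only, not a complete spec-compliant tileset; the .glb file names are invented):

```python
import json

# Minimal 3D Tiles tileset manifest: a root tile with one child, each
# with a bounding volume and a geometric error that drives
# level-of-detail refinement in the viewer.
tileset = {
    "asset": {"version": "1.1"},
    "geometricError": 500.0,  # coarse error at the top of the hierarchy
    "root": {
        "boundingVolume": {"box": [0, 0, 0, 100, 0, 0, 0, 100, 0, 0, 0, 20]},
        "geometricError": 100.0,
        "refine": "REPLACE",            # child tiles replace the parent
        "content": {"uri": "root.glb"}, # hypothetical coarse mesh
        "children": [
            {
                "boundingVolume": {"box": [0, 0, 0, 50, 0, 0, 0, 50, 0, 0, 0, 20]},
                "geometricError": 0.0,  # leaf tile: full detail
                "content": {"uri": "detail_0.glb"},
            }
        ],
    },
}

# A server would write this manifest alongside the tile payloads
manifest = json.dumps(tileset, indent=2)
```

The key idea is that the client compares each tile's geometricError against its screen-space error budget to decide when to refine, which is what lets a single tileset scale from a city block to a continent.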

The acquisition will help extend the reach of Bentley’s iTwin Platform, which is used by engineering and construction firms and owner-operators to design, build, and operate infrastructure.

According to Bentley, the combination of Cesium plus iTwin enables developers to seamlessly align 3D geospatial data with engineering, subsurface, IoT, reality, and enterprise data to create digital twins that scale from vast infrastructure networks to the millimetre-accurate details of individual assets.

A few days prior to the acquisition, Cesium launched its AECO Tech Preview Program, with the aim of improving workflows and capabilities that place architecture, engineering, construction, and operations (AECO) content in a 3D geospatial context. The company made two new technologies – Design Tiler and Revit Add-In – available for early access, to transform IFC and Revit files into 3D Tiles.

Meanwhile, turn to page 16 for more about the Cesium acquisition.

■ www.bentley.com ■ www.cesium.com

Intel refreshes Sapphire Rapids Xeon workstation processors

Intel has ‘refreshed’ its Sapphire Rapids workstation processors with the launch of the Intel Xeon W-2500 and Intel Xeon W-3500 Series.

While these new chips share much of the same architecture as the 2023 models—the Intel Xeon W-2400 and W-3400—they offer a few key enhancements. These include more cores at each price point, a slight increase in cache, and a modest boost in base frequency, though turbo frequencies remain unchanged.

At the top end, the new Intel Xeon w7-2595X boasts 26 cores, two more than its predecessor, the w7-2495X. Similarly, the Intel Xeon w9-3595X features 60 cores, a four-core increase over the w9-3495X from 2023.

With these additional cores comes an increase in base power. The new processors are rated between 20 W and 35 W higher than their predecessors, with the Intel Xeon w9-3595X reaching 385 W base with a max turbo power of 462 W.

However, according to Intel, despite this rise, its OEM partners – Dell, HP, Lenovo and others – have not needed to make any changes to the thermal management of their workstations, and the new chips will be ‘drop-in compatible’ with existing Sapphire Rapids workstations.

■ www.intel.com

Photorealistic 3D Tiles of Philadelphia from Google Maps Platform

Trimble Reality Capture platform service launches

Trimble has launched a new Reality Capture platform service designed to enable more effective collaboration and secure sharing of massive reality capture datasets captured with 3D laser scanning, mobile mapping and uncrewed aerial vehicle (UAV) systems.

One of the key aims is to make reality capture data accessible to more project stakeholders to support better informed decisions.

The Trimble Reality Capture platform service is available as an extension to Trimble Connect, the cloud-based common data environment (CDE) and collaboration platform.

The service handles point clouds and 360-degree imagery and works with terrestrial laser scanners like the Trimble MX series and Trimble X9, as well as data from third-party hardware.

The Trimble Reality Capture platform service is integrated with Microsoft Azure Data Lake Storage and Azure Synapse Analytics with a view to reducing the time it takes to ingest, store and process massive datasets.

“The new Trimble Reality Capture platform service enables our workforce to more easily access data and collaborate between the jobsite and office, creating additional efficiencies across our operations. Having a single place for designers, engineers and other stakeholders to review and inspect project data is a real leap forward,” said Christopher Pynn, digital leader at Laing O’Rourke for Eastern Freeway – Burke to Tram Alliance.

“This new service applies cloud technology in a new way for large data packages, allowing users to significantly scale performance and maximise data value,” said Boris Skopljak, vice president, geospatial at Trimble.

■ www.geospatial.trimble.com

Automated floor planning tool enhanced

Laiout has added several new features to its ‘fully automated’ floor planning solution, which uses proprietary algorithms and generative design to produce hundreds of different and regulation-compliant floor plan designs for office spaces.

The latest release includes more than 100 high-detail furniture blocks and typologies, AI-powered rendering, and integrated engineering tools.

For AI-powered rendering, select customers will be given access to the new feature, which generates high-quality, text-to-render images of generated floor plans ‘in seconds’.

For integrated engineering, new tools include image overlay capabilities that allow users to check ceiling designs and engineering elements seamlessly while generating new test fits. According to the company, users have full control over every aspect of their design.

■ www.laiout.co

Sterling boosts carbon estimating

Sterling, a specialist in cost and carbon estimating solutions for the engineering and construction industry, has formed a strategic integration with Building Transparency’s Embodied Carbon in Construction Calculator (EC3).

The integration enables Sterling DCS users who have access to EC3 to gather and assign A1-A5 carbon data to their construction estimate resources, while Sterling’s platform manages the entire carbon life cycle, including B1-B6 and C1-C4 (in the EN 15978 convention, A1-A5 covers the product and construction stages, B1-B6 the use stage, and C1-C4 end of life).

According to the company, this integration allows estimators and construction companies to accurately estimate both costs and carbon emissions across the full project life cycle.
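The arithmetic behind whole-life carbon is simple aggregation across those stages. As a rough illustration (not Sterling’s or EC3’s actual data model; the resource names and figures are invented):

```python
# Hypothetical stage breakdowns (kgCO2e) for two estimate resources.
# Stage codes follow the EN 15978 convention: A1-A5 product and
# construction, B1-B6 use stage, C1-C4 end of life.
resources = {
    "concrete_c30": {"A1-A5": 320.0, "B1-B6": 4.0, "C1-C4": 12.0},
    "steel_beam":   {"A1-A5": 540.0, "B1-B6": 0.0, "C1-C4": 25.0},
}

def whole_life_carbon(stages: dict[str, float]) -> float:
    """Total embodied carbon across all declared life cycle stages."""
    return sum(stages.values())

totals = {name: whole_life_carbon(s) for name, s in resources.items()}
print(totals)  # {'concrete_c30': 336.0, 'steel_beam': 565.0}
```

The value of an integration like this lies less in the sum itself than in sourcing consistent per-stage data for every resource in the estimate.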

Meanwhile, Sterling has developed an integration with Autodesk Construction Cloud, which allows project managers to import 3D models and associated 2D drawings, from Autodesk Build, Autodesk Docs, or BIM 360 for quantity take-off during the cost planning and estimation process.

■ www.sterling-dcs.com

B360 Secure gives control to ACC admins

B360 Secure is a new web-based application designed to help construction project managers manage complex permission structures within Autodesk Construction Cloud (ACC) and Autodesk BIM 360.

A centralised dashboard consolidates project-specific member and folder permission data for visibility and control.

■ https://b360secure.dgtra.app

Resolve brings 2D construction data into VR

Resolve has added an ‘App Portal’ to its collaborative immersive design / review software that allows users to view 2D construction project data from inside VR.

With the new feature users can ‘seamlessly’ access and interact with 2D drawings, dashboards, and issue databases without having to remove their Meta Quest VR headset.

According to Resolve, this empowers construction teams to identify and resolve potential issues more effectively by providing immediate access to critical information from 2D data sources. For enhanced collaboration, users can share and pin 2D screenshots within the virtual model for team members to reference.

The App Portal is launching with four key integrations: Autodesk Construction Cloud (ACC), Autodesk BIM 360, Procore, and Newforma Konekt.

“At Resolve, we are committed to providing innovative solutions that improve the construction process and we also believe 2D data still holds an important place in the industry. This new feature allows our users to fully leverage all their project data, leading to better decision-making and increased project efficiency,” said Angel Say, CEO Resolve.

“The enhanced integration of Procore into Resolve’s application empowers project teams to leverage Procore data like never before,” said Dave McCool, director of product, Procore.

“This integration supports project coordination and stakeholder engagement with critical project data in a whole new way, within Resolve’s virtual experience.”

■ www.resolveBIM.com

Lumion expands animated humans

Lumion 2024.2, the latest release of the BIM-focused architectural visualisation software, includes 103 new animated characters, more control over ray-traced shadows, and clearer material thumbnail previews. The new collection of animated characters features lifelike movements: walking, presenting, making phone calls, taking photos, playing sports, and more. Models are suited to a variety of settings, including hospitality, healthcare, childcare, and business.

■ www.lumion.com

WZMH Architects forms Giraffe

WZMH Architects has launched Giraffe, an independent software company aimed at enhancing efficiency, sustainability, and collaboration in building design and construction.

This launch builds on the foundation laid by sparkbird, the firm’s R&D lab established in 2017 to drive innovation in IoT (Internet of Things), design efficiency, modularity, and sustainability.

Giraffe takes this one step further by blending practical architectural and construction expertise with advanced AI and digital twin technology. It tackles critical challenges in the AEC industry, including fragmented design processes, inconsistent standards and documentation, workforce shortages, and the need for greater automation.

■ www.giraffe.software

iPhone LiDAR tool gets 2D plan export

Metaroom by Amrax, which allows users to scan building interiors with an Apple iPhone Pro or iPad Pro, can now generate 2D floor plans ‘in minutes’.

Plans can be exported in 2D DXF or PDF format, in addition to its existing 3D export functionality, which includes 3D DXF, IFC, USD (beta) and others.

Metaroom features an ‘automated reconstruction pipeline’ where LiDAR scans from the iPhone Pro or iPad Pro are uploaded to the cloud, and ‘true-to-scale’ 3D models are generated within seconds. Users can then use the web application, Metaroom, to enrich these 3D models with more information.

■ www.amrax.ai

NavVis targets small spaces with MLX laser scanner

ROUND UP

O&M acquisition

Glider, a specialist in asset lifecycle information management, has acquired EDocuments, a provider of digital operation and maintenance (O&M) manuals for the construction industry. Glider plans to integrate the EDocuments platform into its gliderbim platform

■ www.glidertech.com

PIM in Sinc

bimstore has launched Sinc, a new Product Information Management (PIM) platform developed specifically for manufacturers to handle product data on its journey through the organisation. It covers three main pillars: Centralise, Optimise, and Comply ■ www.sinc.io

Vectorworks viz

D5 Render has released a Live Sync plug-in for Vectorworks, so when changes are made to the CAD/BIM model they are immediately reflected in D5 Render’s visualisation environment. D5 Render also integrates with Revit, SketchUp, Rhino, Archicad and other tools

■ www.d5render.com

Preoptima Concept

Preoptima Concept, a tool for whole-life carbon assessments (WLCAs) and carbon optioneering for early-stage design, now features RICS-compliant reporting and new retrofit capabilities so existing buildings can be assessed alongside new ones

■ www.preoptima.com

Construction layout

Leica Geosystems, part of Hexagon, has launched Leica iCON trades, a new digital layout solution which features 6-degrees-of-freedom (6DoF) technology, previously exclusive to industrial measuring, along with an AI-enabled workflow ■ www.leica-geosystems.com

MEP estimation

Trimble has launched Estimation MEP, a new estimating solution designed to streamline the takeoff and estimating process for smaller M&E consultants in the UK. The integrated solution combines graphical takeoff, material pricing and labour and supplier pricing ■ https://mep.trimble.com

NavVis has launched the NavVis MLX, a dynamic handheld laser scanner designed for users of all skill levels, for confined or smaller spaces, or for shorter, more frequent scanning on site.

The NavVis MLX measures 60 × 19 × 15 cm, weighs 3.6 kg, and is equipped with an ‘easy grip’ handle and a supportive harness for enhanced comfort during use. It is built to work hand-in-hand with the NavVis VLX wearable dynamic scanning system in the field and NavVis IVION point cloud processing software in the office.

The device captures 3D data with a 32-layer lidar sensor in combination with SLAM software augmented by Visual Odometry (VO) to deliver what NavVis describes as exceptional point cloud quality for a handheld device.

Four cameras positioned on top of the system take high-resolution 270° images when resting on the harness and 360° images when lifted above the head.

The device can capture 640,000 points per second with an operational range of up to 50m. According to NavVis, the accuracy of the point cloud is 5mm in a dedicated test environment of 500m².

To help ensure complete coverage in real time, users can monitor scanning progress on the built-in 5.5-inch 1,920 × 1,080 resolution touchscreen.

■ www.navvis.com/mlx

Enscape Impact launches in beta

Chaos has introduced Enscape Impact (beta), a real-time energy modelling add-on for Enscape 4.1, the latest release of its BIM-focused visualisation software. Developed in partnership with IES, Enscape Impact is designed to make building performance analysis more accessible for architects and designers, by simplifying the analysis process.

With Enscape Impact, architects and designers can quickly view the energy performance of their building and optimise it within their usual design workflow. The software is integrated with Archicad, Revit, Vectorworks, SketchUp and Rhino, but only on Microsoft Windows, not on Mac.

Enscape Impact is not intended to provide building performance certification directly. Instead, it aims to offer real-time feedback that brings architects ‘close to certification-level accuracy’. According to Chaos, what used to take several hours or even days can now be done in minutes.

The software allows users to calculate and benchmark key performance metrics, such as peak loads and total carbon emissions. The data is presented within Enscape’s real-time visualisation environment, making it easier to comprehend and communicate the impact of design decisions.

There’s also a dials panel with ‘easy-to-read’ charts and diagrams that display how geometry adjustments impact building performance.

According to Chaos, the unified energy analysis and visualisation workflow is not only important for architects, but also for engineers, significantly cutting down the cost and time required to bring a poorly performing project back into meeting sustainability goals.

■ www.enscape3d.com/impact

AI NEWS BRIEFS

AI learning

Civils.ai has launched a new online training course, the ‘Construction AI Specialist’, which explains the fundamentals of how AI applications work and how users can build their own Construction AI tools from scratch, with no-code ‘AI Agent’ tools and with Python scripts

■ www.civils.ai

Safety risk analytics

Highwire has added new AI-powered safety risk analytics features into its platform for capital project construction and operations. The new capabilities are said to provide ‘deep insights’ into contractor risk through advanced AI analysis of safety documentation

■ www.highwire.com

Automatic tagging

Loci is developing a machine learning API that automatically tags 3D assets, making it easier to search, manage, and use 3D content. The software is applicable to many sectors, but in AEC it can be used to automatically categorise 3D assets in line with BIM IFC standards

■ www.loci.ai

Map assets with AI

Simerse is developing an AI platform to view and manage infrastructure. The platform uses AI to process 360° images captured by a low-cost vehicle-mounted camera and maps asset inventory for field assets. For collaboration, imagery, asset maps, and data records can be shared ■ www.simerse.com

AI from Giraffe

Giraffe, the independent software company spun out of WZMH Architects, is developing two AI-powered tools: AiM (Ai Massing), for the rapid generation of massing models; and PLAiNNED for generating building code-compliant layouts for complex building components

■ www.giraffe.software

AI CONTENT HUB

For the latest news, features, interviews and opinions relating to Artificial Intelligence (AI) in AEC, check out our new AI hub

■ www.aecmag.com/ai

Customisable AI Copilot for architects launches in beta

Howie, a platform billed as an ‘AI copilot for architects’, has launched in beta.

The AI platform uses Large Language Models (LLMs) to make sense of and recognise patterns within large sets of unstructured data, often buried in reports, emails, images, and drawings.

Howie analyses extensive archives of each individual customer’s past and present projects to create an ‘easily searchable Knowledge Hub’. Each AEC firm decides the keywords, and Howie then finds, auto-categorises, and maintains the latest versions.
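The firm-defined-keywords workflow can be pictured with a toy categoriser. This is purely illustrative — Howie’s actual pipeline uses LLMs over unstructured data and is not public; the keyword sets, file names, and categories below are invented:

```python
from collections import defaultdict

# Hypothetical keyword sets an AEC firm might define
FIRM_KEYWORDS = {
    "facade": ["curtain wall", "cladding", "glazing"],
    "structure": ["column", "slab", "rebar"],
}

def categorise(text: str) -> list[str]:
    """Return every category whose keywords appear in the text."""
    text = text.lower()
    return [cat for cat, words in FIRM_KEYWORDS.items()
            if any(w in text for w in words)]

# Invented document archive
docs = {
    "spec_014.txt": "Curtain wall glazing schedule, level 3",
    "site_note.txt": "Rebar inspection for slab pour",
}

# Build the searchable index: category -> matching documents
index = defaultdict(list)
for name, body in docs.items():
    for cat in categorise(body):
        index[cat].append(name)

print(dict(index))  # {'facade': ['spec_014.txt'], 'structure': ['site_note.txt']}
```

An LLM-based system replaces the literal substring match with semantic matching, which is what lets it cope with drawings, emails and scanned reports rather than clean text.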

“Knowledge management is an evergreen pain point—whether it’s an old school search for a document, dealing with outdated information, trying to read and decode complex construction drawings or losing key expertise when people leave,” says Ewa Lenart, co-founder and CEO, Howie.

“We’ve developed an AI Copilot that seamlessly integrates with all your data. It knows everything your company knows. And never retires. Fully secure.”

■ www.howie.systems

AI-powered design automation tool generates building and site plan options

Zenerate has launched Zenerate App, an AI-powered design automation software designed to quickly generate and identify optimal design schemes, and eliminate the need to redraw floor plans to fit a unit mix.

With a few inputs, architects, developers and brokers can use the software’s AI engine to generate various building and site plan options in real time that meet specific project objectives.

Users can optimise designs by maximising floor area ratio or density, or specify a unit mix, as well as quickly testing diverse massing, building layout and parking options (podium, structure, surface).

Designs can then be edited, by dragging and dropping residential units, stairs, elevators, retail/office space, drive aisles, parking stalls, etc.

The software can also be used to assess financials (net operating income (NOI), construction cost, project cost, yield on cost or residual value). Finally, it can generate PDF reports, export data in Excel, or download floor plans in CAD or Revit format to further develop the design.
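Those financial metrics are standard development pro-forma figures. As a rough sketch of how the headline numbers relate (not Zenerate’s actual formulas, and the example figures are invented), yield on cost is net operating income divided by total project cost:

```python
def net_operating_income(gross_rent: float, vacancy_rate: float,
                         operating_expenses: float) -> float:
    """NOI: effective gross income minus operating expenses."""
    effective_gross_income = gross_rent * (1.0 - vacancy_rate)
    return effective_gross_income - operating_expenses

def yield_on_cost(noi: float, project_cost: float) -> float:
    """Stabilised yield on cost: NOI divided by total project cost."""
    return noi / project_cost

# Invented example: $2.4m gross rent, 5% vacancy, $0.9m expenses,
# $24m total project cost
noi = net_operating_income(2_400_000, 0.05, 900_000)
print(round(noi))                                 # 1380000
print(round(yield_on_cost(noi, 24_000_000), 4))   # 0.0575
```

Comparing that yield against market capitalisation rates is how tools like this rank one generated scheme against another.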

■ www.zenerate.ai

Finch enhances control and precision in AI design tool

Finch, the architectural design tool that generates optimal building plans based on parameters and constraints set by the architect, has introduced a new AI-powered floor plate generator, ‘Generate Building Plan 2.0’.

Generate Building Plan 2.0 provides more control over the algorithm, as Jesper Wallgren, Finch CEO explains, “Instead of specifying an exact number of stairwells or units per stairwell, you can now explore a range — say, three to five stairwells. The algorithm then finds the optimal solution within that range, giving you more flexibility and a broader search space.

“This approach also extends to unit distribution per stairwell. You can now define a range of units per stairwell that aligns with your design intentions, such as avoiding corridor buildings where they’re not suitable. This level of control ensures that the algorithm’s output is closely aligned with your vision from the outset.”

Finch has also focused on improving transparency, to help architects understand how the algorithm arrives at a solution, keeping them informed at every step of the process. “If the unit mix you’ve chosen doesn’t fit the building’s storey, the algorithm will now inform you why,” explains Wallgren. “Perhaps the units are too large, or there are too many stairwells. This real-time feedback allows you to adjust your inputs and understand the trade-offs involved, rather than just rolling the dice and hoping for the best.”

Generate Building Plan 2.0 is also more accurate than its predecessor, and changes have been made so the algorithm’s suggestions are not just theoretically possible but practically viable, taking into account project criteria and compliance.

■ www.finch3d.com

CrXaI AI image generator links to BIM

CyberRealityX has launched a Windows desktop version of CrXaI, its AI text-to-image generator designed to produce high quality, realistic architectural images without having to use viz software to define materials, textures or lighting.

The software is intended for early-stage design, allowing users to explore different ideas for interior or exterior scenes using nothing more than a text prompt.

With the new desktop tool, users can integrate CrXaI with any desktop CAD or BIM software, such as Revit, SketchUp or Archicad. Users simply position their 3D model in the BIM software’s viewport, then from the desktop CrXaI software they can screen-grab that view directly and use it as the basis for the visualisation.

To help get users started, the software comes with a set of pre-written text prompts, which describe the desired output. These include building type, season or room type. Prompts can be edited to suit, or written from scratch.

■ www.crxai.app

Enscape 4.1 launches with AI Enhancer

Enscape 4.1, the latest release of the BIM-centric real-time visualisation tool, includes Chaos AI Enhancer, which is designed to elevate the visuals and export better-looking assets, such as people and vegetation.

Many of Enscape’s people and vegetation assets are produced in-house, keeping to a strict polygon budget that allows users to place multiple assets without experiencing a loss in performance.

Chaos AI Enhancer is designed to elevate the visual quality of these assets using an AI engine that identifies which pixels should be enhanced.

According to Chaos, these AI-optimised, realistic assets are crucial for creating vibrant scenes that help clients understand the design intent faster. Trees, flowers, people and more not only add emotions but are important to highlight perspectives and spatiality, the company stated.

Enscape 4.1 also includes new artistic visual modes that add the ability to create images simulating pencil or watercolour drawings. The new visual styles are available for screenshots, batch rendering, and video exports.

■ www.enscape3d.com

Graphisoft in the AI era

How does Graphisoft plan to put AI to work on behalf of its customers? Martyn Day speaks to product development execs from the firm to hear their views

In the past, individual brands under the Nemetschek umbrella (of which there are 13 in total, including Graphisoft, Allplan, Bluebeam, Vectorworks and Solibri) were largely left to themselves when it came to developing new solutions and exploring new technologies.

That situation has changed over the past five years, with an increasing focus on sharing technologies between brands, combining those brands and developing workflows that intersect them.

In response to growing interest in artificial intelligence (AI), Nemetschek created an AI Innovation Hub in May 2024, with the goal of driving AI initiatives across its entire portfolio and involving partners and customers in that work. The stated intention is not just to accelerate product development, but also to test and explore the deployment of AI tools such as AI Visualizer (in Archicad, Allplan and Vectorworks), 3D drawings (part of Bluebeam Cloud), and the dTwin platform. Nemetschek has presented this strategy as AI-as-a-service (AIaaS) to customers and partners.

Explaining Graphisoft’s approach to AI, CEO Daniel Csillag tells us, “There will be new services where we charge a subscription rate and not a perpetual license. Of course, inside of Graphisoft, we’re also trying to use more and more AI – for example, in customer service, such as tools that alert us to a certain user behaviour. Since this might give us an indication that the customer is unhappy, we can reach out proactively to them,” he says.

“Currently there’s a lot of brainstorming going on. I encourage the team to not talk about step-by-step improvements, but also to consider a jump forwards. I’m telling my team: be creative and sometimes disruptive.”

With those words probably ringing in their ears, I also spoke with Márton Kiss, vice president for product success, and Sylwester Pawluk, director of product management. Both executives have active roles to play in the shaping of Graphisoft’s core technologies.

The first Graphisoft AI product was AI Visualizer, a rendering tool that uses Stable Diffusion to produce design alternatives from simple 3D concepts. Graphisoft’s original hypothesis was that users were afraid of putting their IP on the cloud, so the company created a desktop, on-device version, which came with limitations.

“From actually talking to users, nobody is afraid of the cloud now, so with Archicad 28 (currently in technology preview), we have a cloud back-end for AI Visualizer,” says Kiss.

“We have also been experimenting with AI Visualizer Live, which is a camera looking at physical models, simple building blocks, rendered by an AI text prompt. We are thinking about how we productise this, as it’s a very practical way of using existing generative AI for concept iteration,” he says.

Catalogues and autodrawings

Elsewhere in this issue of AEC Magazine we discuss how the quality of downloadable manufacturing content can be a hit-or-miss affair. It’s very rare for manufacturers such as Velux or Hilti to create full libraries of BIM objects that designers can add to their models.

“We have been experimenting with AI Visualizer Live, which is a camera looking at physical models, simple building blocks, rendered by an AI text prompt. We are thinking about how we productise this, as it’s a very practical way of using existing generative AI for concept iteration”
Márton Kiss, vice president for product success, Graphisoft

From conversations with Pawluk, it’s clear that Graphisoft is looking to tackle this problem. The great thing about specification books is that they have a rigid formula and, through optical character recognition, are machine readable. They might include 2D and isometric drawings, too. So Graphisoft is experimenting with applying AI to read these catalogues and conjure up GDL-based 3D BIM objects that contain all the manufacturer’s metadata (N.B. GDL objects are parametric objects used within Archicad).

As Pawluk explained: “With all generative models, what you need is a large amount of data and good-quality data. What we need first is a large number of good-quality GDL components to train the AI. In our development, the models, the speed of creation and complexity is rapidly increasing. We aren’t working with any specific vendors yet. It’s still in the exploratory stages at the moment.”

On the subject of autodrawings, we knew last year that Graphisoft was exploring this area, as well as its reverse – generating 3D BIM models from 2D drawings. Kiss confirms that building a bridge here is a very important part of the workflow.

Executives at Graphisoft also like the idea of assembly-based modelling, where pre-configured spaces are created, instead of modelling with walls, doors and windows. “If you have someone creating those assemblies for you, or generated in the right way, a BIM model, from assembly, is actually going to be a very well-structured BIM model,” says Pawluk.

“Then later in the design, when producing the drawing, it’s already got a structure to work from. AI would be brilliant then.”

Generic or personal AI

From our discussion, it was clear that Graphisoft is looking at developing AI capabilities that would benefit all designers, like AI Visualizer.

However, the company also understands the need for individual customers to train AI on their own models and past projects, which are not shared to train the generic AI.

Kiss comments: “This is where you start, from building your own ‘assemblies’ – rectangles with pre-loaded details essentially – and then use them as building blocks for hospitals, schools, whatever, based on what you have created in the past.”

While Kiss and Pawluk could not go into specifics of what such an app would do, they concede that the group has already trialled collaborative working and that this work ended up being presented to the Nemetschek board.

“The whole concept around this was to make sure that we can deliver those proof of concepts and productise initiatives very quickly. I think that has been proven on this occasion,” says Pawluk.

■ www.graphisoft.com

Connecting digital twins and geospatial

In mid-September, Bentley Systems announced it had acquired Cesium, an industry-cherished 3D geospatial platform with an ecosystem of open standards, including CesiumJS and 3D Tiles. Martyn Day talked with Patrick Cozzi, previously of Cesium, and Julien Moutte, Bentley Systems CTO, to learn more about the deal that promises to bring the worlds of digital twins and geospatial closer together

It’s rare that a piece of industry news arrives in my inbox and prompts me to do a double-take. But that’s exactly what happened on 6 September, with the news that Bentley Systems had acquired Cesium. Bentley Systems is the infrastructure giant in the AECO space and is currently undergoing a changing of the guard, as the brothers who founded the company take a step back and a new executive team is installed.

That can be a distracting process for any organisation. However, new Bentley CEO Nicholas Cumins has made the choice to forge ahead with an industry-significant acquisition, probably the most impactful since Google sold SketchUp to Trimble.

Because Bentley has always strongly focused on infrastructure, geospatial has played an important part in the data frameworks that its solutions provide. Decades ago, the company was the first CAD software firm to integrate a world-based coordinate system in its file format, because big infrastructure projects are always located somewhere and can span thousands of kilometres.

‘‘ We are transitioning from files to a data-centric world, and customers need to have confidence and trust in how the data is going to be handled. We believe in a true open approach, which is based on open standards, open source and open APIs ’’

Julien Moutte, Bentley Systems CTO

Bentley has had agreements with Esri, in addition to offering its own mapping and cadastre-driven tools. However, for the last few years, it has obsessively developed and also acquired technologies relating to 4D construction, reality capture, digital twins and asset management.

Bentley Systems is not the only AEC-focused software firm to understand the importance of geospatial. When Andrew Anagnost took the CEO job at Autodesk in 2017, he didn’t waste time in forging a partnership with Esri and starting to back-fill the company’s BIM and civils products with GIS integration. Autodesk also actively chased Bentley Systems’ Department of Transport clients and acquired Innovyze in 2021 to expand into water treatment, another Bentley-dominated market. For over five years, Esri and Autodesk have been developing hooks between their products, enabling the movement of BIM and GIS data within their ecosystems.

Enter Cesium

So where does Cesium fit into all this? For many reasons, the company hasn’t previously seemed like an acquisition option. In much the same vein as Robert McNeel and Rhino, Cesium built a business based on community and established a huge network of around one million users, as well as about 10,000 developers, with powerful yet relatively low-cost GIS integration/distribution tools.

Cesium has a deep belief in the open source approach to software, putting a lot of effort into the development of CesiumJS and its streamable, interactive 3D Tiles open format, which supports vector, 4D, 2D, 3D, point cloud and photogrammetry data. This is free for both commercial and non-commercial use and has been used to map subsurface, surface, airborne and space environments.

Cesium’s formats have been embraced by the likes of Google and NASA. Its data structures, meanwhile, also support other open formats, such as glTF for model info (asset information), as well as the latest advances in GPU acceleration. While the technology is a perfect fit for Bentley’s digital twin strategy, the culture fit with Bentley’s corporate customer base was, on the face of it, incongruous.
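To give a flavour of that openness, a 3D Tiles dataset is described by a small JSON manifest (`tileset.json`) that points at streamable content such as glTF models. A minimal, illustrative sketch follows; the bounding region and file name are invented for this example:

```json
{
  "asset": { "version": "1.1" },
  "geometricError": 512,
  "root": {
    "boundingVolume": {
      "region": [-0.0005, 0.8987, 0.0005, 0.8990, 0.0, 120.0]
    },
    "geometricError": 64,
    "refine": "ADD",
    "content": { "uri": "building.glb" },
    "children": []
  }
}
```

The `geometricError` values drive level-of-detail selection as a viewer zooms in, while the `region` bounding volume – [west, south, east, north, minimum height, maximum height] in radians and metres – is what ties every tile to a true geospatial location.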

Around the time of the 2020 Open Letter to Autodesk, I had a conversation with Keith Bentley, then Bentley Systems CTO, in which we discussed whether open source applications such as BlenderBIM were the way to go. From that chat, I was left in little doubt that this was not Keith Bentley’s mindset, since he was strongly of the belief that programmers’ efforts should be rewarded.

However, he was more inclined to agree that open source data was perhaps a way forward, because for customers, the keys to data must remain in their hands. The whole premise of digital twins requires the assembling of data from all sorts of applications, in many file formats. In that respect, data trapped in proprietary silos is likely to be an industry-wide problem.

In 2021, Bentley opened the iTwin.js library with source code hosted on GitHub and distributed under the MIT licence. Since then, Bentley has been seeking wider adoption of its iTwin data wrapper.

So, while I was pondering Cesium’s culture fit, I hadn’t considered Bentley’s cultural change. Its newfound belief in open data isn’t just a passing fad. It really is something that the company intends to deliver on.

When considered from that angle, acquiring Cesium takes Bentley closer to its goal of having open format technology that covers all the many different sources of data necessary to build an infrastructure digital twin. That could be scan / photogrammetry data, BIM data, GIS data, asset tagging and management data.

Of course, Cesium offers far more besides. Its super-fast 3D pipeline enables huge datasets to be visualised locally in Unreal Engine, on the web and on mobile, in real time. The Cesium ecosystem also includes developers who may want to utilise Bentley’s wide array of SaaS analytical and simulation tools or BIM capabilities, and existing customers will be able to view their digital twins in geospatial context for analysis and service planning.

In conversation

Cesium CEO Patrick Cozzi outlined for me his view of the deal. “Cesium is an open-source project doing massive scale 3D geospatial, open source, open standards, large models with semantics on the web. When Bentley found Cesium eight or nine years ago, they opened up our eyes. They said, ‘Hey, Patrick, this is more than geospatial. This can do the built environment. This can do infrastructure’,” he said.

Digital twin of London’s skyline created by combining Bentley’s iTwin and Cesium’s 3D geospatial technology

He continued: “When we brought 3D Tiles to OGC (The Open Geospatial Consortium) to make an OGC community standard, Bentley was part of that original submission team. This is going back to 2017. Bentley had ContextCapture, now called iTwin Capture, which was some of the first photogrammetry ever put into Cesium. We have had a long relationship with Bentley. We’ve known the Bentley brothers, the leadership team of today, and we think we share a lot of the same DNA. We like Bentley’s interest and commitment and authenticity around open source, open standards, and building a platform which uplifts the ecosystem. We know that that is Bentley’s belief and it’s where Bentley wants to take the future.”

A few days prior to announcing the acquisition, Cesium published a press release regarding its BIM integration technology preview. At that time, AEC Magazine reached out to the company, but they couldn’t answer, because they were busy finalising the deal. Cozzi added: “As you saw just before the agreement, we are doing more in AEC.” I suspect that Bentley has a raft of technology to assist in that.

Bentley CTO Julien Moutte then gave me the Bentley perspective: “We are transitioning from files to a data-centric world, and customers need to have confidence and trust in how the data is going to be handled. We believe in a true open approach, which is based on open standards, open source and open APIs,” he said.

“The iTwin technology stack is going to continue using more and more open standards. We are looking at creating and enriching the standard ecosystem with some new capabilities around BIM collaboration. We have discussed the limitations we’ve seen in IFC, for instance, and we’ve created iModel technology, and I think one thing we captured in the room at NXT BLD is that not many know about (Bentley) iModels,” he continued.

“We need to learn how to be better in creating an ecosystem, a community. And I think that’s one of the things that Patrick and his team are going to bring to us. We want to create a thriving developer ecosystem on top of this platform, which is built around open data.”

Cesium has about 60 employees, based in Philadelphia. They are mainly programmers, with about ten people in sales – but sales are mainly inbound and focused on products such as its Cesium ion SaaS 3D geospatial data hub, together with large partnerships and hosting. The Cesium offices are located not far from Bentley’s Exton headquarters. Joining forces with Bentley, Cozzi said, will allow the company to achieve more industry impact.

‘‘ The huge Cesium ecosystem is consuming Cesium geospatial data. If they are bringing building models in, they don’t have the full detail of what those building models contain. It’s mostly geometry. There’s a lot of value we could deliver by providing access to the metadata of those building models ’’

Julien Moutte, Bentley Systems CTO

Moutte also explained what he thinks Cesium brings to the Bentley tech stack. “There are multiple facets here. Obviously, I believe that our users, the engineers of the world, would benefit greatly in having more context when they do their work, and we believe that the best way to provide this to them is by leveraging the Cesium technology.

“Today, we’re trying to do this with the iTwin platform, but I think there is a better way to do this when it comes to geospatial data, bringing in Google 3D Tiles, making sure that we can combine and bring all of that data in an environment in an aligned way, using the geospatial coordinates. Whether it’s underground, buildings or engineering models, it’s about more value to our users,” he said.

He concluded: “The huge Cesium ecosystem is consuming Cesium geospatial data. If they are bringing building models in, they don’t have the full detail of what those building models contain. It’s mostly geometry. There’s a lot of value we could deliver by providing access to the metadata of those building models. We have capabilities to bring in geospatial analysis and lots of simulation, whether it’s flooding, mobility or structures, and visualising all of this in the context of geospatial is also very valuable. It’s a massive opportunity.”

Cozzi added: “BIM could learn a lot from these large GIS models. 3D Tiles doesn’t just stream the geometry and metadata for terrain and photogrammetry, but can also do that for the built environment. Up to this point, a lot of that has been done in OBJ files or FBX, which are standard graphics files. We thought we could help them more, so have built some new pipelines to preserve more of that semantic data, especially the hierarchical nature.”

So, for example, Cesium built a pipeline for IFC, and also into Revit, he explains. “Then we run our super-smart algorithms on the geometry and metadata, and frankly, the metadata might actually be heavier than the geometry. Then, in a domain-specific way, we slice and dice that into 3D Tiles, so that it’s streamed very effectively over the web, placed within a geospatial context,” he said.

“So where do we go next? Bentley brings a ton of domain knowledge about infrastructure and AEC, and we keep saying ‘better together’.”

Conclusion

Cozzi will now head up the iTwin platform developed within Bentley. While the technology benefit has been clearly explained, it also seems that Bentley hopes to preserve and learn from the open-source culture that Cesium has embodied, with its dedication to openness, performance and solving problems for thousands of users in many different industries.

While I still have my doubts about the potential size of the market for digital twins within the building sector – insofar as there are so many buildings, but the cost of making a twin is high – when you scale up to the geo level and start talking about managing assets like roads, rail, national grids and power stations, then digital twins and asset management start to make a lot more sense.

The acquisition of Cesium is a major coup for Bentley, which will not only help it in its historic markets, but also introduce it (literally) to a whole new digital world.

■ www.bentley.com ■ www.cesium.com

“AI is hard (to do well)”, created in Midjourney by Arka Works

Generative AI (GenAI) is extremely promising, but achieving tangible results is more complex than the hype suggests. Keir Regan-Alexander, architect and founder of creative consultancy, Arka Works, highlights the challenges of implementing GenAI and offers practical strategies for professional practice

Have you noticed how AI is often written about in two dramatically different ways? a) it’s a silver bullet or b) it’s a scam.

On the one hand, it’s presented as a Frankensteinian invention that is changing the employment landscape for many knowledge workers. On the other, people are becoming exasperated by hyped-up claims and overpromises being pushed by “corporate grifters”.

But what if neither extreme is wholly true and GenAI proves to be … c) worth the effort and much liked by the employees who use it in the real world?

This technology is beguiling because it presents itself as friendly and simple. Ask a question and, like popping corn, it comes noisily to life. This sweet, surface-level experience is why there is such a widespread conceit on LinkedIn that using Generative AI to solve all your business problems is easy.

Here’s the formula:

You post an eye-catching image and add the hook “I just solved [insert challenging task] in 5 minutes. RIP [insert profession]”

People see these posts and they conclude that a whole host of other ideas must therefore also be possible. This is the peak of inflated expectations.

They try to emulate the idea at work, only to quickly discover that the claim was only “sort of” true. When real-world constraints and quality standards are applied to the recipe, the method falls short in some critical way.

• Cue feeling disheartened.

• Cue labelling Generative AI as all hype and writing it off for anything useful.

• Cue pulling up the drawbridge of curiosity and deciding we don’t need to be laser-focused on AI after all.

The trough of disillusionment is reached, and a microcosmic version of the hype cycle is complete.

As we approach two years since GPT-3 went mainstream, we remain at the beginning of the very first innings for GenAI and it’s not productive to try and rush to definitive conclusions about what it all means just yet.

It’s also not productive to spread claims that AI is easy to do well; it’s not. Getting high-quality repeatable results across your org using AI is hard.

Adding yet more software processes to your stack is hard.

Remembering you can do something differently when you’ve been doing it the same way for years, is hard.

The human in the loop

The prospect of a mature AI adoption landscape across industry-wide settings - where complete workflows are delivered end-to-end - appears at present to be a long way off. Not least because very few existing organisations have their project and operational data structured and prepared for such change.

But while regular businesses are not really ready, this is what large AI companies are planning for:

“OpenAI recently unveiled a five-level classification system to track progress toward artificial general intelligence (AGI) and have suggested that their next-generation models (GPT-5 and beyond) will be Level 2, which they call “Reasoners.” This level represents AI systems capable of problem-solving tasks on par with a human of doctorate-level education. The subsequent levels include “Agents” (Level 3), which can perform multi-day tasks on behalf of users, “Innovators” (Level 4) that can generate new innovations, and finally “Organisations” (Level 5), i.e. AI systems capable of performing the work of entire businesses. As we progress through these levels, the potential applications and impacts of AI will expand dramatically.”

Stephen Hunter, Buckminster AI

The “Level 4 Innovators” moment appears to be the point at which things will start to feel very different and this projection suggests we are 4-5 major development cycles from it.

While I believe such end-to-end workflows are more probable than not in the coming years, I’m doubtful that widespread automated workflows and decision-making without human oversight at critical steps would be a desirable outcome for anyone, even if it were possible. Dramatic shifts in technology over very short time periods can be ruthlessly inhumane when driven by purely utilitarian priorities - just look back at the Industrial Revolution and what became of the Luddite movement for reference (www.tinyurl.com/luddite-AEC).

Indeed, the “human in the loop” has been essential to every successful implementation of GenAI that I’ve seen in professional practice. No high-value professional task can happen without sound judgement, discretion and (seldom mentioned) good taste. People also take responsibility for outcomes - as Ted Chiang points out, the human creative mind expresses “creative intention” at every moment (www.tinyurl.com/Chiang-AEC).

Until we see evidence that traditional “Knowledge Work” businesses can thrive and compete without this essential ingredient, then GenAI’s job will be to provide scaffolding that helps to prop up what we already do, rather than re-build it entirely.

One process at a time

In the last two years, the hype around AI has risen to unreasonable levels and every software product has been slathered in AI that no one asked for and that rarely works as desired.

But rather than trying to reinvent the wheel - with wholesale departmental and budget changes caused by implementing a million new AI apps, or by hiring a team of software developers - my recent work with professional practices suggests that you’ll likely find it more impactful to refine just one humble spoke of the wheel at a time, building momentum with incremental but lasting adaptations that support what you already do well.

When you focus on smaller, yet pivotal seeds of contribution from AI and put it in the many hands of your team, rather than mandating prescribed use from “on high”, you can strike a powerful balance: boosting your existing way of working while maintaining individual control, sound judgement and freedom over each productive step. Pretty soon, you will see new and varied species of work evolving across the office.

These adjustments to ‘chunks’ of work need to be made with vocal and transparent engagement about cultural alignment with this new technology.

The formation of an ethical framework for responsible use within the business - one that sets guardrails for adoption - is also needed, alongside new quality measures to keep standards from slipping, which is always a risk when you make something faster and easier.

You also need a programme for new skills training so that you can equip your team with the resources to go forth and explore for themselves. If Knowledge Work is going to change in the coming years, providing new skills and tools as broadly as you can seems the fair thing to do.

Proving product-market-fit

If you’re doing it right, the productive benefits of AI will be felt by the business but largely flow via the employees who directly use it - employees may find they achieve more in the same time, to a higher level of quality and even with a greater sense of enjoyment in their work.

This is how I find it and I know many more who increasingly feel the same. Anecdotally, I know many employees now hold personal licences to various consumer-facing GenAI tools that they choose to pay for out of their own pocket each month because they bring such high levels of utility and improve their working lives directly.

That’s one of the purest examples of “product-market fit” that you will find, and it wouldn’t be happening unless people had worked out how to really drive value from these new techniques. Imagine paying for software you use for work out of your own pocket, just to improve your working life.

In these cases, they mostly don’t tell their employers - and I know this because people discuss it openly.

It’s possible that many leaders are simply turning a blind eye because it works for them too. Or this is an outcome caused by a kneejerk AI policy, written over a year ago, calling for the total prohibition of AI on company equipment. In these cases, policies are commonly being ignored, and this is the worst outcome of all, because then you have uncontrolled use, no privacy controls, and the likely leakage of GDPR-protected, NDA-covered and commercially sensitive information out of the business.

Where to focus

Many hundreds of new software products have been marketed to professionals in the last couple of years - lists upon lists of “must have” new names and logos.

I have tested many and I like a number of them. But for every new and well-built tool there are many more in the ‘vapourware’ category that fail to effectively solve a real problem for practices, or that duplicate something another tool has already done better.

Real-world task & time | Breakdown of image creation steps

● 1 Developing a logo design (2 hours)

Start with a hand drawing and written concept:

• Use Midjourney (MJ) with image-prompting

• Start from a B&W hand sketch as input

• Run variations to generate ~8 similar, more refined options that have potential, and refine them further in MJ

• Final logo and colours refined and vectorised in Illustrator

● 2 Photoshop montage to realistic render (45 mins)

Start with a project visual created with a basic level of detail in Rhino, with a simple render from V-Ray:

• Use image-to-image generation in Stable Diffusion (SD)

• Tweak two ControlNets using the same basic render input (1. Canny, 2. Depth)

• Create photorealistic render

• Develop parts of the image further using “in-painting”

• Upscale and refine to 4K size, using another AI tool, Krea

● 3 Rendering design options for two London feasibility studies (4 hours)

Start with site photos and basic 3D context prepared. Model the proposed building retention and massing; include basic floor levels in the model:

• Align the camera view of the existing with the 3D model view of the proposal

• Create relevant editing regions using the site photo as a mask

• Upload masking regions, site photo and proposal as linework to SD

• Use image-to-image generation in SD

• Tweak three ControlNets and weights (Mask, Canny and Depth)

• Apply a custom LoRA within the prompt for target style exploration

• Produce multiple conceptual design options within the massing constraints provided

• Carry out final image editing in Photoshop

• Upscale and refine to 4K size, using AI tool Magnific

• Seed your new image into additional views using IP Adapter

● 4 Interior concept options for a kitchen design (15 mins)

Start with a traditional materials mood board for an interior design concept:

• Input an image of this mood board and a written spatial description into an LLM to produce a prompt that generates good colour and material descriptions and that will work with Flux 1 (a striking new image model from Black Forest Labs)

• Use the prompt with Flux to produce spatial mood imagery for the kitchen interior

• Carry out minor edits in Photoshop

• Upscale and refine as before

Moreover, the sheer scale of new gadgets and techniques can cause paralysis in businesses that get caught in perpetual “test-mode” and become unable to take any meaningful action at all. The ever-growing size of these lists doesn’t help either - it’s a never-ending task.

To keep things simple, and to focus on what is really important, there are in my mind two foundational technologies that require our special and ongoing attention with each new release:

1. Image models (diffusion models like SDXL and Flux); these can also now process video.

2. Text models (Large Language Models like GPT-4o and Claude 3.5); these can also now process numerical data.

These two areas alone are enormously deep and novel fields of learning. I recommend getting familiar with how to access and make use of these tools as directly as you can, in their “raw” form - i.e. don’t become too reliant on easy-to-use wrappers that do many things for you but ultimately reduce your control. You will find that many of the app features you’ve seen are possible by focusing on the raw ingredients, with only a couple of low-cost subscriptions needed.

Image work in practice

My background is in design, and I love to use image models for visual concepts.

I don’t find these tools can solve multi-step problems in one shot - instead, I curate their use over smaller, discrete chunks of controlled taskwork and weave the whole thing together using a number of well-known tools that I would already be using in traditional practice.

To give you a better sense of what I mean, in the table left you can see some practical examples from the last month at Arka Works.

While doing these chunks of work with teams, we’re exercising our own professional and creative judgments at each step.

What you’ll notice when looking at these approaches is that they are quite complex procedures requiring a high degree of control and aesthetic judgement. Also, the whole process is being carefully curated by the designer, which is why some people prove to be particularly gifted at working with diffusion models by following their intuition, and others less so - if the results were just a case of clicking buttons in the right order, this wouldn’t be true.

AI feels different

The introduction of GenAI to these very common practical design tasks is more akin to plugging a synthesiser or effects pedal into a musical instrument. In general, I prefer the “instrument” rather than “tool” analogy, which just conjures feelings of crude hammers and nails. By contrast, this instrument is nuanced, unpredictable and can make its own decisions.

My initial expectation of various AI image tools was that they would prove to be a like-for-like replacement for traditional digital rendering methods.

The reality is quite different; this is an entirely new angle with which to approach the same challenge and requires a fundamental shift in the way you think about things.

Indeed, this has been a source of debate between myself and Ismail Sileit, who says about Image Diffusion:

“While traditional rendering techniques are about faithfully replicating reality through precise algorithms, GenAI allows us to engage in a live dialogue with possibility. It’s not so much about rendering accuracy —it’s more about cultivating a relationship with the unpredictable, the emergent, and the profoundly novel.”

Ismail Sileit, architect at Foster + Partners and creator of form-finder.com (in a personal capacity)

What Ismail is getting at here is that yes — there are certainly time savings, but we shouldn’t overstate these. The main change he perceives is in the speed and breadth of the creative feedback loop itself, the new experimental avenues that we’re able to explore, and importantly the enhanced enjoyment of the whole process.

Images: what’s hard?

Despite what the “AI is easy” influencers tell you, it’s also not a simple process if you want to achieve results you can use on real projects. To exercise real control and achieve usable results in practice, you have to learn quite complex interfaces and become familiar with a new lexicon of technical terms like “denoising”, “latent space”, “seeds”, “checkpoints” and “LoRAs”, to name a few.

Getting the best out of image models also requires a strong dose of curiosity, patience, and a willingness to keep persisting in the face of abject ugliness at times.

When we’re working on this kind of thing, our overall hit rate is probably less than 5%. For a recent feasibility study presentation, using some very basic internal 3D renders to start things off, we produced six new images in total across three design studies - but when I look back at the results, I can see it took us about 211 separate tests to get there.

While the study produced these images very rapidly, we had to put up with a great deal of mediocre and downright appalling outputs to find something that captured what we were looking for.

A 3-5% conversion rate doesn’t sound great, and it also feels inherently wasteful. When you’re generating images, your GPU will sound like it’s getting ready for take-off, and it will use a lot of electrical energy. This is an area I’m currently looking into in more detail, to better understand the actual energy impact of using GenAI at any kind of scale. Any practice looking to adopt GenAI will probably also need to establish a means of measuring its energy use intensity and, as model sizes increase in the coming years, this will probably become an ever greater area of difficulty.

Text work in practice

AI image work really lights up the right side of my brain, but the AI technology that feels most important to me and that I utilise more than any other - by far - is Large Language Models (LLMs).

My use of text models is growing over time and I’m now at around 10-20 tasks a day for very varied and high-utility responses.

Early on, I set GPT as my homepage and tried to use it for as much as I could. Even with this proactive attitude to adjusting my working methods, I still spent months forgetting that it was there to help with all kinds of things during the day.

This behavioural change phenomenon may actually be the greatest hurdle in the short term to any kind of meaningful change because workflow muscle memory is strong and the cost of a failed effort during daily working life is high.

The table right shows just a few examples of the type of things we can now attempt with a fair expectation of success. With the latest release of models, and the next wave just around the corner, the use cases will only grow in the coming months and years.

Real-world task & time | Breakdown of Large Language Model (LLM) steps

● 1 Reformatting an invoice schedule (3 mins)

I’ve received a fee proposal with a complex invoicing schedule in a chart format, spread out over many stages and months. I need it in a vertical format suitable for issuing to a client; this can’t be done in Excel.

• Using a private LLM configuration, I remove any identifiable data and convert the received schedule into a markdown table.

• I request the table be reformatted and define the column headings I need.

• Spot check results, to confirm they are correct.

• I copy the table into Excel, add back the redacted details, and complete.

● 2 Large report summarisation tool (15 mins)

I’ve received three very long, publicly accessible government reports that I need to digest quickly; they relate to the scope of regulation of AI in the EU. I’ve scanned them quickly, but now I need to pull a one-page summary together for a policy document for a meeting.

• Convert the PDFs to plain text.

• Compose a pre-made prompt separated into context, role, instructions and documents (where I place the full-length text), separated by tags.

• Use a large context window model via API (Claude 3.5) and paste in the complete pre-made prompt.

• Spot check results, to confirm they are correct.

• Final edit before adding to my policy draft.
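The tag-separated prompt in example 2 can be assembled programmatically, so the same structure is reused for every report. A minimal sketch; the tag names and helper function are my own illustration, not a prescribed format:

```python
def build_summary_prompt(context: str, role: str, instructions: str,
                         documents: list[str]) -> str:
    """Compose a long-context prompt with each part fenced by XML-style tags,
    so the model can tell the instructions apart from the source documents."""
    doc_block = "\n".join(
        f"<document index=\"{i}\">\n{text}\n</document>"
        for i, text in enumerate(documents, start=1)
    )
    return (
        f"<context>\n{context}\n</context>\n"
        f"<role>\n{role}\n</role>\n"
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<documents>\n{doc_block}\n</documents>"
    )

# Hypothetical inputs, standing in for the converted report text
prompt = build_summary_prompt(
    context="Three public government reports on the regulation of AI in the EU.",
    role="You are a policy analyst preparing a one-page briefing.",
    instructions="Summarise the key obligations for practices in under 500 words.",
    documents=["Full text of report one...", "Full text of report two..."],
)
```

The assembled string is then pasted (or sent via API) to a large-context model; keeping the documents last and clearly tagged makes spot-checking the model’s citations against the source text easier.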

● 3 Competition bid cost estimate

(15 mins)

● 4 Bid writing from past examples (10 mins)

A client has found a public design competition invite. They are tempted go for the job but they want to know how much the competition stage will cost.

 Using a private LLM configuration, I export the PDF to plain text and create a prompt explaining the task and introducing the project information.

 I ask for summary of key project requirements and submission format and dates in table form suitable for a memo.

 I ask the chat to calculate the working days between dates.

 Spot check results, to confirm they are correct.

 I provide a written team plan for the bid with at-cost day rates.

 Using Code-Interpreter, I represent this key data to a new chat using a Chain of Thought style prompt and showing each step of it’s working.

 The python script returns a ballpark estimate of cost. I’m happy with the working method and we decide the competition is too costly to go for.

A client needs to write a 500-word bid response for a new RfP quickly. They have answered this type of question many times before, but not in this exact way.

 Provide context and role to Claude 3.5, provide new question and 5 step instructions and good example of past answers to reference.

 LLM analyses the question structure and produces a plan for how to best answer each section (chain of thought method).

 LLM then maps and rewords previous ‘good’ answers and case study examples to the new question.

 Refined versioning by providing critical feedback. “65% version” is then taken to InDesign for final editing and completion by Project Director.
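The Code Interpreter step in example 3 might look something like the minimal Python sketch below. The dates, roles, day rates and time commitments are all hypothetical placeholders, not figures from any real bid.

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count weekdays from start (inclusive) to end (exclusive)."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        d += timedelta(days=1)
    return days

def ballpark_cost(team_plan: dict, n_days: int) -> float:
    """team_plan maps role -> (at-cost day rate, fraction of time committed)."""
    return sum(rate * fraction * n_days for rate, fraction in team_plan.values())

# Hypothetical competition window and team plan
n = working_days(date(2024, 9, 2), date(2024, 10, 14))
team = {
    "Project Director": (900.0, 0.2),
    "Architect": (550.0, 1.0),
    "Part II Assistant": (300.0, 1.0),
}
estimate = ballpark_cost(team, n)
```

The value of asking the model to show its working in a script like this is that every assumption (rates, commitments, date window) is visible and checkable, which makes the final spot check much faster.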

(Above) Precision editing: Example workflow for a warehouse building refurbishment, created in Stable Diffusion (credit: Arka Works)

(Left) Custom Styles: Arka-ArchPainter-XL, a Stable Diffusion Custom LoRA (credit: Arka Works)

‘‘ The most effective AI strategies in professional practice that I’ve seen aren’t mandated from on high, but rather emerge organically from teams who are given the freedom to experiment and trust to exercise their individual judgement about what to do with the results ’’

Opinion

“Next-generation AI models are effectively embargoed until after the US election on 5th November, but expect to see significant gains in reasoning ability and intelligence when they arrive, with each of the main providers currently training / testing models more than an order of magnitude larger than the current largest & most-intelligent models.”

Text work. What’s hard?

There is no shortage of people expounding the virtues of LLMs, but let’s be honest about the difficulties.

“Multi-needle reasoning” problems. These occur when you ask the model to retrieve several separate facts at once and then to apply further layers of reasoning and logic on top. Request too many needles and too many processes in a single go and you may get a few acceptable results, but asking too much of the model at once can produce disappointing or incomplete output. The way around this is to break the task down into smaller steps and give the model only what it needs at each step, which is why you need to learn good prompt craft.

Accuracy. You still always have to spot check the results and the validity of your workings. I call it a spot check because, with the latest models, if the model is set off on the right track it will usually get things right. Hallucinations remain a challenge, although the rate of occurrence has dropped dramatically since I first started using these tools nearly two years ago, to the point where errors are increasingly hard to identify. This is why spot checks must remain part of your core approach for now.

2022

Q: Who was the sole survivor of Titanic?

A: The sole survivor of the Titanic disaster was Charles Joughin. He was a British baker and the chief baker aboard the Titanic. Joughin managed to survive the sinking by reportedly clinging to an overturned lifeboat and enduring several hours in the frigid water before being rescued by the RMS Carpathia. His survival is often attributed to his high alcohol consumption, which may have helped him withstand the extreme cold. Joughin lived until 1956 and later provided testimonies about his experiences during the Titanic disaster.

2024

Q: Who was the sole survivor of Titanic?

A: There was no sole survivor of the Titanic disaster. The RMS Titanic sank on April 15, 1912, and out of the approximately 2,224 passengers and crew on board, around 710 people survived. These survivors were rescued by the RMS Carpathia which arrived at the scene after receiving the Titanic’s distress signals. Survivors included men, women and children from various classes and backgrounds. Some notable survivors were Molly Brown, who later became known as “The Unsinkable Molly Brown” for her efforts in helping others during the disaster, and the ship’s designer Thomas Andrews, who perished despite his efforts to save others.

Now and Then. A popular example of a hallucination from 2022. When you ask the same question today, you get excellent performance and data. However, the final sentence suggests that Thomas Andrews was both a notable survivor and also perished.

Long prompting is now better than using Custom GPTs for most tasks. Now that we have large context-window models available, we can work with hundreds of thousands of words of input. For most tasks, pasting your prompts in long form into a model is now a more robust approach than Custom GPTs if you need very reliable performance. This is because Custom GPTs typically run on a smaller context window (32k) and also involve RAG, a process that cuts your data into small chunks that are later retrieved, rather than storing the text in full within the knowledge base. If the wrong chunk is retrieved, you will get a sub-par answer. Many practices have found uses for Custom GPTs, but it’s now better to move towards a long-prompting method and make full use of these amazingly spacious context windows.
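Assembling such a long-form prompt is mostly mechanical string work, as in this sketch. The tag names (context, role, instructions, documents) follow the convention described above; they are a common practice, not a requirement of any particular model.

```python
def build_long_prompt(context, role, instructions, documents):
    """Assemble one long-form prompt with tagged sections so the full
    document text travels inside the context window, rather than being
    chunked and retrieved by RAG."""
    doc_blocks = "\n".join(
        f'<document name="{name}">\n{text}\n</document>'
        for name, text in documents.items()
    )
    return (
        f"<context>\n{context}\n</context>\n"
        f"<role>\n{role}\n</role>\n"
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<documents>\n{doc_blocks}\n</documents>"
    )
```

Because the full text is present, the model is never at the mercy of a retrieval step picking the wrong chunk; the trade-off is the token cost of sending everything.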

Data preparation. Most businesses are excited about the potential of AI, but they aren’t excited about preparing all their reusable project and operational data so that it can be used again and again to produce first drafts of many kinds of written report. There is a surprising amount of written work in practice that could be massively aided by GenAI if this data were ready. We really need to start thinking about building asset libraries that are ready for LLMs to process and link together in new ways before we can really feel the benefits.
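What “LLM-ready” data might look like can be as simple as flattening structured project records into plain-text snippets that drop straight into a long prompt. This is a sketch only; all the field names and values are illustrative, not from any real practice.

```python
# Flatten a structured project record into a plain-text snippet suitable
# for pasting into a prompt or storing in a reusable asset library.

def project_to_snippet(record):
    lines = [f"# {record.get('name', 'Untitled project')}"]
    for key, value in record.items():
        if key != "name":
            # Turn field names like "gross_area_m2" into readable labels
            lines.append(f"- {key.replace('_', ' ')}: {value}")
    return "\n".join(lines)

library = [
    {"name": "Riverside Depot", "sector": "industrial", "gross_area_m2": 4200,
     "lessons_learned": "Early DfMA engagement cut the frame package cost."},
]
snippets = "\n\n".join(project_to_snippet(r) for r in library)
```

Once project knowledge exists in this uniform text form, producing a first-draft report becomes a matter of selecting the relevant snippets and writing a good instruction, rather than hunting through old files.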

Conclusion - team led AI

The most effective AI strategies in professional practice that I’ve seen aren’t mandated from on high, but rather

emerge organically from teams who are given the freedom to experiment and trust to exercise their individual judgement about what to do with the results.

This team-led approach to AI adoption, where workers take responsibility for how they wish to use it, allows for a more nuanced integration that respects existing workflows while uncovering new efficiencies and enthusiasm from within the team.

Consider building a “heat map” of opportunities within your organisation. Where are the places that these ideas really fit without trying too hard? Once you’ve identified these hotspots, tackle them one at a time. Small, incremental changes often lead to the most sustainable transformations.

AI is neither a panacea nor a parlour trick – it’s a curious instrument that, when wielded with skill and discernment, can help us work smarter, faster, and perhaps even with greater enjoyment.

We are still so early. So, I urge you to withhold judgement; be experimental and be curious.

■ www.arka.works

About the author

Keir is an architect operating with one foot in architectural practice and one in the development of Generative Design and AI tools and workflows. He founded the creative consultancy, Arka Works in 2023 following a directorship at AJ100 practice Morris+Company, with a mission to prepare the profession for AI-driven change. He does this by helping architects, clients and startups to effectively apply the latest Generative Design and AI tools to the work they already do in practice, so that they can adapt to a rapidly changing professional landscape.

Diagram demonstrating the context window size of popular models. The EU AI Act is 116k tokens, or ~600,000 characters, or 120,000 words in length, and can fit into large models with lots of room to spare
(credit: Arka Works)

Real-time insights for sustainable architectural design.

Enscape 4.1 sets new benchmarks in architectural software by merging aesthetic capabilities with practical, performance-oriented features and a brand-new add-on. The latest version empowers users to create visually stunning designs while ensuring they are viable, energy efficient, and more sustainable. Enscape 4.1 includes:

Enscape for Windows is available for: Revit, SketchUp, Rhinoceros, Archicad, and Vectorworks
Enscape for Mac is available for: SketchUp, Rhinoceros, Archicad, and Vectorworks

Darwinism in AEC technology

The productivity gains to be had from current AEC software options may be close to exhausted. To adapt and survive, the industry should instead be focusing on knowledge-based expert design systems, writes Richard Harpham

Today, most designs are produced in software solutions specifically built for BIM model and drawing production and intentionally designed to accommodate all building types.

While these legacy systems have replicated and replaced the manual tools that architects used for centuries, providing more accurate and better coordinated digital methods along the way, it’s still pretty much a case of the same processes as before, but now in CAD and BIM software.

In other words, all the knowledge and expertise for a specialised building’s needs and purpose still must be in the designer’s head. These building-type specialisms have been around for a long time, along with expert understanding of their legislative needs.

What’s changed is that the industry is now questioning what comes next. Many software customers are no longer confident that a faster, vanilla BIM solution will provide the productivity gains needed in a new era of expert intelligence tools based on AI and machine learning (ML).

As Charles Darwin is often purported to have said: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”

Darwin’s theory of evolution by natural selection posits that organisms evolve to adapt to their environment to survive. Those that are better suited to their environment — whether through physical traits, behaviours, or abilities — are more likely to survive and reproduce, passing on advantageous traits to future generations.

In the context of architecture, Darwinian principles can be applied metaphorically to the process of design specialisation. Specialisation in architectural design can take many forms, including focusing on specific types of buildings, particular construction methods, or specific professional skills.

Just as species adapt to specific ecological niches, many architects will increasingly evolve to carve out their own niches in the design world, aiming to become the ‘fittest’ in their chosen fields. Of course, there will still be ‘generalist’ architects, but just as Darwin described, the divergence of some – or many – could form a new species of firm that proves to be tremendously more successful in a specialised segment.

Playing catch-up

Architectural design software hasn’t been adapting as fast — until recently. Now, we’re seeing a host of software start-ups emerge that are betting their growth strategy on narrowing the capabilities of their software. The thinking here is that, by targeting a deep and narrow segment of the design market, they might become a leading solution in just one or two building types and thus carve out a profitable, differentiated business for themselves within the $13 trillion-plus construction industry.

1 High-rise mixed residential model created with Skema in SketchUp

2 Model at the heart of a collaborative presentation at NXT DEV 2024, which showcased the development of a school project from concept to drawings, joined together by new software tools: Skema, Snaptrude, Augmenta, Gräbert, with Esri GIS (see presentation at www.nxtaec.com)

A strong catalyst for this trend is the rise of AI and ML. AI has the potential to automate many aspects of the design process, such as optimising building layouts or generating construction plans, but until ‘general AI’ emerges, it requires a narrow set of parameters to work successfully.

AI technologies could help architects specialise more deeply in specific building typologies, executing common, simple and repetitive design tasks and unlocking time for architects to focus their professional skills and ingenuity on the more aesthetic challenges that differentiate their designs.

Many people in the industry expect that the software companies and their customers who carve out specialised areas of excellence will reap tremendous value and profits much more quickly than their competitors. At DPR, production design leader Charlie Dunn puts it this way: “By focusing on a building type, and then starting with prefabricated elements for that building type, we can simplify and accelerate our design solutions. The problem set is smaller, and the iteration loop accelerates to support the desired new DfMA flows.”

So, what are some of the forces driving architectural specialisation? To my mind, the list looks something like this:

1. Technological advancements: New tools such as parametric design, BIM and 3D printing can transform the way architects approach their work. However, these technologies require specialised knowledge and skills, steering architects to focus on specific technological applications within specific building types.

2. Sustainability and environmental design: The growing focus on sustainability and environmental responsibility has led to the rise of architects specialising in green building practices, such as passive solar design, energy-efficient systems and/or the use of sustainable materials.

3. Urbanisation and population growth: As global populations continue to grow and urbanisation accelerates, cities face challenges such as housing shortages, transportation congestion and the need for resilient infrastructure. There are opportunities here for architects who specialise in multifamily buildings, educational institutions and hospitals, for example.

4. Regulatory and safety requirements: Certain building types, such as healthcare facilities, schools and industrial plants, must comply with rules around safety, accessibility and functionality. Architects need a deep understanding of the relevant regulations and codes to meet the necessary standards.

Benefits of specialisation

Architects and software suppliers that embrace this specialisation trend, meanwhile, might experience several benefits.

For example, specialisation allows architects and their software suppliers to develop a deeper understanding of specific design challenges. This enhanced knowledge enables more effective and innovative solutions to emerge. Specialised architects can address complex design challenges more effectively than generalists, while increasing the likelihood that their software suppliers can leverage specialised segment data for machine learning and AI methodologies.

As firms develop professional, profitable reputations as leaders in specific fields, there should be a clearer route to market differentiation and profit. Even if we only consider the multi-family, hospitality, education, medical and datacenter segments, we are already describing a market that accounts for more than 60% of all construction globally.

For architects, a narrower and deeper focus should lead to more valuable solutions that support higher fees and greater client demand. Corresponding software start-ups should also find it easier to describe to investors their potential growth and market share opportunities in specific markets.

Specialisation also creates increased opportunities for collaboration between architects, engineers and contractors, and between architectural firms in multi-use projects that feature different building types. Earlier collaboration between stakeholders with higher knowledge of their specialised building typologies will significantly improve the delivery of predictable construction outcomes and address multiple opportunities for improvement during the construction process.

Opportunity or inevitability?

We are living in a time where if you can imagine something innovative and new, it’s highly likely that somebody is already working on it and that it might even emerge soon.

For example, will AI-controlled robots predominantly assemble buildings? This now seems inevitable. And will architects design whole buildings from a flexible palette of factory manufactured assemblies that can be sent to those robots? Of course!

We may not know how quickly we’ll get there, but the new software and technology needed to make it happen will likely do a better job of supporting specialised needs than the current crop of BIM software from Autodesk, Nemetschek, Trimble and Bentley Systems.

So, what should we conclude from these trends and thoughts? Well, the current trend toward architectural specialisation certainly reflects a Darwinian process of adaptation to the changing building environment. There is every indication that architects and their software suppliers who specialise in specific building types, technologies or design approaches may be better positioned to survive and thrive in this increasingly competitive market.

We’re increasingly told that the only jobs that AI will be unable to replace are those that rely on empathic human interaction, complex and specialised problem-solving, and unique creativity. It’s time to elevate architects’ skills in these three areas and leave the rest for machines to handle. That’s the best way to ensure that the architecture profession continues to be important, engaging, profitable and, above all, necessary. ■ www.skema.ai

A new generation of specialised software start-ups

In line with increased design specialisation, we are seeing several software start-ups emerge that focus on specific building types, rather than developing generic design platforms. They include:

Hypar: This California-based company has recently sharpened its focus on assembly-based design solutions for hospitals and other medical projects. www.hypar.io

HighArc: With a particular focus on delivering a full design/spec/build solution for builders of single-family homes, HighArc still has potential to move to adjacent market segments. www.higharc.com

Testfit: Predominantly targets the early design needs of real-estate developers, with a focus on specific building typologies and infrastructure segments such as car parking. www.testfit.io

Parafin: Built specifically to support the rapid creation of optimised designs, budgets and investment models for hotel chains. www.parafin.ai

Skema: This company, of which I am a co-founder, helps architects develop catalogues of unitised, modular design assemblies from previously successful projects, which can then be used to deliver new schematic designs. This provides more detail and assurance of the constructability of those schematic designs, as all the detailed BIM intelligence is already included, dramatically accelerating the delivery of BIM deliverables. www.skema.ai

Graphisoft’s future direction

In the shift from BIM to BIM 2.0, big changes are underway at Graphisoft. Martyn Day chats with new CEO Daniel Csillag about his plans for the company

To Graphisoft observers, the company’s recent CEO transition looks like one of the industry’s more abrupt handovers. In February 2024, Daniel Csillag was named as Graphisoft’s new CEO, replacing Huw Roberts, who had been in the role since 2019. The change comes hard on the heels of the earlier appointment of César Flores Rodríguez as chief division officer for planning and design and digital twin at Nemetschek Group in July 2023.

Between 2017 and 2022, Csillag was general manager of Bluebeam and CEO of Nevaris, two other Nemetschek brands. He had subsequently spent almost two years working outside of the group, in the role of chief revenue officer at Thinkproject, a cloud-based asset lifecycle platform. With these experiences under his belt, Csillag is both a Nemetschek company insider and someone with considerable experience of cloud-based SaaS offerings.

When asked whether applications should be desktop or cloud-based, Csillag takes the middle way, in keeping with Graphisoft’s hybrid view.

“The desktop is here to stay, and the desktop will probably never disappear,” he says. When he was running Nevaris, he reflects, he was dealing with a very heavy ERP and estimating solution and, even then, a desktop offering was available, even though it was based on Microsoft Dynamics.

“While the cloud was central, it will always be a hybrid solution. Customers will be able to pick a 100% cloud solution, or prefer a combination of cloud and desktop. However, it will never be desktop-only anymore. Those times have gone.”

Looking at the opportunities that Graphisoft faces, one might think that the BIM market is already highly penetrated and saturated – so where does Csillag intend to focus the company?

“Selling more to the same people is certainly one option, but finding new services is another, which are very likely to be cloud-related,” he says. “If today we discuss a product we want to develop, we don’t discuss it as a desktop application. We always discuss it in a cloud environment, because the rest of the world will slowly migrate to more cloud-related services.”

At the same time, Graphisoft is still seeing very high growth rates in particular geographies, he says. The footprint of Graphisoft is very fragmented, with a heavy footprint in some markets (particularly Germany, Austria and Switzerland), but plenty of opportunity left to exploit in the US and in southern European countries, including Spain, Portugal and Italy.

“And even in a saturated market, where users already have a solution which they work with, those users may think what we have is really great and recognise it can help them,” he adds. “Revit users take us as a second solution into their portfolio, because they can do things with Archicad that they can’t typically do with Revit. I think the combination of Revit together with Archicad works well and is getting more and more popular.”

Market trends

In April 2024, Autodesk and Nemetschek signed an interoperability agreement to improve collaboration between their tools. This will enable Nemetschek to access Autodesk Platform Services (or APS, formerly known as Forge) and improve data flow. The concept of Archicad as a second seat might not seem so far-fetched with better integration. As Csillag comments: “We absolutely believe that the deal is helping us. It’s also helping Autodesk, because they have so much pressure for openness from their customers. At least this pressure is going away now.”

In some situations, bigger firms (50 seats, for example) may stick with Revit but purchase 10 seats of Archicad on top, he says. “And there are some markets where we really can replace other players and not necessarily always Revit,” he continues. “We have certain markets where we have significant growth potential of 20/30/40% every year. At least, we have that now. It might not be the same in five years’ time, but right now, we have that, because we are underrepresented in certain markets. Italy is a good example. We tripled our size in Italy over three years.”

Like all software companies, Nemetschek is under pressure to shift from perpetual licences to subscriptions and services. For Graphisoft customers, says Csillag, “it’s an optional choice. We are watching the market. We’re seeing how fast the transition goes, because our interest is to have an as-fast-as-possible transition, if only for the simple reason that we want to focus on one business model.”

For now, the company maintains two models, he adds, to give customers some flexibility and the opportunity to get accustomed to a new approach.

“But at a certain point in time, we really must accelerate this transition to a subscription model, and that will happen. I’m not stating that will happen tomorrow, but it will happen at a certain point, if we reach a certain transition status. For a software player in a saturated market, the classic perpetual licence model is not sustainable.”

■ www.graphisoft.com

Snaptrude advances Software

With every passing month, cloud-based BIM 2.0 applications flesh out their features in pursuit of victory over the current desktop BIM tools. With Snaptrude at the vanguard of this movement, Martyn Day caught up with the company’s CEO Altaf Ganihar to discuss adoption trends and product roadmaps

In a mature market, developing a new generation application involves playing a long game. Even if you are already the market leader, it’s hard to compete against your own, widely adopted products, because of customer inertia and legacy concerns.

So spare a thought for BIM 2.0 start-ups with far smaller revenues and much smaller customer bases, which face an uphill struggle but are still aiming to reinvent modern design software.

Among the firms striving to compete against Revit as a pure-play BIM platform are Arcol (www.aecmag.com/tag/arcol), Qonic (www.aecmag.com/tag/qonic) and Snaptrude (www.aecmag.com/tag/snaptrude) with at least three more still in stealth mode and likely to emerge in 2025.

Arcol, Qonic and Snaptrude have entered the market with limited but carefully targeted feature sets, designed to be of immediate use in existing workflows. At first glance, it might appear as if they are aimed firmly at conceptual design. But make no mistake: all three have an eye on the long game. All three intend to become leading BIM platforms.

Modern computer languages, advances in web graphics and distributed cloud compute are bringing a new generation of tools to the market. When compared feature-by-feature to industry-leading, desktop-based BIM tools, these next generation tools may, at first, look quite limited. But the advantages of cloud delivery mean that new features can be streamed and added rapidly, giving incredible development velocity. Parity of features, depth of capability, and breadth for edge case designs could take just three to five more years of focused development.

The BIM 2.0 company with the most venture capital backing ($21.8 million) and the biggest development team is Snaptrude, headed up by CEO Altaf Ganihar. So far this year, Snaptrude’s BIM SaaS application has seen 26 releases, with another 16 planned before November 2024.

The story so far

At AEC Magazine’s NXT DEV event, an industry figure who will go unnamed described Revit as “1970s thinking delivered in 1980s coding.” With many mature users demanding more updates and renewed software architecture, Autodesk CEO Andrew Anagnost stated there was to be no next generation of Revit, or in his words, “no faster horse”. Instead, Anagnost presented the prospect of a cloud-based, data-centric tool called Forma. So far Autodesk has delivered a concept engine, which we believe will be fleshed out to become a cloud hub for what comes next in AEC at Autodesk.

Meanwhile, Autodesk is busy architecting a data bridge between the Revit desktop and its Forma cloud back-end.

Snaptrude, meanwhile, has been developed to work and play well with Revit’s RVT file format and to find a niche in today’s BIM workflows. With its current feature set, it best fits workflows that involve rapid concept modelling and loading an RVT into the cloud for collaborative working with mark-up and editing capabilities.

Snaptrude is especially useful in space planning, but not as an end-to-end solution. For now, you would have to go back into Revit for coordination and documentation using the bidirectional link.

However, Snaptrude’s development is, by any measure, happening at high velocity. Ganihar compares his extensive development team to six internal startups, all working on different areas of the programme and delivering continually. Now that those developers have mastered the quality assurance that this parallel process demands, they can create a new release every week. And they do all this at the same time as supporting customers and using their feedback to drive the development cycle.

Mastering rapid development means that Snaptrude claims it will be able to deliver schematic design tools by the end of the year. This development may take most of next year to perfect, but the expansion enables Snaptrude to start work on developing different disciplines, such as structural and MEP capabilities.

The end goal for Snaptrude is big, Ganihar claims. “We want to be the OS for construction, which means using the same model to do clash detection, create the drawing, the quantity take-offs, costing take-offs, to produce the bid documents. We can actually do the full gamut, because we are not based on files. It’s a single database. And each one of these is just an instance of our database representation.”

But Snaptrude isn’t just an in-the-cloud Revit clone. The philosophy of the software is to drive productivity through automation of workflows, both micro and macro. As Ganihar states, “Next year, I think we’ll be very close to completing the schematic workflow. While we can replace like-for-like, we are also automating the design process. We are automating concept modelling, the visualisation and the drawing process.”

Growing portfolio

1 Rhino geometry imported into Snaptrude 2 2D drawings are a major focus for Snaptrude

3 & 4 Snaptrude’s AI rendering technology uses Stable Diffusion

So far, Snaptrude’s customers have started pilots or used the software on live projects because of its promise to cut non-billable hours. This is achieved by reducing the number of buttons that need to be pressed and the number of applications required, and by speeding up output, whether the deliverable is an area model, a presentation, a render or a drawing. The target benefit that Ganihar cites most frequently is one-tenth of the time.

Snaptrude is also deploying AI. “The most successful feature we launched recently was area modelling. You can bring an area list from a spreadsheet programme and it automatically creates the blocks. Then you can automatically block and stack them, based on adjacencies, into a building footprint. This is done through physics-enabled AI, not the GPT kind. It’s logically solving the problem using daylight simulation, adjacencies, etcetera,” he says.

Snaptrude also has its own AI rendering technology, which utilises Nvidia graphics and Stable Diffusion, bypasses 2D-to-3D conversion, preserves a design’s geometry and is less likely to ‘hallucinate’.

Drawings, however, are the most often demanded feature. While Snaptrude is developing its own automated drawing capabilities, such as auto-dimensioning, auto-tagging and auto-labels, the company has also licensed the DWG engine of Graebert, developer of Ares CAD, which is one of the most widely used DWG engines in the industry (as seen in DS DraftSight, Solidworks and Onshape).

Ares is designed for the cloud, desktop and mobile, which means Snaptrude can easily connect to it via API. The first stage will be a live link between Ares and Snaptrude. Graebert has also been working on auto-drawing technology to accelerate drawing production. It looks like Snaptrude will be first to bring that to market in a BIM context, both through its own internal work and through partnerships.

Ganihar sees this as a key technology to add and sees potential adoption in markets like Asia where Graebert Ares is popular, and AutoCAD and Revit LT are under market pressure. Snaptrude has managed to sign up Datech Solutions (part of TechData and distributor of Autodesk products) for distribution in Asia Pacific. Ganihar explained that there is great potential in cost-sensitive markets.

What’s next?

Snaptrude’s next big feature release should be live in October. It will support presentation drawings, instant renders and enhanced graphics. There have been improvements to real-time data for areas, costing and take-offs, as well as interoperability advances supporting Revit and Rhino, better geometry and BIM capabilities. This work connects the workflow from early RFPs to schematic designs.

Along with feature updates like coloured and monochrome massing, better snaps, drawing offset mode, select to move, offset and voids on masses, the Sketch-to-BIM capability has been enhanced to support Revit families and curtain walls.

In workspaces, imported Revit files will have families extracted and added to your team library with a single click. These can be managed from a single space. There will now be a command line (because CAD folks will never give this up!), but this will be AI-enhanced and more powerful.

In drawings, labelled drawings can now be created directly within Snaptrude, along with the ability to generate sheets in various sizes and scales and export them as PDFs.

Export of models to Rhino is performed through FBX or OBJ exports. In parametrics, curtain walls and stacked walls are now supported. The new parametric wall graph will enable parametric editing in BIM mode for walls, floors and slabs in massing mode and on imported Revit projects. The graph will form the foundation for supporting other parametric elements in future releases. Snaptrude is working towards a philosophy it calls ‘LOD hopping’, which means that, at any stage, the user can hop to a different LOD level to make design iterations, without having to worry about corrupting the model or downstream work.

In AI rendering, users will be able to create photorealistic renders and artistic graphics for their designs while respecting geometry and, soon, materials. The system is intelligent enough to understand BIM data, which Snaptrude plans to leverage in future releases to enhance quality.

There’s a new Rhino import capability, too, which uses a dedicated manager. Users can bring in geometry from Rhino, edit the geometry types currently supported inside Snaptrude and eventually take the detailed model to Revit for further documentation.

Conclusion

With the industry looking to move collectively from desktop, file-based BIM design documentation solutions to collaborative, cloud-based BIM databases, there is a lot at stake for the dominant industry players — Autodesk, Nemetschek, Trimble, Bentley Systems — especially with so many interesting start-ups making it their ultimate objective to replace existing seats. Core BIM code is being rewritten, BIM formats are being granularised, collaboration is being built in from the ground up and, in various ways, automation is going to play a big role in reducing the time spent on donkey work.

While the end goals may still be documentation and drawings, the ways of getting there are going to be less distributed and more self-managing (see Richard Harpham’s Darwinism in AEC technology article on page 28).

For Autodesk, it’s going to be especially interesting, as the Forma vision needs to be fleshed out and expanded beyond the conceptual phase to show that the company is working on a broader development. Every time Autodesk has addressed next-generation BIM, it has acquired the technology (Softdesk became Architectural Desktop; Revit came through the purchase of Revit Technology Corporation).

Autodesk purchased Spacemaker in 2020, which then became Forma. But Spacemaker wasn’t a cloud BIM tool; it was for conceptual, early-stage design. To scale that up to being a cloud BIM tool will take time and resources. We will have to wait and see what’s shown at Autodesk University this year to rate Forma’s development velocity.

But whatever happens, it’s clear that the competition – including Snaptrude – certainly isn’t hanging about.

■ www.snaptrude.com

Snaptrude is designed to play well with Revit

Interview: Vectorworks CEO

For the 2025 release of Vectorworks, we caught up with company CEO, Biplab Sarkar, to talk new features, moving from files to cloud databases, auto-drawings, AR, openness in BIM and, of course, AI

AEC Magazine: How would you define the overarching theme for the 2025 release? (there seem to be a lot of features aimed at simplification of tasks)

Biplab Sarkar: Vectorworks 2025 unleashes a new world of visual understanding and communication for designers. This latest release introduces new workflows and tools that empower designers to bring their visions to life easily, saving time and enhancing efficiency throughout every design phase. Top features like Onscreen View Control provide easy, instant access to all views of a model, along with a click-dragging functionality that simplifies the process of orbiting models, making design adjustments more fluid and intuitive. The Two-Point Perspective feature allows users to create traditional architectural compositions and professional photography perspectives with a single click. Additionally, the Object Level Visibility feature gives users the power to manage the visibility of specific objects within a design, offering options to show, ghost, hide, or isolate individual objects, thus providing greater control over complex projects. The Vectorworks Cloud Document Reviewer further enhances collaboration by allowing customers to view and comment on documents from anywhere, streamlining the review and design process.

Overall, Vectorworks 2025 redefines the design software experience by simplifying tasks and fostering better collaboration. While tasks are simplified, this release also adds significant depth to existing functionality, such as the management of room finishes and countertops, as well as to new features like persistent Two-Point Perspective and data visualisation that supports object visibility. These enhancements enable designers to work more efficiently and creatively in a way that truly mirrors their needs and expectations.

AEC Magazine: Collaboration comes in for a major update, easing working with Revit data, DWG/DXF and project share capabilities. Could you outline these advances?

‘‘ Many BIM platforms promote openness but can still restrict data exchange and favour their own ecosystems, leading to partial interoperability ’’

Biplab Sarkar: Vectorworks 2025 includes improved Revit and DXF/DWG collaboration. With these improvements, users can conveniently process Revit file exports in Vectorworks Cloud, including support for older Revit formats, without interrupting their workflow. Additionally, you can save time with consistently high-quality DXF/DWG collaboration, offering detailed and flexible control of incoming and outgoing file structure and graphics to achieve accurate delivery the first time.

Project Sharing enhancements allow for better convenience and efficiency. With direct setup options, you can choose whether to use cloud-, network-, or server-based sharing and begin your work. Shared project files can be worked on and stored on any cloud service, and backups are automatically saved as offline files.

These collaboration advances enhance the overall efficiency of working within multidisciplinary teams, supporting a more integrated and cohesive design process, saving time, and improving project outcomes.

AEC Magazine: The AEC industry is still working in traditional file-based workflows, but there are now moves to cloud-based granular workflows. Could you enlighten us on how you think this could be done better, and when we can expect to see something from Vectorworks?

Biplab Sarkar: The shift from traditional file-based workflows to cloud-based granular workflows is a pivotal change in the design industry. It is driven by the need for real-time collaboration, better data accessibility, and streamlined project management across teams.

Vectorworks embraces this transition by developing advanced cloud-based solutions that facilitate more interconnected and accessible project environments. Enhancing these workflows involves enabling real-time, multi-user collaboration with instant updates, implementing granular access controls for secure role-based interactions, and ensuring seamless integration with existing tools and processes.

We are already making strides with cloud offerings, such as Vectorworks Cloud Services, the Vectorworks Nomad mobile app, and the newly released Cloud Document Reviewer, which lay the groundwork for more robust cloud capabilities. We have also been focused on processing different file formats in Vectorworks Cloud Services. In Vectorworks 2025, we introduced the ability to process all RVT exports via Vectorworks Cloud Services. We will be working to expand this capability to other file formats such as DWG, IFC, USD, etc. Supporting file formats for both import and export will lay the foundation for a cloud-based collaboration workflow.

As cloud adoption grows, users can expect Vectorworks to expand its cloud capabilities further, providing comprehensive solutions that enable more granular, accessible, and collaborative workflows.

AEC Magazine: There’s a lot of talk about auto-drawings, with firms like Graphisoft, Graebert, Autodesk, Bentley and SWAPP all now trying to apply AI to deliver productivity enhancements. Is Vectorworks looking at this area and what other applications of AI can users expect to see?

Biplab Sarkar: We are keenly aware of the growing interest and advancements in auto drawings and the application of AI to enhance productivity within the design and construction industries. We are actively exploring this area and evaluating how AI-driven technologies can be integrated into our software to streamline workflows and improve efficiency. The potential of AI to automate drawing tasks, predict design outcomes, and facilitate more intuitive interactions holds great promise for enhancing our users’ experience.

In addition to the AI Visualizer, we are looking to expand AI capabilities in our software to allow users to search Vectorworks learning resources using natural language for faster support.

AEC Magazine: AI and ML promises much but needs to be trained. In developing these tools, what kind of data will you train on? Customer data obviously has IP issues, as well as the possibility of errors. Do you think there will be generic AI and ML tools, individual customer-centric AIs, or both?

Biplab Sarkar: We envision a dual approach regarding the future of AI and ML in our field. On the one hand, generic AI and ML tools will provide broad capabilities applicable across various use cases, offering solutions that can be widely implemented and adapted. On the other hand, we recognise the value of developing customer-centric AI solutions tailored to individual clients’ unique needs and challenges. Additionally, with customer consent, Vectorworks can leverage anonymous session log files containing usage data to predict customer usage patterns, enabling a predictive design system. By balancing both generic and bespoke AI tools, we aim to deliver comprehensive and practical solutions catering to diverse requirements, ultimately enhancing overall user experience and satisfaction.

AEC Magazine: Vectorworks has been very advanced in working with intelligent reality capture tools, like Apple’s AR Toolkit. How well do you think adoption of this technology has gone and where do you think it will be going with rapidly evolving ‘meta’ toolkits?

Biplab Sarkar: Vectorworks has been at the forefront of integrating advanced reality capture tools, including Apple’s AR Toolkit. Our commitment to leveraging these technologies has enhanced how our users interact with and visualise their projects.

The adoption of these tools has been quite promising. They have significantly improved data capture and integration accuracy and efficiency, providing our users with more immersive and precise project experiences. As a result, we’ve seen more professionals embracing AR to streamline workflows and make more informed design decisions.

The evolution of ‘meta’ toolkits and augmented reality technologies presents exciting opportunities. These technologies will transform how projects are conceptualised, designed, and executed as they advance. Integrating advanced AR capabilities with BIM and other intelligent technologies, along with support for platforms like Nvidia Omniverse, will likely lead to even greater levels of interaction and visualisation, driving innovation in the AEC industries.

AEC Magazine: Cloud is an increasingly important platform, especially for collaboration. With many start-ups now promoting the Figma pure-cloud play (in browser or thin client), how do you see the future of desktop applications and are there advantages to desktop apps vs cloud-based ones?

Biplab Sarkar: As cloud technology continues to evolve, its role in fostering collaboration and enabling remote work has become increasingly significant. The rise of cloud-based platforms, such as Figma, underscores the growing preference for flexible and accessible solutions. However, the future of desktop applications remains relevant and valuable.

Desktop applications offer certain advantages that cloud-based solutions may only partially replicate. They often provide more robust performance, enhanced security, and the ability to work offline, which can be crucial for users in environments with limited or unreliable internet access. Desktop apps also typically offer more comprehensive features and customisations that can be tailored to specific workflows and user preferences.

In the coming years, a hybrid approach will likely become the norm, where the strengths of both desktop and cloud-based applications are leveraged to meet diverse user needs. This approach allows cloud collaboration flexibility while retaining desktop software’s robust capabilities and offline reliability. Integrating both platforms will enable users to benefit from a seamless and efficient workflow that maximises productivity and collaboration.

AEC Magazine: Openness is something that is talked about a lot today. With your extensive experience in the market, on a technology level, are today’s leading BIM developers truly open with their data, toolkits and APIs? The days of reverse engineering file formats may be coming to an end, but will we have interoperability without dumbing data down to lowest common denominator industry formats?

Biplab Sarkar: Despite progress toward better data exchange and interoperability, openness remains a significant challenge in the industry. Although industry standards like IFC and BCF are becoming more common, achieving seamless integration without compromising data quality remains challenging.

Many BIM platforms promote openness but can still restrict data exchange and favour their own ecosystems, leading to partial interoperability. This often results in data being simplified to the lowest common denominator, which can reduce the richness and detail of the information shared across platforms. For true interoperability, developers must fully embrace open data standards and interoperable APIs, allowing for more accurate and integrated workflows without losing critical data. The industry must move towards this level of openness to enable more efficient and effective multi-platform collaboration.

Vectorworks supports international openBIM standards, enabling seamless collaboration across BIM software products. It has been certified by buildingSMART International (bSI) for its IFC4 import capabilities, ensuring high-quality IFC models for accurate and credible work throughout the project lifecycle. This is crucial for multi-disciplinary projects using different software tools, as Vectorworks maintains data exchange standards, reducing errors and misunderstandings.

■ www.vectorworks.net

Pricing, licensing and business models

Martyn Day explores the rapid evolution in the way AEC software companies charge for licences and shepherd their users to boost revenue

At AEC Magazine, we’ve lost track of the countless ways software companies have altered their pricing and licensing models over the nearly 20 years we’ve covered BIM.

What’s evident is that this pace of change is accelerating as developers continue to refine their business models, shifting from traditional per-unit pricing to a ‘service-based’ model.

It now appears that if one major player successfully implements a change — measured by increased revenue or seats sold — the rest of the market will sheepishly follow.

This hasn’t helped customers. Design IT directors manage tech stacks which typically comprise multiple products, from multiple vendors, across multiple sites and possibly multiple geos.

Licence model changes are typically made to increase the revenue of software firms. This increases the cost of ownership and can also increase the time it takes to manage those licences. In short, it drains hard-pressed budgets. Operating the same product on multiple different licences also increases the chance of being fined under licence compliance audits.

When a firm consistently adds new seats each year, the likelihood of differences in End User Licence Agreements (EULAs) increases, especially as software companies operate multi-year licensing but are rapidly evolving their licence models.

One might expect vendors to offer diverse licensing options to accommodate the wide variety of business needs, but subscription models in our industry remain surprisingly inflexible.

Business evolution

The design technology market is what one would call a mature software industry. In the 1980s, firms like Autodesk and VersaCAD pushed architects to trade in their drawing boards for desktop CAD. Today, it’s safe to say that professional design firms now rely on CAD systems far more than paper.

Over time, the opportunity for new sales has diminished, leading software firms to focus on getting existing customers to purchase more. In business parlance this means that the CAD market is highly saturated and highly penetrated.

This saturation often results in diversification, with vendors developing additional products to expand their customer base. Their sales departments become very concerned about ‘attachment rates’.

Companies like Bentley Systems and Autodesk, once known for a single product, have now expanded to offer hundreds of solutions across various vertical markets.

Occasionally, a new generation of technology emerges, allowing companies to replace outdated systems. These shifts are rare, occurring roughly every 10 to 15 years.

Historically, such transitions have aligned with major operating system changes, like the shift from Unix to DOS or DOS to Windows. However, during these periods of transformation, it’s never certain that the dominant applications or leading software firms will retain their top position post-transition.

Perpetual to subs

For about 30 years, software firms sold perpetual licences, granting users never-ending access to a particular version of the design software. Depending on the development cycle, every few years the vendor would go back and try to upsell the customer to the next release, hopefully providing features deemed worthy of the upgrade price. This was always a hard slog for software firms and cost a lot in marketing and sales. Maintenance fees became common as a way to smooth the sales cycle. Still, many customers would not upgrade every release, typically upgrading every three years instead. It was possible to be an Autodesk customer that hadn’t spent any money with Autodesk for years.

As software vendors diversified and created numerous applications for design management and creation, the opportunity for bundling emerged. Companies like Corel, with CorelDRAW, pioneered the idea—appealing to users’ desire for perceived value by offering multiple applications at a discounted rate. Autodesk followed suit in 2012, capitalising on vertical markets by bundling popular products like AutoCAD, Revit, and Navisworks.

In 2013, Adobe championed subscription and moved its Creative Suite product delivery to the cloud. Subscription replaced perpetual licences, and the cost of ownership per licence went up.

Pretty much all software firms learnt from Adobe’s experience. In the CAD space Autodesk was first to manage the migration.

In 2016, Autodesk introduced subscription and transformed Suites into Collections as an alternative to perpetual licences. By 2020, Autodesk wanted to stop perpetual sales altogether and gave customers a number of years to migrate. At the same time, it culled network licences and moved to named-user sign-on, mandating a seat per user.

While this was shocking, Autodesk did offer a killer deal: two subscription licences for every perpetual licence handed in, for a period of eight years. Depending on when firms moved, some will need to start paying for these extra licences from 2028 onwards (in four years’ time). It’s likely that the design IT manager who signed up for the deal is no longer at the same company.

Today, we have a world of connected cloud and mobile apps combined with named-user subscriptions, giving vendors direct access to customer usage and contact details that were previously obscured by reseller channels. This direct link has also opened up new possibilities for increasingly direct sales models, services and usage billing. Procore, for example, has championed ‘per project’ fees.

As software firms have further refined their cloud and subscription packages, many historical discounts have been reduced or removed.

In response, some customers have adjusted their strategies to fit their budgets, downsizing certain products and replacing applications from developers perceived to be price gouging. Of course, this approach only works when viable alternatives are available.

We’ve come across several instances where software vendors have raised prices by 100-300%, making it difficult for existing customers to avoid steep hikes or forced product migrations. Vendors often pressure clients by threatening to shut down hosted servers for ‘legacy’ software, forcing them to accept new terms or face repurchasing all software at full market price.

Pricing and licensing models over the years

NXTAEC.com

As a follow-on to NXT BLD and NXT DEV 2024, our AEC technology conferences, we have launched NXTAEC.com, a dedicated video website where you can view all the talks from our recent events, including the incredibly honest Pricing, licensing and business models panel discussion (see box out on page 42).

NXTAEC.com is free. You just need to register. By implementing a registration system, we can offer capabilities like ‘watch later’, ratings, comments, playlists, share and dedicated video search with tags. Over time we plan to add more community features and facilitate ways to keep in touch with people you meet at NXT BLD and NXT DEV.

For now, we have uploaded content from the last three years of NXT BLD and the last two years of NXT DEV. We will spend the autumn backloading all the previous NXT BLD talks.

We want to keep the NXT BLD and NXT DEV vibe going throughout the year, so we also plan to use the site to produce and feature demonstrations from the really innovative AEC start-ups that we discover through our day job of researching the market for our bi-monthly AEC Magazine.

If you have not done so already, please register now. ■ www.nxtaec.com

Design IT directors already juggle billable hours alongside IT management tasks such as upgrades, cross-grades, and user onboarding and offboarding. When vendors go rogue, the extra time needed for evaluation, transition, and migration to a replacement product becomes an unwelcome burden.

Cloud integration

The focus of several vendors has shifted to integrating and delivering services via SaaS. Procore, Autodesk, Hexagon and Trimble have all expanded and embellished the cloud components of their businesses. New features are increasingly being added there, as opposed to being delivered in the design software.

We’re moving toward a world where pure-play desktop applications are becoming rare, with users relying on vendors for hosting, applications, collaboration, and business functions. This is already producing some anxiety among customers, who fear being trapped in a proprietary cloud with limited API access. Perhaps sensing this distrust, most vendors are now talking about how ‘open’ they are, whether that be file access, programming toolkits or Application Programming Interfaces (APIs).

Artificial Intelligence

Although current AEC software features limited artificial intelligence, AI is poised to make a significant impact when applied at scale, potentially transforming the industry with advanced expert systems.

For the next year or so we will probably see small features and tasks benefiting from AI with more in-depth applications after that. AI-driven expert systems promise significant productivity benefits, posing a challenge to existing software business models. In 10 years’ time, one wonders how many seats of full BIM will be required to manually hand craft detailed schematic design models and generate documentation. With less software and more service, business models will change.

Conclusion

The need of software firms to maintain growth in a highly saturated and penetrated market has led to rapid ‘innovation’ in business models which outstrips the budgets and perceived value provided to customers. Design IT directors are having to juggle budgets to provide teams with the tools they need.

After years of feedback and complaints, it was a logical choice at this year’s NXT DEV conference to host a panel discussion on how software pricing, licensing, and business models affect firms (see box out below).

The frustration levels are high and, with price negotiation seemingly being automated out of the process, there seems to be a valid discussion to be had over value. If a software tool is biased towards conceptual design, it might be heavily used for three months of a three-year project, but it’s priced as if it’s used eight hours per day, every week of every year.

At AEC Magazine, we’re already contemplating the future beyond individual software subscriptions. We’re beginning to see firms experiment with project value-based fees. With the advent of AI, auto-drawings, automation, and expert systems, the potential to significantly reduce the amount of software, labour, and time needed could drive productivity beyond anything we have seen before.

As the industry remains focused on recurring seat licence volumes, the future may demand a hybrid business model combining software access with transactional services and monitored CPU/GPU usage. This shift could necessitate software companies gaining deeper insights into project or customer finances—a prospect that is unlikely to go down too well.

The voice of the customer: NXT DEV panel discussion

After hearing numerous complaints from design IT directors about the constantly changing business models of software companies, we decided to dedicate a panel session to this topic at our recent NXT DEV conference. The goal was to offer critical feedback to current software and service providers in the AEC market, while also providing key insights to start-ups developing their pricing and licensing strategies.

The panel session was moderated by Richard Harpham, formerly of Revit, then Autodesk, and now co-founder of Skema. The panel included Iain Godwin, ICT Consultant to many AEC practices and former IT director / senior partner at Foster + Partners, Jens Majdal Kaarsholm, director of design technology at BIG (Bjarke Ingels Group), Alain Waha, CTO Buro Happold and Andy Watts, head of design technology at Grimshaw.

Many presentations at NXT DEV highlighted a growing concern among AEC firms about their data being locked in the cloud servers of software vendors, compounded by fears of restricted access through proprietary APIs. This pointed to a clear trust gap between vendors and their customers.

The panel members are responsible for their companies’ design software estates, and explained how they are constantly evaluating value versus cost. They expressed how they felt ‘pushed into a corner as to how we will pay for technology’ by vendors.

Waha expressed concern about vendors engaging in ‘rent-seeking,’ aiming to extract money for shareholders without adding value.

“They have identified us as sheep and we cannot defend against a large dominating organisation who will be rewarded and incentivised by capital at scale.” He then highlighted the irony that while these funds prioritise shareholder profits over technological advancements in the built environment, people, including himself, also benefit from strong pension returns.

Harpham played devil’s advocate. He said, “Vendors are not trying to do physical harm to the customers, but there are business goals that make you change the way you work. You only have so many levers you can pull — you can ask a customer to pay more for what they have, and right now that’s running at about 5% per year. You can change the metrics, and that’s what happened when we moved to subscription. You can re-package, and that’s Suites/Collections. And then there’s licence compliance.”

Godwin, who was instrumental in the first Open Letter to Autodesk (www.tinyurl.com/autodesk-letter), stated that customers required trust in relationships with vendors. There were many aspects of the relationship with Autodesk that were schizophrenic. The Revit development team was trying to do its best, but the monetisation process and compliance model undermine the trust and don’t help firms feel like they are getting value out of that relationship. And that was just the first five minutes!

We think this panel session is essential viewing. Tune in to hear what else was said @ www.nxtaec.com

Activist investor pursues Autodesk

For the second time in ten years, Autodesk is dealing with an activist investor problem. Will Starboard Value get its way, and if it does, what might be the knock-on effect for customers? Martyn Day reports

Among the trials and tribulations faced by executives running publicly listed companies, the difficulty of simultaneously satisfying the holy trinity of customers, board members and shareholders looms large. The rough and tumble of market competition may be fierce, but there’s plenty of potential for internal drama, too.

The insider currently waging war against Autodesk’s executive team is Starboard Value, an investment firm that has bought up some $500 million worth of Autodesk stock (about 1% of shares).

The company is now seeking to make changes to Autodesk’s management, operations and financial accounting strategy, in order to make it more profitable and thus drive up the share price. It has written a press release, letters to the board and fellow shareholders, and a comprehensive report explaining how this could be achieved.

Autodesk CEO Andrew Anagnost clearly has a fight on his hands, because Starboard Value has a good track record of getting what it wants.

In April 2024, having delayed the filing of its annual report (Form 10-K) with the Securities and Exchange Commission (SEC), Autodesk launched an internal investigation into accounting practices relating to multi-year enterprise subscription deals and how the company calculated free cash flow, a key metric used to evaluate financial strength and executive compensation.

In May 2024, Autodesk announced that it did not need to restate or adjust its audited and unaudited filings, but its chief financial officer was swapped out.

‘‘ While customers may not be happy with Autodesk’s approach to business, under a Starboard Value-appointed CEO and weighted board, I can imagine things getting a lot worse ’’

The way this probe was handled is what seems to have instigated Starboard’s aggressive intervention and its claims that Autodesk executives had misrepresented financial results and deliberately kept shareholders in the dark. In Autodesk’s defence, one wonders if the company’s rapidly evolving subscription licensing model (see page 40) might have made recognising revenue from multiple payment contracts and timings harder to track.

At the time of writing, Autodesk has managed to keep Starboard Value representatives off the company board via a court case. In June, the Delaware court ruled that Autodesk didn’t need to reopen the window to nominate directorial positions for a contested election at the July annual shareholder meeting.

Because Starboard Value missed this nomination window, it will have to wait another year for the next shareholder meeting to try to infiltrate the board, a timeline that industry watchers have said will not matter to the determined Starboard raiders. Meanwhile, the firm is metaphorically camped outside the Autodesk walls, occasionally lobbing in grenades, attempting to highlight the changes it feels should be made to the company, both to other shareholders and to the market in general. Its efforts so far have generated additional litigation on some of the points it has raised. As we go to press, there’s a new potential class action suit from Shareholders Foundation Inc, seeking shareholders who bought Autodesk stock prior to June 2023 and were impacted by the share dip caused by the delayed filing of the 10-K. Autodesk is attracting the ambulance chasers.

A recurring problem for Autodesk?

This is not the first time that Autodesk has had to deal with an activist investor. Between 2015 and 2017, Sachem Head and Eminence Capital acquired 11.5% of Autodesk’s stock and went to war with the company over operating margins and conservative long-range predictions. They eventually won three seats on the company board. While there, they drove mass layoffs at the company to increase operating margin and ultimately got CEO Carl Bass to agree to step down – but on the proviso that, in turn, the activists left the board after the new CEO was elected.

Andrew Anagnost was chosen as the new CEO after a period of ‘co-CEOing’ alongside VP of product development, Amar Hanspal. This seemed logical, since Anagnost was the brains behind product bundling (suites) and behind driving the business shift to subscriptions to increase revenue from the base, following the Adobe model.

From what I could tell, he had been the star of Autodesk’s investor days, explaining the forthcoming change to the business model and giving investors that warm and fuzzy feeling associated with a promise to squeeze the lemon harder.

During Carl Bass’s tenure as CEO, he oversaw a revolution in software development within the company, launching products such as the mechanical CAD (MCAD) tool Fusion to take on Solidworks, along with many other cloud and mobile apps. There was also huge growth in acquisitions, hiring, investment in the maker revolution and expansion with workshop spaces like the Autodesk Technology Centre in San Francisco and the Autodesk AEC space in Boston, complete with industrial robots. The company was still profitable, and the share price was rising, but activist investors had their playbook of metrics and, at the time, Anagnost convinced them he was the man for the job.

Carl Bass during his tenure as Autodesk CEO

Investor unrest

Readers who have been following Autodesk corporate politics closely and, in particular, the recent open letters sent by European and Australian AEC Autodesk customers might hope that this investor unrest and historic customer anger are linked. However, they are not.

In fact, the situation is quite the opposite. The open letters from customers were borne out of frustration regarding a perceived lack of delivered value in product development, with price rises and compliance penalties for minor EULA infringements.

Starboard Value’s letters to the Autodesk board and subsequent legal action are based around claims that operating margins are weak, management compensation needs overhauling and that the company is guilty of mismanagement, shareholder misdirection and poor share price performance.

In the widely distributed presentation, Starboard rates Autodesk as an attractive business, because customers face a high cost of switching vendors. In Starboard Value’s words, customers have Autodesk products “entrenched in their workflows”, which gives the company favourable dynamics for “sustained pricing power”. This has led to a “highly recurring subscription revenue base” (as in, yes, you are trapped) and “growth potential for adjacencies, such as Construction Cloud”.

Starboard Value’s team are corporate raiders. They like the Autodesk business but believe it could be run more profitably, with better cash flow, reduced discounts and higher prices for a trapped global customer base. They don’t care what Autodesk does, what its customers do, or what software advances it makes. Autodesk is a cash-generation engine, and Starboard Value wishes to profit from the delta in the share price, by implementing new management and making a series of changes to business goals.

Along the way, the activist investor is making a series of demands. It wants to re-evaluate CEO Anagnost. It wants to right-size the cost structure (by reducing overhead costs) and fix budgeting discipline (by spending less). It wants to overhaul the compensation granted to executives, switching away from yearly rewards to longer-term targets. It wants to improve capital allocation (by using money to buy back shares to increase the price). Starboard Value claims Autodesk could achieve a 10% adjustment to operating margin, with benefits to cash flow.

Wall Street analyst Jay Vleeschhouwer, managing director of Griffin Securities and probably the leading global expert on the performance of CAD and design industry firms, has described Starboard’s presentation deck as: “A mix of factual statements (about share price performance and achievement of targets given at various Autodesk investor days); peer comparisons on various metrics; allegations (about board oversight and management performance); proposed financial targets; and, in our view, important omissions (including lack of competitive and structural context).”

Vleeschhouwer went on to say that while he believes Starboard Value has made some valid points, the overall complaint, in the main, is not “sufficiently convincing”.

(N.B. Vleeschhouwer spoke at NXT DEV 2024 on the evolving business models in our industry. Watch his presentation at www.nxtaec.com)

The changes that Starboard is seeking would likely increase street prices. There would be mass layoffs and cost reductions at Autodesk, in order to drive up operating margin. Everything would be done to increase cash flow and predictability, possibly impacting R&D budgets.

Recent changes to reseller contracts, rolling out worldwide and moving payments direct to Autodesk, will surely deliver that in spades, but the transition will not be complete for a year or so.

From Starboard Value’s deck, the plight of customers stuck in Autodesk workflows would leave them open to exploitation and could lead to more price gouging. While customers may not be happy with Autodesk’s approach to business, under a Starboard Value-appointed CEO and weighted board, I can imagine things getting a lot worse.

Andrew Anagnost, Autodesk CEO, in the cross hairs of Starboard Value
A slide from Starboard Value’s Autodesk report

Industry metrics

Boardroom battles are not a topic that AEC Magazine would usually cover, but big software firms in leading positions do inevitably attract the attention of this brand of activist investors who make money from doggedly pursuing stocks they think they can boost.

Autodesk is the market leader serving our readership, and CEOs can easily attract the wrong type of attention by missing industry metrics and seemingly underperforming stock price expectations.

For CEOs, executives and board members, there are real consequences if these activist investors succeed, as well as for employees. I guess these are the rules of the financial jungle and, just as they did with the previous Autodesk CEO (Carl Bass – see boxout on previous page), these activist investors have proven they can spend years waiting for their moment.

‘‘ I can’t see what else Anagnost could have done to increase money from customers any quicker. Autodesk has gone from $2 billion to over $6 billion under Anagnost in seven years

From a pure business perspective, having watched Anagnost iterate through business models (going from individual product sales to suite bundles, desktop to cloud, network licences to mandated named users, squeezing out distribution, squashing Value-Added Reseller margins, then taking all the VAR money direct), it is amazing that shareholders can be upset. However, it is true that this hasn’t necessarily been reflected in the share price post-Covid.

As for turning around business models, companies are like oil tankers: it takes years. But I can’t see what else he could have done to increase money from customers any quicker. Autodesk has gone from $2 billion to over $6 billion under Anagnost in seven years, producing a quarter-on-quarter chart with a steady 45-degree line of increase.

Anagnost began as the darling of the market, having successfully transitioned Autodesk to subscription payments, with associated downstream boosts in revenue and a rocketing share price. But the market’s judgement changed once the installed base had been converted to subscription. I expect executives and members of the board hope that they have a year to ‘bed in’ the changes they have already implemented in the business model. Taking the whole payment for every deal will increase cash flow dramatically. It’s possible to lay off staff too. If operating margins improve and the share price ascends, Starboard Value wins with either outcome. On that basis, the question will be whether Starboard Value deems the operating margin high enough to leave Autodesk alone.

From my point of view, which is a customer perspective, this is all a very lopsided value equation. Customers have little power. It’s one of the reasons the UK and Nordic Open Letters to Autodesk opted also to email the letter to Wall Street analysts, as they believed shareholders had more sway on the executive team than customers. With subscription, the right for customers to withhold revenue (not upgrade) for lack of valuable new development has vanished. You could complain to your account manager, but you still have to keep paying to use the stuff, as there are project deadlines to meet. Network licences have gone and licences for individual users are mandatory. Now historic discounts will also vanish. The only way to show displeasure is to migrate to another design tool provider, but as Starboard Value recognises, that is currently a tough call and means swimming against the industry tide.

Perhaps the best that can be achieved is for firms to actively address the issue by maintaining a more balanced mix of tools, while keeping alert for future lock-ins from pretty much all software developers.

In doing the research for this article, it became clear to me that the key link connecting customers, investors and Autodesk is the company’s regular business model and licensing manipulations, designed to optimise revenue growth in such a way as to also meet its cash flow goals, a key metric that rewards the company’s valuation, share price and executive compensation.

As long as there is a stock market, shareholders and greed, investors will always come first. And if they feel they don’t, they will take direct action.



Autodesk Content Catalog

Eighteen months on from its acquisition by Autodesk, Unifi’s cloud-based software solution for managing and accessing design content has been reworked and integrated into Autodesk’s cloud stack, writes Martyn Day

For mature BIM customers, having content at their fingertips can give productivity a major boost. It might be content created for previous projects that they need, or new content created for ongoing projects that needs to be shared among teams. Managing this kind of content can be complicated, however, and there have been numerous technology-based attempts to tackle the issue.

The big issue here is that the Internet was, and still is, a Wild West when it comes to downloadable content. Customers looking for BIM component data that doesn’t already exist in their own internal, managed repositories are forced to deal with issues around file size and quality and then incorporate these ‘foreign objects’ into well-managed BIM processes.

It’s been a challenge for the software industry. Take, for example, Autodesk Seek, a content website from the software giant that demonstrated exactly how disparate the quality of downloadable content can be. In early 2017, Autodesk ended up handing over the operations and customer support obligations relating to Autodesk Seek to BIMobject (www.bimobject.com), a Sweden-based company that has taken on the gnarly task of encouraging AEC manufacturers to provide managed content and developed a high-end database to store it, at first for its own use and then, later, for fee-paying BIM customers. In the UK, meanwhile, we have BIMstore (www.bimstore.co), among many others.

In my view, getting companies in the AEC industry to provide up-to-date, high-quality, modern digital deliverables that represent their entire product ranges is probably never going to happen. The task is too huge, and I think we may have to wait for AI to take it on.

Welcome news

That said, the recent announcement that Autodesk has reworked the technology acquired in its March 2023 acquisition of Unifi and integrated it into its own cloud stack is welcome news.

Unifi was founded in Las Vegas in 2010 by engineers Dwayne Miller and Ken Gardener, in response to the huge expansion of building programmes in that city over the past two decades, as well as in the Middle East and Asia. The goal was to give back to BIM users all the time they wasted searching for and downloading content.

In essence, Unifi was built as a library for collating and managing company and project content, providing control of virtual assets for firms looking to deploy consistent standards across project teams. Offering cloud accessibility, in-Revit access, intelligent search and browse, plus a stack of other management tools, Unifi quickly gained real momentum and inevitably popped up early on Autodesk’s radar. Getting into buying mode and negotiating a deal took time, but it was clear that Unifi was a product from which all Autodesk customers could benefit and which could easily be included in their subscription fees.

Since the deal was signed, Unifi has been reworked to fit into Autodesk’s cloud stack and rebranded as Content Catalog, a part of the Autodesk Docs subscription at no extra cost and manageable via the Autodesk Construction Cloud admin. This means that users of numerous Autodesk products have free access to Content Catalog (including ACC, AEC Collection, BIM Collaborate, Collaborate Pro, Autodesk Build, Autodesk Takeoff), as well as those with an Autodesk Docs stand-alone subscription.

Meanwhile, customers using the most recent release of Unifi Pro are safe, as Autodesk has no plans to retire the product. In fact, Content Catalog doesn’t offer the full functionality of Unifi Pro, with a number of key features omitted. These include content ratings, content requests, personal saved searches and support for Revit legends, Revit Materials and Fill Patterns. Also missing are the preview image generator, automatic user management for group syncing with SSO or Active Directory, the ability to create new shared libraries, manufacturer-provided content (channels), shared parameter management, APIs and Project Analytics.

It’s expected that many of these capabilities will be added over time. Project Analytics, for example, appears to be something that Autodesk is working on in a more general capacity within its cloud stack and the company plans a release of new management tools, with a separate licensing framework.

In conclusion, Content Catalog has the potential to be a big crowd-pleaser, especially with customers that have not yet implemented a content management system of their own. In many ways, content management within Autodesk’s BIM products is long overdue, as are industrial-strength management-level tools. The inclusion of Content Catalog in the Autodesk stack, and the possibility that the company is working on additional management reporting tools, is likely to be well received.

■ www.autodesk.com

How to choose a remoting protocol

Adam Jull from IMSCAD highlights some of the vendors and protocols that are key to providing an end-to-end solution for performative remote workstation deployments

In the AEC sector, the benefits of having remote workstations or desktops on-premise or in the public cloud are well known: increased security, centralised IT management and mobility — meaning users can access their workstation from anywhere, on almost any device.

The other key benefit is that AEC firms move away from downloading project data, working on it locally, and then uploading it back to a central repository. All data remains on the host, which saves time, protects IP and minimises sync errors.

Choosing a remote working solution can be tricky, especially for graphically demanding applications such as Autodesk Revit, Bentley Systems MicroStation, Adobe Creative Cloud, Enscape and Lumion, and especially as most firms have different user requirements and mixes of applications.

Computer hardware is only one part of a dedicated remote working solution. On the software side, there’s the hypervisor / virtualisation software, connection broker and remoting protocols, the subject of this article.

Remoting protocols support communication between the remote workstation and client device by transmitting display output, keyboard input, mouse movements, and other data.

Compared to basic RDP access through a VPN, most remoting protocols offer a good experience with graphically accelerated workloads and 3D models, widely used in AEC.

Below we highlight some of the vendors and protocols that we feel are key to providing a performative end-to-end solution.

There’s certainly no one-size-fits-all option, so look at your application mix and resources, and you may find that more than one of these could work for you.

VMware Blast Extreme – Developed by VMware and introduced in Horizon 7, Blast Extreme offers a secure, high-quality user experience over high-latency networks. With efficient use of host server resources, it’s an excellent choice for remoting with GPUs.

Our score 8/10

HP Anyware – Based on Teradici PCoIP, HP Anyware delivers a secure, pixel-perfect remote session to the user and dynamically adapts to changing network conditions. It’s a flexible solution that can be used across multiple hosting platforms and has a broker and access gateway, making it an all-in-one solution.

Our score 8/10

Mechdyne TGX – Delivers high-resolution, fast frame rate, colour-accurate remote sessions. It can be utilised across multiple hosting platforms and can be paired with Leostream to provide brokering and gateway functionality.

Our score 8/10

Lenovo (left) and HP (right) both offer micro workstations that can be rack mounted for remote access

Citrix HDX – Built on the Citrix ICA protocol, the Citrix HDX technologies suite provides a secure, high-definition user experience, even over challenging network conditions. With redirection capabilities and wide-ranging support for endpoints and peripherals, Citrix is our favourite and a great choice in most GPU deployment scenarios.

Our score 9/10

An honourable mention also goes to both Parsec and Splashtop.

A balancing act

Choosing a remoting protocol is a balancing act and there are many factors to consider, which we outline below.

Performance

Ensuring users have a good experience is arguably the most important consideration.

Most of the protocols highlighted boast a similar “local desktop-like user experience”. However, achieving this requires a combination of built-in technologies that intelligently optimise the end user experience in real time, with network traffic optimisation, server or endpoint redirection, and GPU and CPU offloading.

There are also optimisations and performance tuning that platform administrators can apply to enhance and refine the user experience. Many solutions provide customisable policies for audio, media and graphics, which can be applied locally or via GPO (preferred).

Codecs, which are used to encode and decode video streams, can also play a part. Newer codecs can not only provide a richer, high-definition user experience but also bandwidth savings. H.264 (AVC) is a commonly used codec with broad hardware and software support. Below we highlight the protocols that can support these codecs.

It’s important to note that H.265 and AV1 are typically not enabled by default and can require a GPU to be present at both the host and user end point, with feature parity to use.
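To illustrate that GPU-at-both-ends requirement, here is a minimal sketch in Python. The function name, parameters and logic are our own illustration of the decision, not any vendor’s actual negotiation code:

```python
def pick_codec(host_gpu: bool, client_gpu: bool,
               av1_parity: bool = False, h265_parity: bool = False) -> str:
    """Illustrative codec selection for a remote session.

    Newer codecs (H.265, AV1) typically need hardware support at BOTH
    the host and the client endpoint, with feature parity between them.
    H.264 (AVC) has broad hardware and software support, so it is the
    safe default when those conditions aren't met.
    """
    if host_gpu and client_gpu:
        if av1_parity:
            return "AV1"
        if h265_parity:
            return "H.265"
    return "H.264"

# A GPU-equipped host paired with a thin client lacking hardware decode
# falls back to the broadly supported H.264.
print(pick_codec(host_gpu=True, client_gpu=False, h265_parity=True))
```

Real protocols negotiate this automatically, but the shape of the decision is the same: prove capability at both ends, or fall back to the lowest common denominator.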

How well a remoting protocol handles low bandwidth, high latency connections is also an important factor and can be something that requires additional configuration and tuning.

Most of the protocols mentioned refer to adaptive transport as a means of delivering an optimised, intuitive, desktop-like experience to the end user.

In essence, it is a means of leveraging multiple transport protocols and switching between them when conditions dictate. For the best user experience, we recommend utilising this.

In its most basic form, you might have two transport protocols: UDP and TCP. UDP is preferred, as it is generally recognised to be faster, with improved data throughput over longer distances. However, when things get a bit choppy, you might want to fall back to TCP, which is arguably not as fast, but super reliable. That’s what adaptive transport does.
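The UDP-first, TCP-fallback idea can be sketched in a few lines of Python. This is a toy illustration, not how any vendor actually implements adaptive transport, and the threshold values are invented for the example:

```python
import socket

def choose_transport(loss_rate: float, rtt_ms: float,
                     loss_threshold: float = 0.05,
                     rtt_threshold_ms: float = 250.0) -> str:
    """Pick a transport from measured network conditions.

    UDP is preferred for throughput; when packet loss or latency climbs
    past the (illustrative) thresholds, fall back to reliable TCP.
    """
    if loss_rate > loss_threshold or rtt_ms > rtt_threshold_ms:
        return "tcp"
    return "udp"

def open_socket(transport: str) -> socket.socket:
    """Open a socket of the chosen kind (connection handling omitted)."""
    kind = socket.SOCK_DGRAM if transport == "udp" else socket.SOCK_STREAM
    return socket.socket(socket.AF_INET, kind)

# A healthy link stays on UDP; a choppy one falls back to TCP.
print(choose_transport(loss_rate=0.01, rtt_ms=40))   # udp
print(choose_transport(loss_rate=0.12, rtt_ms=40))   # tcp
```

Production protocols do this continuously and per-session, re-measuring conditions and switching back when the link recovers.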

Below is an overview of protocols that support adaptive transport.

Multiple monitors

Multi-monitor support is also important, and the number of monitors and resolutions supported can vary depending on hardware, resource allocation, driver and OS versions. Below we outline typical multi-monitor support for the highlighted protocols.

Client access

Finally, it’s important to consider how the remote workstation will be accessed from the client device. Most of the remoting protocols offer a range of client, mobile app or HTML options to suit. Below, we highlight the accessibility options.

Peripherals

It’s important that the protocol supports a broad range of peripherals typically used in AEC workflows, such as the SpaceMouse, Wacom tablet or VR headset.

Below we provide a summary of common devices (other than your standard webcam, audio, keyboard and mouse) that have general support and can be ‘passed through’ or configured via USB redirection in a remote session.

Summary

All remoting protocols for accessing desktops / workstations remotely have their pros and cons. All will run on your existing on-premise workstations, from the cloud, or via rackable workstations in a co-located data centre, such as those offered by HP or Lenovo. At IMSCAD we offer all of these remoting protocols and encourage our customers to try them out. ■ www.imscadservices.com

Citrix adaptive transport: Enlightened Data Transport (EDT)

