Building Information Modelling (BIM) technology for Architecture, Engineering and Construction


Leading AEC software developers share their BIM 2.0 observations and projections




Solibri acquires Xinaps, Xyicon extends reach of Revit, Spacio gets AI-powered façade design, and Snaptrude boosts Archicad interoperability, plus lots more
Nvidia DLSS 4 leans on AI for real-time 3D performance, AI Assistant for Archicad enters preview, Tektome liberates ‘dark data’, plus lots more
We ask Greg Schleusner, director of design technology at HOK, for his thoughts on the AI opportunity
As we move into 2025, we ask several leading AEC software developers to share their observations and projections for BIM 2.0 and beyond
Hypar co-founder Ian Keough gives us the inside track as his cloud-based design tool puts the spotlight on space planning
On 11-12 June, our annual NXT BLD and NXT DEV conferences will bring together the AEC industry to help drive next-generation workflows and tools
Snaptrude is working on enhanced interoperability between its web-based BIM authoring tool and Nemetschek Group BIM solutions, including Graphisoft Archicad, Allplan, and Vectorworks.
The aim is to enable architects to more easily transition between a range of BIM tools, harnessing the strengths of each tool at different project stages.
Interoperability with Nemetschek Group software will start with the ability to export Snaptrude projects into Archicad, ‘preserving all the parametric properties’ of BIM elements. Project teams on Snaptrude have shared workspaces that also include a centrally managed library of standard doors, windows, and staircases. Upon import, Snaptrude objects will be automatically converted into editable families in Archicad.
In the future, the integration will extend to a bi-directional link between Snaptrude and Archicad for synchronisation of model data and changes. According to Snaptrude, this will further enhance collaboration and efficiency in the design process, as users will be able to switch back and forth between the programs.
Snaptrude already offers bi-directional support for Autodesk Revit, a workflow that we explore in this AEC Magazine article - www.tinyurl.com/Snap-Revit
■ www.snaptrude.com
Spacio has added an AI-powered façade generation feature to its building design software.
The new feature is designed to significantly speed up early-stage façade design, offering ‘curated façade presets’ that ‘intelligently adapt’ to a building’s geometry. It also integrates with both manual and generative design workflows, and provides ‘instant visualisation’ of different architectural expressions.
“We’re not replacing creativity – we’re amplifying it,” said Spacio co-founder André Agi, whose software combines sketching with ‘instant 3D models’, and real-time analysis. “These tools let architects focus on what truly matters: creating exceptional spaces.”
Spacio is also launching a freemium version of its software and is building a community on Discord.
■ www.spacio.ai
V-Ray 7 for SketchUp and V-Ray 7 for Rhino, the latest releases of the photorealistic rendering plug-ins, include support for 3D Gaussian Splats, a technology that enables the rapid creation of complex 3D environments from photos or video.
By adding native support to V-Ray, SketchUp and Rhino users can now place buildings in context or render rich, detailed environments that can appear in reflections and receive shadows.
V-Ray 7 also includes new features for creating interactive virtual tours — immersive, panoramic experiences that can be customised with floor plans, personalised hotspots and other contextual details that will highlight a space’s attributes.
There have also been several improvements to V-Ray GPU rendering, including support for caustics and the ability to use system RAM for textures to free up GPU memory.
■ www.chaos.com
Elecosoft, a specialist in building lifecycle software, has announced its strategic merger with Pemac, a provider of cloud-based computerised maintenance management software (CMMS).
According to Elecosoft, the move will enhance its ability to deliver asset management solutions across manufacturing, life sciences, and healthcare, with a focus on compliance and operational efficiency. With offices in Cork and Dublin, Pemac will also expand Elecosoft’s presence in the Irish market.
■ www.elecosoft.com
Bentley Systems has appointed James Lee as chief operating officer (COO). Lee joins from Google, where he was general manager overseeing startups and artificial intelligence (AI) operations at Google Cloud ■ www.bentley.com
German tech investor Maguar Capital has acquired a majority stake in hsbcad, a specialist in offsite timber construction software. Hilde Sevens, who previously held roles at Nemetschek SCIA, Autodesk and Siemens PLM, takes over as CEO ■ www.hsbcad.com
Construction tech startup Automated Architecture (AUAR) has been awarded a Smart Grant of £341K by Innovate UK. The grant will help AUAR scale up its building system and micro-factory platform to manufacture mid-rise timber housing of up to six storeys ■ www.auar.io
Frilo 2025, the latest release of the structural analysis software, includes a direct interface to Allplan BIM, a new PLUS program SLS+ for the design of splice connections, and the option of designing one- and two-sided transverse joints to timber beams ■ www.frilo.eu/en
Hexagon’s Asset Lifecycle Intelligence (ALI) division has acquired CAD Service, a developer of advanced visualisation tools used to integrate CAD drawings, BIM models, and reality capture data into HxGN EAM, Hexagon’s asset management solution ■ www.hexagon.com
AEC and CAD solution service provider Mervisoft GmbH and software development company AMC Bridge have formed a strategic partnership to expand the range of development services offered to AEC firms in the DACH region (Germany, Austria, and Switzerland) ■ mervisoft.de ■ amcbridge.com
Solibri, part of the Nemetschek Group, has acquired Xinaps, a specialist in BIM model QA software. Solibri has also announced Solibri CheckPoint, a new cloud-based model checking solution that connects directly to Autodesk Construction Cloud (ACC), BIM 360, and Procore.
Solibri CheckPoint is designed to help AEC firms ensure that their BIM projects, consisting of native Revit and IFC files, comply with ‘robust standards’.
The software includes customisable rules for model checking. Users can run clash detection, data validation, and free space checks to find issues early in the design phase, then assign and track issues directly within the platform.
For data validation, ‘Property check’ allows users to check the quality of model data, verifying that model elements meet required data standards for accuracy, consistency, and compliance.
Free space checks allow users to check for clearance around critical elements to prevent obstructions and maintain functional, safe spaces, ensuring that target elements maintain the required spaces, dimensions, and alignments.
■ www.solibri.com/checkpoint
Xyicon has updated the Revit add-in for its information modelling platform, which is designed to give non-AEC professionals, such as building owners and project managers, real-time access to data embedded within Revit RVT files.
Xyicon’s information modelling platform, often used for planning and operations, centralises both graphical and non-graphical data into integrated 2D/3D models. The Revit add-in is designed to address the disconnect between AEC professionals and non-AEC project teams, by allowing anyone to work in a functional BIM environment.
BIM tools like Revit, as Xyicon explains, are built exclusively for AEC professionals, often leaving non-AEC stakeholders to rely on traditional methods like PDF-based diagrams and spreadsheets. These manual workflows are said to limit collaboration, especially for those without modelling expertise or access to BIM software.
With Xyicon’s Revit add-in, any user can view and update the Revit BIM model directly and contribute to its progress. For example, through the Xyicon platform, users can lay out new or additional furniture, assets and equipment, move placements, rotate positions, view and edit parameters, or delete assets. At the same time, AEC professionals retain full control over what gets synced back to Revit. According to the developers, this ensures alignment and accuracy in the final model.
■ www.xyicon.com
Tektome is a new AI platform for processing, automation, and quality checking of architectural design data through natural language.
“Dark data” can be analysed across various formats — including CAD, BIM, PDF, Word, and Excel — with the platform automatically identifying, structuring, and extracting insights.
■ www.tektome.com
BIMlogiq Copilot, an AI-powered tool for automating tasks in Revit, now features an enhanced code generation model, several new public commands available to all users, and the ability to share saved commands with others
■ www.bimlogiq.com
Nvidia is making it easier to build AI agents and creative workflows on workstations and PCs. The company’s new AI foundation models — neural networks trained on huge amounts of raw data — are optimised for performance on Nvidia GPUs ■ www.nvidia.com
Rose is a new Revit plug-in designed to streamline BIM classification. The software uses a multimodal AI model to evaluate Revit families and family instances, then updates them with a normalised name parameter. Rose uses computer vision to understand elements ■ www.ulama.tech/products/rose
LookX V3.0, the latest release of the architectural-focused text to image tool, is said to be able to generate visuals that are ‘virtually indistinguishable’ from real-world designs. The new AI model can focus on details such as intricate façades or complex structural elements and understand prompts better ■ www.lookx.ai
Nvidia DLSS 4, the latest release of the suite of neural rendering technologies that use AI to boost real-time 3D performance, will soon be supported in visualisation software – D5 Render, Chaos Vantage and Unreal Engine.
The headline feature of DLSS 4, ‘Multi Frame Generation’, brings revolutionary performance versus traditional native rendering, according to Nvidia.
Multi Frame Generation is an evolution of Single Frame Generation, which was introduced in DLSS 3 to boost frame rates on Nvidia Ada generation GPUs by using AI to generate a single frame between every pair of traditionally rendered frames.
In DLSS 4, Multi Frame Generation takes this one step further by using AI to generate up to three additional frames between traditionally rendered frames. The feature is available exclusively on the new Blackwell-based Nvidia RTX 50 Series GPUs (see page WS40).
Multi Frame Generation can also work in tandem with other DLSS technologies including super resolution (where AI outputs a higher-resolution frame from a lower-resolution input) and ray reconstruction (where AI generates additional pixels for intensive ray-traced scenes). When these technologies are combined, it means 15 out of every 16 pixels are generated by AI — much faster than rendering pixels in the traditional way.
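Nvidia’s sums are easy to verify. Assuming 4x super resolution (one pixel in four rendered per frame, for example a 1080p input for a 4K output) plus three AI frames for each rendered one, the traditionally rendered share works out at one pixel in sixteen. A quick sketch:

```python
# Back-of-envelope check of the "15 out of every 16 pixels" claim,
# assuming 4x super resolution and Multi Frame Generation adding
# three AI frames per traditionally rendered frame.
rendered_pixel_share = 1 / 4   # super resolution: share of pixels rendered per frame
rendered_frame_share = 1 / 4   # one rendered frame in every four displayed
traditional_share = rendered_pixel_share * rendered_frame_share
print(f"AI-generated share of displayed pixels: {1 - traditional_share:.4f}")
# -> 0.9375, i.e. 15 of every 16 pixels
```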
According to Nvidia, enabling DLSS 4 in D5 Render can deliver a four-fold increase in frame rates, making navigation of complex scenes much smoother.
■ www.nvidia.com
The Nemetschek Group has previewed AI Assistant, an AI-agent-based technology for Archicad that builds on the AI layer the Group announced in October 2024. The AI Assistant will be embedded directly into Archicad as an integrated AI chatbot, and there are plans to expand the integration to other Group brands.
According to Nemetschek, AI Assistant will streamline creative exploration while saving time and ensuring quality and compliance. It will feature product knowledge, industry insights, BIM model queries, and the integration of AI Visualizer, a text-to-image generator powered by Stable Diffusion.
■ www.nemetschek.com
In AEC, AI rendering tools have already impressed, but AI model creation has not – so far. Martyn Day spoke with Greg Schleusner, director of design technology at HOK, to get his thoughts on the AI opportunity
One can’t help but be impressed by the current capabilities of many AI tools. Standout examples include Gemini from Google, ChatGPT from OpenAI, Musk’s Grok, Meta AI and now the new Chinese wunderkind, DeepSeek.
Many billions of dollars are being invested in hardware. Development teams around the globe are racing to create an artificial general intelligence, or AGI, to rival (and perhaps someday, surpass) human intelligence.
In the AEC sector, R&D teams within all of the major software vendors are hard at work on identifying uses for AI in this industry. And we’re seeing the emergence of start-ups claiming AI capabilities and hoping to beat the incumbents at their own game.
However, beyond the integration of ChatGPT front ends, or yet another AI renderer, we have yet to feel the promised power of AI in our everyday BIM tools.
The rendering race
The first and most notable application area for AI in the field of AEC has been rendering, with the likes of Midjourney, Stable Diffusion, Dall-E, Adobe Firefly and Sketch2Render all capturing the imaginations of architects.
While the price of admission has been low, challenges have included the need to find the right words to describe an image (there is, it seems, a whole art to writing prompting strategies) and then somehow remain in control of the AI’s output through subsequent iterations.
In this area, we’ve seen the use of LoRAs (Low Rank Adaptations), which encapsulate trained concepts or styles and can ‘adapt’ a base Stable Diffusion model, and ControlNet, which enables precise structural control to deliver impressive results in the right hands. For those wishing to dig further, we recommend familiarising yourself with the amazing work of Ismail Seleit and his custom-trained LoRAs combined with ControlNet (www.instagram.com/ismailseleit).
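For the curious, the sketch below shows roughly how a LoRA and ControlNet are combined on top of a base Stable Diffusion model using the open source diffusers library. The model IDs, LoRA file and edge-map input are placeholders, not Seleit’s actual assets.

```python
# Sketch: applying a custom-trained LoRA plus ControlNet to a base
# Stable Diffusion model with Hugging Face's diffusers library.
# Model IDs and the LoRA file below are illustrative placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")  # requires an Nvidia GPU
pipe.load_lora_weights("path/to/lora/dir",
                       weight_name="architectural-style.safetensors")

# The ControlNet input (here, an edge map of a massing sketch) pins down
# the structure; the prompt and LoRA steer the style.
edges = load_image("massing-sketch-edges.png")
image = pipe("concrete museum facade, overcast light",
             image=edges, num_inference_steps=30).images[0]
image.save("concept.png")
```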
For those who’d prefer not to dive so deep into the tech, SketchUp Diffusion (www.tinyurl.com/SketchUp-Diffusion), Veras (www.evolvelab.io/veras) and AI Visualizer for Archicad, Allplan and Vectorworks (www.tinyurl.com/AI-visualizer) have helped make AI rendering more consistent and likely to lead to repeatable results for the masses.
However, when it comes to AI ideation, at some point, architects would like to bring this into 3D – and there is no obvious way to do this. This work requires real skill, interpreting a 2D image into a Rhino model or Grasshopper script, as demonstrated by the work of Tim Fu at Studio Tim Fu (www.timfu.com).
It’s possible that AI could be used to auto-generate a 3D mesh from an AI conceptual image, but this remains a challenge, given the nature of AI image generation. There are some tools out there which are making some progress, by analysing the image to extract depth and spatial information, but the resultant mesh tends to come out as one lump, or as a bunch of meshes, incoherent for use as a BIM model or for downstream use.
Back in 2022, we tried taking 2D photos and AI-generated renderings from Hassan Ragab into 3D using an application called Kaedim (www.tinyurl.com/AEC-tsunami). But the results were pretty unusable, not least because at that time Kaedim had not been trained on architectural models and was more aimed at the games sector.
Of course, if you have multiple 2D images of a building, it is possible to recreate a model using photogrammetry and depth mapping.
Text to 3D
It’s possible that the idea of auto-generating models from 2D conceptual AI output will remain a dream. That said, there are now many applications coming online that aim to provide AI generation of 3D models from text-based input.
Image: Stable Diffusion architectural image courtesy of James Gray. Generated with ModelMakerXL, a custom-trained LoRA by Ismail Seleit. Follow Gray @ www.linkedin.com/in/james-gray-bim
The idea here is that you simply describe in words the 3D model you want to create – a chair, a vase, a car – and AI will do the rest. AI algorithms are currently being trained on vast datasets of 3D models, 2D images and material libraries.
While 3D geometry has mainly been expressed through meshes, there have been innovations in modelling geometry with the development of Neural Radiance Fields (NeRFs) and Gaussian splats, which represent colour and light at any point in space, enabling the creation of photorealistic 3D models with greater detail and accuracy.
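For readers who want the underlying maths: the original NeRF formulation (Mildenhall et al., 2020) learns a field $F_\Theta(\mathbf{x}, \mathbf{d}) = (\mathbf{c}, \sigma)$ that maps a 3D position and viewing direction to colour and density, and renders each pixel by integrating along its camera ray $\mathbf{r}(t)$:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)$$

Gaussian splatting trades this expensive per-ray integration for a sum of explicit 3D Gaussians rasterised onto the image, which is what makes it fast enough for the real-time uses described here.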
Today, we are seeing a high number of firms bringing ‘text to 3D’ solutions to market. Adobe Substance 3D Modeler has a plug-in for Photoshop that can perform text-to-3D. Autodesk demonstrated similar technology — Project Bernini — at Autodesk University 2024 (www.tinyurl.com/AEC-AI-Autodesk).
However, the AI-generated output of these tools seems to be fairly basic — usually symmetrical objects and more aimed towards creating content for games.
In fact, the bias towards games content generation can be seen in many offerings. These include Tripo (www.tripo3d.ai), Kaedim (www.kaedim3d.com), Google DreamFusion (www.dreamfusion3d.github.io) and Luma AI Genie (www.lumalabs.ai/genie).
There are also several open source alternatives. These include Hunyuan3D-1 (www.tinyurl.com/Hunyuan3D), Nvidia’s Magic 3D (www.tinyurl.com/NV-Magic3D) and Edify (www.tinyurl.com/NV-edify).
Of course, the technology is evolving at an incredible pace. Indeed, Krea’s text-to-image model (www.krea.ai) and Hunyuan 3D’s image-to-3D (www.tinyurl.com/hunyuan-3D) have just added promising 2D to 3D capabilities. With Krea, a user can click on an image of a car, automatically isolating it from its background, then rotate it as if by magic. The software auto-generates the fully bitmapped 3D model. Some architects have shared their experiments on LinkedIn, with impressive results, producing non-symmetrical, complex forms from AI-generated images – even creating interior layouts.
Louis Morion, architect at Architekturbüro Leinhäupl + Neuber, believes the art of image-to-3D is to create white, architectural-style images with no textures and no noise, as this gives the best results.
When AEC Magazine spoke to Greg Schleusner of HOK on the subject of text-to-3D, he highlighted D5 Render (www.aecmag.com/tag/d5-render), which is now an incredibly popular rendering tool in many AEC firms.
The application comes with an array of AI tools to create materials, texture maps and atmosphere matches from images. It supports AI scaling and has incorporated Meshy’s (www.meshy.ai) text-to-3D generator for creating content in-scene.
That means architects could add in simple content, such as chairs, desks, sofas and so on — via simple text input during the arch viz process. The items can be placed in-scene on surfaces with intelligent precision and are easily edited. It’s content on demand, as long as you can describe it. But AI-generated stand-ins raise questions when it comes to work that will be shown to clients. So, while there is certainly potential in these types of generative tools, mixing fantasy with reality in this way doesn’t come problem-free.
It may be possible to mix the various model generation technologies. As Schleusner put it: “What I’d really like to be able to do is to scan or build a photogrammetric interior using a 360-degree camera for a client and then selectively replace and augment the proposed new interior with new content, perhaps AI-created.”
Gaussian splat technology is getting good enough for this, he continued, while SLAM laser scan data is never dense enough. “However, I can’t put a Gaussian splat model inside Revit. In fact, none of the common design tools support that emerging reality capture technology, beyond scanning. In truth, they barely support meshes well.”
LLMs and AI
At the time of writing, DeepSeek has suddenly appeared like a meteor, seemingly out of nowhere, intent on ruining the business models of ChatGPT, Gemini and other providers of paid-for AI tools.
Schleusner was early into DeepSeek and has experimented with its script and code-writing capabilities, which he described as very impressive.
LLMs, like ChatGPT, can generate Python scripts to perform tasks in minutes, such as creating sample data, training machine learning models, and writing code to interact with 3D data.
Schleusner is finding that AI-generated code can accomplish these tasks relatively quickly and simply, without needing to write all the code from scratch himself.
“While the initial AI-generated code may not be perfect,” he explained, “the ability to further refine and customise the code is still valuable. DeepSeek is able to generate code that performs well, even on large or complex tasks.”
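To give a flavour of what that looks like in practice, here is a minimal sketch of the sort of script an LLM will happily produce on request: a coarse clash check run directly on two sets of raw XYZ vertex coordinates, much like the agent experiment Schleusner describes later in this piece. The file names, CSV layout and use of axis-aligned bounding boxes are our assumptions, not his setup.

```python
# Minimal sketch of LLM-style generated code: a coarse clash check on
# two objects known only by the XYZ positions of their vertices, using
# axis-aligned bounding boxes. File names and CSV layout are hypothetical.
import csv

def bounding_box(path):
    """Return (min_xyz, max_xyz) for a CSV of x,y,z vertex rows."""
    with open(path) as f:
        pts = [tuple(map(float, row[:3])) for row in csv.reader(f) if row]
    return (tuple(min(p[i] for p in pts) for i in range(3)),
            tuple(max(p[i] for p in pts) for i in range(3)))

def boxes_clash(a, b, clearance=0.0):
    """True if the boxes overlap (or come within `clearance`) on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] - clearance <= bmax[i] and
               bmin[i] - clearance <= amax[i] for i in range(3))

# Bounding-box overlap is a coarse proxy for a true geometric clash,
# but it never needs to open the authoring tool or build a model.
duct = bounding_box("duct_vertices.csv")
beam = bounding_box("beam_vertices.csv")
print("Clash!" if boxes_clash(duct, beam) else "Clear.")
```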
With AI, much of the expectation of customers centres on the addition of these new capabilities to existing design products. For instance, in the case of Forma, Autodesk claims the product uses machine learning for real-time analysis of sunlight, daylight, wind and microclimate.
However, if you listen to AI-proactive firms such as Microsoft, executives talk a lot about ‘AI agents’ and ‘operators’, built to assist firms and perform intelligent tasks on their behalf.
Microsoft CEO Satya Nadella is quoted as saying, “Humans and swarms of AI agents will be the next frontier.” Another of his big statements is that “AI will replace all software and will end software as a service.” If true, this promises to turn the entire software industry on its head.
Today’s software as a service, or SaaS, systems are proprietary databases/silos with hard-coded business logic. In an AI agent world, these boundaries would no longer exist. Instead, firms will run a multitude of agents, all performing business tasks and gathering data from any company database, files, email or website. In effect, if it’s connected, an AI agent can access it.
At the moment, to access certain formatted data, you have to open a specific application and maybe have deep knowledge to perform a range of tasks. An AI agent might transcend these limitations to get the information it needs to make decisions, taking action and achieving business-specific goals.
AI agents could analyse vast amounts of data, such as building designs, to predict structural integrity, immediately flag up if a BIM component causes a clash, and perhaps eventually generate architectural concepts. They might also be able to streamline project management by automating routine tasks and providing real-time insights for decision-making.
The main problem is going to be data privacy, as AI agents require access to sensitive information in order to function effectively. Additionally, the transparency of AI decision-making processes remains a critical issue, particularly in high-stakes AEC projects where safety, compliance and accuracy are paramount.
On the subject of AI agents, Schleusner said he has a very positive view of the potential for their application in architecture, especially in the automation of repetitive tasks. During our chat, he demonstrated how a simple AI agent might automate the process of generating something as simple as an expense report, extracting relevant information, both handwritten and printed, from receipts.
He has also experimented by creating an AI agent for performing clash detection on two datasets, which contained only XYZ positions of object vertices. Without creating a model, the agent was able to identify if the objects were clashing or not. The files were never opened. This process could be running constantly in the background, as teams submitted components to a BIM model. AI agents could be a game-changer when it comes to simplifying data manipulation and automating repetitive tasks.
Another area where Schleusner feels that AI agents could be impactful is in the creation of customisable workflows, allowing practitioners to define the specific functions and data interactions they need in their business, rather than being limited by pre-built software interfaces and limited configuration workflows.
Most of today’s design and analysis tools have built-in limitations. Schleusner believes that AI agents could offer a more programmatic way to interact with data and automate key processes. As he explained, “There’s a big opportunity to orchestrate specialised agents which could work together, for example, with one agent generating building layouts and another checking for clashes. In our proprietary world with restrictive APIs, AI agents can have direct access and bypass the limits on getting at our data sources.”
We are so used to the concept of programs and applications that it’s kind of hard to digest the notion of AI agents and their impact. Those familiar with scripting are probably also constrained by the notion that the script runs in a single environment. By contrast, AI agents work like ghosts, moving around connected business systems to gather, analyse, report, collaborate, prioritise, problem-solve and act continuously. The base level is a co-pilot that works alongside a human performing tasks, all the way up to fully autonomous operation, uncovering data insights from complex systems that humans would have difficulty in identifying.
If the data security issues can be dealt with, firms may well end up with many strategic business AI agents running and performing small and large tasks, taking a lot of the donkey work out of extracting value from company data, be that an Excel spreadsheet or a BIM model.
AI agents will be key IP tools for companies and will need management and monitoring. The first hurdle to overcome is realising that the nature of software, applications and data is going to change radically, and in the not-too-distant future.
For the foreseeable future, AEC professionals can rest assured that AI, in its current state, is not going to totally replace any key roles — but it will make firms more productive.
The potential for AI to automate design, modelling and documentation is currently overstated, but as the technology matures, it will become a solid assistant. And yes, at some point years hence, AI with hard-coded knowledge will be able to automate some new aspects of design, but I think many of us will be retired before that happens. However, there are benefits to be had now, and firms should be experimenting with AI tools.
As we move into 2025, we ask five leading AEC software developers to share their observations and projections for BIM 2.0
The AECO industry has a lot to be proud of. You have constructed iconic skyscrapers, completed expansive highway systems, and restored historic monuments like Notre-Dame de Paris.
But there’s more work to do. We hold the responsibility of designing and making our homes, workplaces, and communities. We must also solve for complex global challenges like housing growing populations and improving the resiliency of the built world to withstand the impacts of climate change.
Connected data is at the core of how we will solve these challenges. Better access to data will enable new ways of working that improve collaboration, productivity, and sustainability.
Today, AECO firms have more data than ever before, and their storage needs grow by 50% each year. While it’s beneficial to have every piece of information you could ever need about a project digitised, if the data is locked in files, teams can waste hours trying to find the specs for that third-floor utility closet door.
We’re at the start of the next major digital transformation for the AECO industry. And unlocking data’s value is the first step towards building a better future together.
The ongoing transformation of BIM will empower teams to define their desired project outcomes, like maximum cost or carbon impacts, from the earliest stages of design and planning. At Autodesk, we believe outcome-based BIM is the solution for smarter, more sustainable and resilient ways of designing and making the built environment.
This future starts with data that is granular, accessible, and open. The traditional silos that have long characterised AECO are breaking down, making way for a more connected approach. For example, teams in the design phase can inform product and system performance criteria as documented in specifications – such as which HVAC systems meet the project’s sustainability and energy efficiency requirements. This aids the contractor in making the most informed decision on which product gets selected and installed. And then, in the operations phase, owners would have the spec data on hand to measure the asset’s performance to understand if it achieved its target energy usage. The benefits of enhanced data accessibility across the asset lifecycle are truly unlimited.
Just last year, we launched the AECO Data Model API, an open and extensible solution that allows data to flow across project phases, stakeholders, and asset types. Teams save time by eliminating manual and error-prone extraction of model data. And access to project data is democratised, leading to better decisions, increased transparency, and trust. This vision is how we’ll unlock the future of BIM.
In the connected, cloud-based world of granular data, teams will be able to move a project from one tool to another, and across production environments, with all their data in context. Designers will no longer have to re-create the same pump multiple times when it’s already been built by another designer. Contractors won’t need to save that pump’s spec data to different spreadsheets, and risk losing track of which version is approved.
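The Data Model API is built around GraphQL queries. As a purely illustrative sketch, not the production schema, a request for granular element data might look something like this (the endpoint, field names and token handling are placeholders):

```python
# Illustrative sketch only: querying granular element data over a
# GraphQL-style API. The endpoint and field names are hypothetical
# placeholders, not Autodesk's actual schema.
import requests

QUERY = """
query ($projectId: ID!) {
  elements(projectId: $projectId, filter: {category: "Doors"}) {
    name
    properties(names: ["FireRating", "Width"]) { name value }
  }
}
"""

resp = requests.post(
    "https://developer.example.com/aeco/graphql",   # placeholder endpoint
    json={"query": QUERY, "variables": {"projectId": "abc123"}},
    headers={"Authorization": "Bearer <access-token>"},
    timeout=30,
)
for element in resp.json()["data"]["elements"]:
    print(element["name"], element["properties"])
```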
Throughout this next year, we predict that more design and make technology companies will embrace openness and interoperability to support seamless data sharing.
Data to connect design and construction
Connected data will help our industry understand the health and performance of its business. In fact, companies that lead in leveraging data see a 50% increase in average profit growth rate compared to beginners. Data is especially valuable in bridging the gap between design and construction. With Autodesk Docs, our common data environment, we are connecting data across different phases of BIM. It’s a source of truth for bringing granular data and files together from design to construction. With a digital thread that connects every stage of the project lifecycle, teams can course-correct early and often to save time, money, and waste.
A great example of this is WSP’s work on Manchester Airport. The team designed a ‘kit of parts’ that includes a lift, staircase, lobbies, and openings for the air bridge as part of the new Pier 2 construction. By utilising the kit of parts, an application of industrialised construction, and rationalising design into fewer, much larger assemblies, WSP, with contractor Mace, significantly reduced the duration of work onsite. The approach also reduced the amount of construction waste. This process was made possible through the seamless transfer of data from design to construction.
As we transition to more cloud-connected workflows, we’ll see more use cases of AI-generated insights informing design, engineering, and construction at the start of projects, helping teams achieve desired outcomes such as minimising carbon impacts. For example, firms like Stantec are using AI-powered solutions to understand and test in real time the embodied carbon impacts of their material design decisions from day one. This is significant because early concept planning for buildings offers the greatest opportunity for impact on carbon and the lowest cost risk for design changes.
As AI continues to progress in daily applications, it will enable our industry to optimise the next factory, school, or rail system. Because with data, AI knows the past and can help lead to a more sustainable future.
The expanding impact of legislation
To realise the untapped value of our data, it is critical to remember that it all starts with getting data connected and structured in one place.
In fact, another trend we predict in 2025 is that information requirements legislation will continue to grow with the recent introduction of the EU’s Digital Passport Initiative. Alongside existing mandates like ISO 19650, being able to classify, track and validate data across the asset lifecycle will become essential to successfully delivering on projects.
These regulations mean that AECO firms will need to invest in a Common Data Environment that will support their firm’s ability to track, manage and control project data at the granular level.
Data to supercharge AI progress
Data and AI have a symbiotic relationship. Better data – both in quantity and quality – is the fuel to unlocking the potential of AI and improving workflows.
Over the next year, we expect to see more practical uses of AI continue to make big strides, such as the day-to-day applications of AI that solve real-world problems. The industry is ready, as 44% of AECO professionals view improving productivity as a top use case for AI.
The AECO industry is poised for a data-driven transformation. Over the next year, we’ll see continued shifts towards connected data that will help us achieve new levels of innovation, sustainability, and resiliency for the built environment. Firms that embrace granular, data-centric ways of working will be able to use this information from the office to the job site and share just the right amount of data with collaborators anywhere in the world, using any tool they choose.
The journey ahead is full of opportunities, and together, we can shape the AECO industry’s future for the better.
Today’s infrastructure projects are becoming more complex. Demand for better, more resilient infrastructure is increasing in the face of rapid urbanisation, climate change, the energy transition, and more. The sheer scale of data created from design to construction to operations makes infrastructure a prime area for AI disruption. AI is not just a trend, but a transformative force that will shape the AEC industry and the built environment, paving the way for smarter, more efficient project delivery and asset performance.
Of course, AI isn’t new to infrastructure sectors. We recognise its potential to process vast amounts of data to provide insights that were previously unattainable. Because more than 95% of the infrastructure that will be in use by 2030 already exists today, owner-operators need to ensure existing infrastructure is resilient, efficient, and capable of meeting current and future demands. AI-driven asset analytics generate insights into the condition of existing infrastructure assets, while eliminating costly, manual activities. AI allows operators to predict when maintenance is needed before failures occur. AI agents analyse digital twins of infrastructure assets—bridges, roads, dams, or water networks—to identify issues and recommend preventive action, avoiding costly breakdowns or safety hazards.
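As a toy illustration of the idea, and nothing like a production asset-analytics pipeline, the sketch below trains a classifier on synthetic sensor readings to flag assets at risk; every feature, threshold and number in it is invented:

```python
# Toy sketch of AI-driven maintenance prediction on synthetic sensor
# data: train a classifier to flag assets likely to fail soon.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: vibration (mm/s), strain (microstrain), age (years)
X = np.column_stack([rng.gamma(2, 1.5, n),
                     rng.normal(200, 60, n),
                     rng.uniform(0, 50, n)])
# Synthetic ground truth: failures grow more likely with vibration and age
risk = 0.04 * X[:, 0] + 0.015 * X[:, 2] + rng.normal(0, 0.1, n)
y = (risk > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
# In practice, the asset analytics described above would feed a model
# like this from digital twin and inspection data, not random numbers.
```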
But when we take a step back, AI also has huge potential in the design phase of the infrastructure lifecycle. In design, AI can automate repetitive tasks—such as documentation and annotation—so that engineers can focus on higher-value activities.
For example, through a copilot, professionals can quickly create, revise, and interact with requirements documentation and 3D site models through natural language to automatically make real-time design changes with precision and ease. Or, with a design agent, they can evaluate thousands of layout options and suggest alternative designs in real time, helping them make better design decisions sooner, saving time and money. We have calculated that users can accelerate drawing production by up to ten times, and improve drawing accuracy, using AI-powered annotation, labelling, and sheeting that automatically places labels and dimensions according to organisational standards optimised for legibility and aesthetics.
As we look to the future, the possibilities seem endless. But to begin to understand what’s possible for tomorrow, we need to be able to harness data – the foundation of AI.
For the effectiveness of AI to take shape, we need to leverage the power of open data ecosystems. Open ecosystems break down barriers and facilitate seamless data exchange across platforms, systems, disciplines, organisations, and people. They ensure secure information flow and collaboration are unimpeded, without vendor lock-in, and preserve context and meaning— ultimately enabling more effective AI-driven analysis and decision-making over the infrastructure’s lifecycle.
This digital thread allows users to connect and align data from various sources — from the engineering model to the subsurface and from enterprise information to operational data, such as IoT sensors and more — to provide the full context needed for smarter decision-making. Still, that is not enough. To truly unlock the value of AI, the digital twin must be augmented by 3D geospatial capabilities and intelligence. A 3D geospatial view is the most intuitive way for owner-operators and engineering services providers to search for and query information about infrastructure networks and assets. After all, infrastructure is of geospatial scale.
A 3D geospatial view changes the vantage point of an infrastructure digital twin from the engineering model to planet Earth — geolocating the engineering model, and all the necessary data about the surrounding built and natural environment. It enables a comprehensive digital twin of both the built and natural environment, with astonishing user experiences and scale, from millimetre-accurate details of individual assets to vast information about widespread infrastructure networks.
AI’s true power will be measured by its ability to improve outcomes — more sustainable designs, faster and safer builds, and more reliable infrastructure
By adding AI to a data-rich digital twin of the built and natural environment, we can create better and more resilient infrastructure. AI-driven automation, detection, and optimisation can take organisational performance and data-driven decision-making to new levels throughout the lifecycle of a project or asset. Generative AI can help significantly boost productivity and accuracy, while machine learning algorithms can identify inefficiencies, forecast maintenance, and suggest design modifications before physical construction commences.
This powerful combination will unlock unprecedented efficiency, sustainability, and resilience, transforming how we design, build, and maintain the world around us. With open data ecosystems fostering limitless innovation and AI continuously powering and automating 3D-contextualised digital twins, we are entering an era of smarter infrastructure.
Our mission at Graphisoft has always been to empower architects with the tools they need to bring their visions to life. As we look ahead to 2025 and beyond, I am more excited than ever about the transformative impact of emerging technologies like AI, BIM, and cloud-based collaboration. These innovations will reshape how design teams work, pushing the boundaries of what’s possible.
One of the most thrilling changes on the horizon is the growing influence of AI in architecture. We’re already seeing the early signs of this shift, and it’s clear that we’re just scratching the surface. The first wave of AI tools to emerge has introduced inspiring, time-saving capabilities — helping designers generate and refine initial concepts quickly or automate repetitive tasks. AI concept visualisers are already being used in practice. Capabilities like automated drawings are predicted to augment creative workflows by automatically completing repeatable tasks with increasing accuracy and quality. Through beta releases followed by full product integration, teams have used these first use cases to stress test AI development layers.
In the short term, I expect to see more sophisticated AI capabilities built on such layers emerge en masse, both as standalone tools and integrated solutions that tap into existing data repositories, enhancing everything from design to analysis.
Zooming out even further, we will also see AI agents evolve. More than an automation engine for repetitive tasks, the vision for AI agents is to act autonomously, proactively interact, solve problems, and execute more complex workflows.
Imagine that you have a project that is far along in the design and refinement stage – suddenly, as it inevitably does, a new development necessitates a change in the original design. An AI agent will be able to lead that change from the original file across all affected assets and communication touchpoints automatically, eliminating all the manual redrawing, communication, and quality assurance checks that later-stage design changes usually require. A mature version of this vision will have the power to disrupt the whole construction value chain, and it’s one of the most exciting emerging trends to keep an eye on.
We can expect AI capabilities to be applied to sustainability problems, another industry topic that has grown in urgency.
AI’s ability to process vast amounts of data offers architects the chance to make informed decisions early in the design process, supporting more efficient, more environmentally conscious design results. It’s this front of the technology that will effectively impact the 50% of global carbon emissions that buildings are responsible for. And it’s the direction we’ll see the technology take, as data-driven design begins to inform initial concepts with climate, location, light, and other variables to shape the design essence of projects.
With the power of data-driven scenario building capabilities, designers and clients can explore new project opportunities in greater detail and research multiple models in reuse and refurbishment projects. Well-known design challenges, such as how to repurpose large sports arenas or shopping centres, will benefit from value-driven evaluations supported by simulations that factor in the surrounding environment.
Applying insights using historical data from similar projects will also fuel sustainability efforts. Compounded learnings from hundreds of thousands of buildings become valuable data points for intelligent design systems. With the help of AI, architects can tap into this historical data to optimise future buildings, enhance energy efficiency, cut down on waste, and reduce environmental impact. By learning from past successes, we can make smarter decisions and avoid repeating the same mistakes. Further along the project lifecycle, the intelligent application of historical data will augment the power of digital twins to support even more sustainable and efficient facilities management.
Collaboration is another area undergoing rapid changes. Remote and hybrid teamwork solutions boosted in the past years have continued to evolve. Cloud-based platforms like BIMcloud are revolutionising how design teams work together, connecting teammates in different cities in real time and setting the stage for more agile collaboration. The result is teams that are more dispersed yet also more aligned, increasingly efficient project cycles, higher accuracy rates, and a better-built environment.
In dispersed, cross-cultural environments, seeing is believing. Mobile, device-agnostic viewing and collaboration tools bridge teams at various stages of the project lifecycle, significantly reducing errors and inefficient rework. By keeping stakeholders on the same page using a shared visual language, communication barriers dissolve and the stage is set for more creative input to fuel the design. As ideas and concepts are communicated more clearly with the aid of translation and collaborative design tools, AI and different stakeholders are better poised to explore unique building design ideas, merging the ‘best of both (or more) worlds,’ fusing solutions from different regions to create sustainable and fascinating results.
To see the benefits of visual collaboration in the long term, design directors need to select the right tools for the right job. At Graphisoft, we emphasise continuity in all our products, ensuring they remain relevant. In addition to ensuring compatibility with all operating systems and design platforms, future-proof products can integrate emerging technology into a seamless user experience. Curating a value-driven toolset — versus a tech-driven one — is essential to capturing the benefits of visual, data-driven collaboration.
As we look ahead to the next three to five years in the industry, it’s worthwhile to look back first. The volatility, disrupted supply chains, and remote work scramble of previous years have created a stronger sense of resilience and confidence in the industry. In many ways, this has been a turning point, and I believe that one of the most critical investments companies can make is in continuous training for their employees.
Client relationships will also change. As AI advances solve previous rework and time constraint challenges, the emphasis will shift to maintaining tighter communication with clients from concept to handover. Educating and onboarding stakeholders can maximise technological gains but also requires a new collaborative mindset and fluency with shared communication infrastructure.
We view AI as a tool to extend human ingenuity, not replace it. As with any set of tools, using it correctly will require a set of skills, especially as use cases and individual tools mature and grow in complexity. The opportunities that autonomous and intelligent design present are only beginning to emerge, and this is the time to get in ‘on the ground floor.’ Just as we invest in the continuity of our products and services, leaders should also invest in their teams and hone skillsets that keep pace with these exciting technical developments.
The future of architecture is incredibly exciting, and I do not doubt that with the right tools, training, and mindset, we can create a built environment that is not only more efficient and sustainable but also more aligned with the needs of tomorrow. This is our chance to leave a lasting, positive impact on the world.
By some measures, the pace of change in the AEC industry is driving more change, more innovation and more complexity than at any time in history. It’s also safe to say that the pace of change will never again be as slow as it is today.
The arsenal of technologies AEC professionals use to do their jobs continues to grow. Modern projects have many stakeholders, each providing highly specialised contributions to a project and generating valuable project data. With each stakeholder employing a different stack of hardware and software, each with its own data formats and validation, permissions and security parameters, pulling the data thread across disciplines, systems and project phases has become increasingly arduous. At the same time, it’s never been more clear that consolidating project information is a critical problem to solve.
Just like every other project phase, design requires a new focus on openness and interoperability, allowing connected teams to continue using those models throughout projects and across the complete asset lifecycle – from design to build to maintain. A connected ecosystem allows everyone throughout the asset lifecycle to work with their preferred tools, minimise rework, make informed decisions and collaborate effectively.
From inflexible and siloed to open, connected ecosystems
At Trimble, we’re creating integrations between our products and third-party solutions. We’re also providing tools for construction businesses to build new integrations to solve further data challenges. Over 100 preconfigured integrations are available on Trimble Marketplace, and Trimble App Xchange empowers software developers to build integrations for their customers. App Xchange and Trimble Marketplace are key parts of our commitment to facilitating open, interoperable systems and an automated flow of data between solutions from Trimble and other software vendors.
‘‘ AI agents will reflect and iterate, plan ahead by defining priorities and tasks, and access tools and real-time data, such as information provided by in-field sensors ’’
Compliance with accepted standards is another area of responsibility for tech providers. Trimble software and platforms enable the import and export of a wide range of product-specific and global file formats, including IFC. In addition to our strategic advisory role with BuildingSmart, Trimble also joined the AOUSD Alliance, which is dedicated to promoting the interoperability of 3D content through Universal Scene Description (OpenUSD), helping improve interoperability between SketchUp and other platforms, like Nvidia Omniverse Cloud, for more advanced visualisation.
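As a small illustration of what an open format buys you, a few lines of Python with the open source ifcopenshell library can pull walls and their property sets out of any compliant IFC model, whichever tool authored it (the file name is a placeholder):

```python
# Minimal illustration of open-format interoperability: reading an IFC
# model with the open source ifcopenshell library.
import ifcopenshell
import ifcopenshell.util.element as element_util

model = ifcopenshell.open("project.ifc")   # placeholder file name
for wall in model.by_type("IfcWall"):
    psets = element_util.get_psets(wall)   # property sets as plain dicts
    print(wall.GlobalId, wall.Name, psets.get("Pset_WallCommon", {}))
```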
In the grand scheme of things, no single tech vendor can – or probably should –provide it all. As a result, we’re seeing a broader industry shift away from inflexible, closed technology suites to more open and connected technology ecosystems.
In the field, models can be accessed in Trimble Connect, and with an augmented reality system such as Trimble SiteVision, workers can place and view georeferenced 3D models — above or below the ground — to accurately install or validate field assembly.
Interoperability will unlock emerging tech
We’ve already begun to see the value of AI in AEC. Trimble and other vendors are incorporating AI into customer workflows to enhance decision-making and creativity, and automate repetitive tasks. From automated scan-to-3D workflows to generative design solutions like SketchUp Diffusion, which produces stylised renderings in seconds based on presets or natural language text prompts, AI is a catalyst for creativity and an engine for productivity.
Like BIM, unlocking transformative value from the next wave of AI will depend on interoperability and high-quality data. With more data moving between systems and across disciplines, we can explore advanced forms of AI, such as AI agents. Unlike generative AI, an AI agent works independently, with little to no human interaction, to perform a specific task. AI agents will reflect and iterate, plan ahead by defining priorities and tasks, and access tools and real-time data, such as information provided by in-field sensors.
Numerous agents across disciplines may use the same project data to perform their unique tasks. For example, various agents would use scan data to assist with separate tasks related to estimating, project management, and scheduling. To use this data across different project phases and disciplines, various AI agents must be able to access and understand the data, regardless of which system it originates from.
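For readers who like to see the shape of such things, the stub below illustrates the plan-act-reflect loop in miniature; the sensor feed, threshold rule and rescheduling "tool" are all invented, and a real agent would delegate the reflection step to an LLM:

```python
# Purely illustrative stub of the agent pattern described above:
# observe via tools, reflect, act. The "reflection" here is a trivial
# threshold rule standing in for an LLM call.
def read_sensors():
    # Stand-in for live in-field sensor data
    return {"vibration_mm_s": 9.2}

def reschedule(task, days):
    print(f"Action: pushed '{task}' back {days} day(s)")

def agent_step(goal, observations):
    # A real agent would pass goal + observations to an LLM and let it
    # revise its plan; this stub just applies a fixed rule.
    if observations.get("vibration_mm_s", 0) > 8.0:
        return ("reschedule", ("piling, zone B", 1))
    return ("done", None)

observations = read_sensors()
action, args = agent_step("keep piling vibration within limits", observations)
if action == "reschedule":
    reschedule(*args)
```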
The free movement of data between systems and workflows is the next great productivity breakthrough in AEC. The industry is shifting away from inflexible, closed technology suites to more open and connected ecosystems. Project teams and project owners are demanding the benefits that interoperability brings – not just to task optimisation, but as a way to optimise the entire asset lifecycle.
It’s not a lack of innovation that has the construction industry mired in its low productivity index. Many project teams are actually quick to adopt new technologies and techniques. The problem is the level at which that innovation is happening. Sure, we can make a task easier to do, but the benefit stops right there, with that stakeholder. The next-level challenges are going to be solved through improved data flow across projects, stakeholders and the asset lifecycle.
When we can meet people where they are, with the tools they’re already using, we begin to see the real power of data. For example, an architect uses one tool for conceptual design and then brings that data into their preferred BIM solution to create an architectural model. The architectural model is then used as a reference file to build detailed structural engineering models to automate the fabrication of components. In a connected data environment, such as Trimble Connect, working models can be available to everyone who needs them.
Projects today are far more sophisticated and data-driven than at any time in history, and the need for connected data has never been clearer. The industry has made strides toward more connected workflows and data sharing, but the work continues.
As the ecosystem of AEC hardware and software expands and becomes more complex, interoperability will be the key to unlocking the value of data, extending the value of BIM and maximising emerging technologies.
Design transformed: 2025 predictions from Vectorworks
With 2025 in full swing, the AEC industry is at the forefront of a technological revolution, driven by rapid advancements in artificial intelligence (AI), immersive visualisation tools, and a commitment to sustainable design. These innovations are reshaping the tools architects and designers use and influencing how they think about productivity, creativity, and environmental stewardship. Below, I’ll share some insights into the key trends that are set to redefine the industry and recommendations for design leaders for the year ahead.
Artificial Intelligence: a creative ally
AI continues to evolve as a pivotal tool for the AEC industry. It is not a replacement for creative professionals but a powerful assistant that enables them to focus more on design and creativity. In recent years, AI has proven its potential to handle time-consuming tasks such as automating project schedules, optimising workflows, and generating accurate documentation. This allows architects and designers to direct their energy toward conceptual development and problem-solving.
In 2025, AI is expected to go beyond visualisation and become deeply integrated into the design process. Generative design tools powered by AI will enable professionals to explore innovative forms and solutions that were once unimaginable. These tools will enhance creativity and streamline processes, making it easier to meet tight deadlines and client expectations.
BIM is becoming a standard practice for firms of all sizes, transforming how projects are planned, coordinated, and executed. According to the AIA Firm Survey Report 2024, BIM adoption has surged postpandemic, with mid-sized and large firms leading the way. However, smaller firms are steadily recognising its benefits, particularly in improving workflows and addressing specific project challenges.
In 2025, BIM will continue to be instrumental in achieving sustainability goals. Tools embedded in BIM software now allow designers to conduct energy efficiency analyses and carbon footprint assessments early in the design phase. This capability is crucial as the industry works toward net-zero emission targets for the building sector by 2040. Software providers are refining BIM features to prioritise real-time coordination and seamless documentation. The emphasis on usability ensures that BIM tools are accessible, enabling architects to design smarter and more sustainably. Looking ahead, I anticipate a rise in BIM-driven projects that meet high-performance standards and redefine collaboration among multidisciplinary teams.
Client expectations for project visualisation have shifted dramatically. While traditional blueprints and 2D documentation remain relevant, immersive technologies such as AR and VR are becoming essential for client engagement. These tools offer an unprecedented level of interactivity, allowing clients to walk through their designs and provide informed feedback virtually.
In 2025, I predict a significant expansion of immersive tools tailored to client collaboration. Many platforms now share interactive 3D models, panoramic images, and project files more efficiently in real time. These tools 'wow' clients and foster better decision-making and collaboration. As immersive technologies become more mainstream, they will redefine how architects and clients work together, creating a more transparent and engaging design process.
The urgency of climate change demands a radical shift in how architects and designers approach their work. According to the World Meteorological Organisation’s 2024 State of Climate Services report, the past decade has been the warmest on record, highlighting the need for immediate action. Sustainability has transitioned from being a priority to an industry imperative.
Innovative approaches like wood construction and adaptive reuse are gaining traction as effective ways to reduce a building’s environmental impact. Technologies such as cross-laminated timber (CLT) offer a sustainable alternative to concrete, while adaptive reuse preserves existing structures, conserving resources and minimising waste.
Tools like the Vectorworks Embodied Carbon Calculator enable designers to measure and reduce the carbon footprint of their projects. By integrating data-driven decision-making into the design process, architects can make more sustainable material choices and meet stringent climate goals. Looking ahead, I anticipate the introduction of new metrics and sustainability dashboards that will allow designers to visualise the environmental impact of their decisions in real time, further solidifying sustainability as a core tenet of architectural practice.
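To make the arithmetic behind such tools concrete, the sketch below shows the basic embodied carbon tally they all rest on: material quantity multiplied by an emission factor, summed across the bill of materials. The factors and quantities here are placeholders, not values from the Vectorworks calculator.

```python
# Illustrative embodied-carbon tally: quantities in kg, factors in
# kg CO2e per kg of material. Figures below are placeholders, not real data.
MATERIAL_FACTORS = {"concrete": 0.12, "steel": 1.85, "clt": 0.44}

def embodied_carbon(quantities_kg: dict[str, float]) -> float:
    """Sum material quantity x emission factor across the bill of materials."""
    return sum(
        qty * MATERIAL_FACTORS[material]
        for material, qty in quantities_kg.items()
    )

# Example: a small structure's materials, reported in tonnes of CO2e.
total = embodied_carbon({"concrete": 50_000, "steel": 8_000})
print(f"{total / 1000:.1f} t CO2e")
```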
The technological advancements outlined above are not just reshaping tools but also redefining roles within the AEC industry. Architects and designers are increasingly becoming collaborators, facilitators, and problem solvers tasked with balancing creativity, functionality, and sustainability. This shift requires a holistic approach to design that considers the needs of clients, the environment, and future generations.
For firms, this means investing in continuous learning and upskilling. Professionals must stay abreast of new technologies and
methodologies to remain competitive. As the industry evolves, I foresee a growing emphasis on interdisciplinary collaboration, with architects, engineers, software developers, and sustainability experts working together to create innovative solutions.
To prepare for the future, design leaders should prioritise evolving their tech stack, optimising workflows, and cultivating staff skills to meet emerging industry challenges. Investing in interoperability and cloud-based collaboration ensures seamless data exchange and resilience, while integrating AI and machine learning can automate repetitive tasks and enhance design optimisation. Embracing sustainability tools to track energy efficiency and carbon footprints will align with growing client and regulatory demands. Workflows should be streamlined for agility by adopting unified processes, enabling large-scale design iteration, and leveraging technologies like digital twins and data-driven design tools to refine outcomes efficiently.
Simultaneously, fostering staff expertise will be critical. Cross-disciplinary knowledge in areas like engineering and environmental science can enhance collaboration, while training in emerging technologies such as AR/VR, BIM, and AI will ensure teams remain competitive. Soft skills like adaptability, communication, and leadership development are equally important as teams grow more diverse. Cultivating sustainability expertise and preparing for leadership transitions will further position firms for long-term success.
Embracing innovation to shape tomorrow
The AEC industry stands at a transformative juncture, with developing technologies and sustainability goals paving the way for a brighter future. By embracing these trends, architects and designers can push the boundaries of what’s possible, creating innovative, functional, and environmentally responsible spaces.
Finally, staying ahead of industry trends is essential. Monitoring evolving regulations, leveraging predictive analytics, and understanding client expectations can inform strategic decision-making. These strategies collectively enable leadership to produce and deliver innovative, impactful designs that address both present needs and future possibilities, and ensure your firm remains agile in a dynamic market. To see how we’re keeping pace with industry demands, visit our public roadmap (www.vectorworks.net/public-roadmap), where we share insights into the innovations we’re prioritising. As we progress into 2025, I am excited to witness how these advancements will shape the industry and the built environment. The journey ahead promises challenges but also immense potential for creativity, collaboration, and meaningful impact. Together, we can redefine the future of design, leaving a lasting legacy for generations.
Twinview aggregates information from multiple sources, so you can access all your data from a single platform to create a holistic view of your building.
Visualise static and dynamic building data on customisable dashboards, allowing real-time analysis and optimisation of building performance.
Streamline your facility management processes, reduce downtime, implement predictive maintenance and improve resource allocation.
Towards the end of 2024, software developer Hypar released a whole new take on its cloud-based design tool, focused on space planning and with a cool new web interface. Martyn Day spoke with Hypar co-founder Ian Keough to get the inside track on this apparent pivot
Founded in 2018 by Anthony Hauck and Ian Keough, Hypar has certainly been on a journey in terms of its public-facing aims and capabilities.
Both co-founders are well-established figures in the software field. Hauck previously led Revit’s product development and pioneered Autodesk’s generative design initiatives. Keough, meanwhile, is widely recognised as the creator of Dynamo, a visual programming platform for Revit.
Initially, their creation Hypar looked very much like a single, large sandpit for generative designers familiar with scripting, enabling them to create system-level design applications, as well as for non-programmers looking to rapidly generate layouts, duct routing and design variations and get feedback on key metrics, which could then be exported to Revit.
Back in 2023, we were blown away by Hypar’s integration of ChatGPT at the front end (www.tinyurl.com/AEC-Hypar). This aimed to give users the ability to rapidly generate conceptual buildings and then progress
on to fabrication-level models. This capability was subsequently demonstrated in tandem with DPR Construction.
One year later and the company’s front end has changed yet again. With a whole new interface and a range of capabilities specifically focused on space planning and layout, it feels as if Hypar has made a big pivot. What was once the realm of scripters now looks very much like a cloud planning tool that could be used by anyone.
AEC Magazine’s Martyn Day caught up with the always insightful Ian Keough to discuss Hypar’s development and better understand what seems like a change in direction at the company, as well as to get his more general views on AEC development trends.
Martyn Day: Developers such as Arcol, Snaptrude and Qonic are all aiming firmly at Revit, albeit coming at the market from different directions and picking their own entry points in the workflow
to add value, while supporting RVT. Since Revit is so broad, it seems clear that it will take years before any of these newer products are feature-comparable with Revit, and all these companies have different takes on how to get there. With that in mind, how do you define a next-generation design tool and what is Hypar’s strategy in this regard?
Ian Keough: At Hypar, we’ve been thinking about this problem for five or six years from a fundamentally different place. Our very first pitch deck for Hypar showed images from work done in the 1960s at MIT, when they were starting to imagine what computers would be used for in design. They weren’t imagining that computers would be used for drafting, of course. Ivan Sutherland had already done that years before and we have all seen those images.
What they were imagining is that computers would be used to design buildings,
and they were making punch card programs to lay out hospitals and stuff like that. To me, that’s a very pro-future kind of vision. It imagined that computing capacity would grow to a point where the computer would become a partner in the process of design, as opposed to a slightly better version of the drafting board.
However, when it eventually happened, AutoCAD was released in the 1980s and instead we took the other fork of history. The result of taking that other fork has been interesting. If you look at this from a historic perspective, computers did what they did and they got massively more powerful over the years. But the small layer on top of that was all of our CAD software, which used very little of that available computing power. In a real sense, it used the local CPU, but not the computing power of all the data centres around the world which have come online. We were not leveraging that compute power to help us design more efficiently, more quickly, more correctly. We were just complaining that we couldn’t visualise giant models, and that’s still a thing that people talk about.
That’s still a big problem for people’s workloads. I don’t want to dismiss it. If you’re building an airport, you have got to load it, federate all of these models and be able to visualise it. I get that problem. But the larger problem is that,
to get to that giant model that you’re complaining about, there are many, many years of labour, of people building in sticks-and-bricks models. How many airports have we designed in the history of human civilisation?
So, thinking about the fork we face – and I think we’re experiencing a ‘come to Jesus’ moment here – people are now seeing AI. As a result, they’re getting equal parts hopeful that it will suddenly, at a snap of the fingers, remove all the toil that they’re experiencing in building these bigger and bigger and more complicated models, and equal parts afraid that it will embody all the expertise that is in their heads, and will leave them out of a job!
Martyn Day: I can envisage a time where AI can design a building in detail, but I can’t see it happening in our lifetime. What are your thoughts?
Ian Keough: I don’t think that’s the goal. I don’t think that’s the goal of anybody out there – even the people who I think have the most interesting and compelling ideas around AI and architecture. But I do think there are a lot of people who have very uninteresting ideas around AI in architecture, and those involve things like using AI to generate renderings and stuff like that. It’s nifty to look at, but it’s so low value in terms of the larger story of what all this computing power could do for us.
‘‘ I think there are a lot of people who have very uninteresting ideas around AI in architecture, and those involve things like using AI to generate renderings and stuff like that. It’s nifty to look at, but it’s so low value in terms of the larger story of what all this computing power could do for us ’’ Ian Keough
At AEC Magazine, you’ve already written about experiments that we’ve conducted in terms of designing through our chat prompt/text-to-BIM capability. So, we took the summation of the five years of work that we have done on Hypar as a platform, the compute infrastructure and, when LLMs came along, Andrew Heumann on our team suggested it would be cool if we could see if we could map human natural language down into input parameters for our generative system.
We did that. We put it out there. And everybody got really, really excited. But we quickly realised the limitations of that system. It’s very, very hard to design anything real through a chat prompt. It’s one thing to generate an image of a building. It’s another thing to generate a building.
You’ll see in the history of Hypar that the creation of this new version of the product directly follows the ‘text-to-BIM thing’, because what the ‘text-to-BIM thing’ showed us is that we have this very powerful platform.
The new Hypar 2.0, which was released in September 2024, and more specifically, the layout suggestions capability (www.tinyurl.com/layout-suggest), was our first nod towards AI-infused capabilities. The platform is all about seeing if we can make a design tool that’s a design tool first and foremost.
The problem with AI-generated rendering is you get what you get, and you can’t really change it, except for changing that prompt, and you’re totally out of control. What designers want is control. They want to be able to move quickly and to be able to control the design and understand the input parameters of the design. Hypar 2.0 is really about that. It’s about how you create a design tool and then lift all of this compute and seamlessly integrate it with the design experience, so that computation is not some other experience on top of your model.
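For readers curious what ‘mapping natural language down into input parameters’ can look like in code, here is a minimal, hypothetical Python sketch. The model name, the parameter schema and the downstream generate() call are all our own illustrative assumptions, not Hypar’s actual implementation.

```python
# Hypothetical sketch: use an LLM to map a plain-language brief to
# structured input parameters for a generative design system.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA_HINT = (
    "Return JSON with keys: floors (int), floor_to_floor_m (float), "
    "footprint_area_m2 (float), use (string)."
)

def brief_to_parameters(brief: str) -> dict:
    """Ask the LLM to translate a brief into typed generative inputs."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SCHEMA_HINT},
            {"role": "user", "content": brief},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

params = brief_to_parameters("A six-storey timber office, about 1,200 m2 per floor")
# params would then feed a generative system, e.g. generate(**params)
```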
Martyn Day: Historically, we have been used to seeing Hypar perform rapid conceptual modelling through scripting, generate building systems and be capable of multiple levels of detail to quickly model and then swap out to scale fidelity. The whole Hypar experience, looking at the website now, seems to be about space planning. Would you agree?
Ian Keough: That’s the head-scratcher for a lot of people when it comes to this new version. People who have seen me present on the work we did with DPR and other firms to make these incredibly detailed and sophisticated building systems are saying, “Wait, you’re a space planning software now?”
That may seem like a little bit of a left turn. But the mission continues: to enable anyone to build richly detailed models from simple primitives without extra effort. We do this in the same way that we could take a low-resolution Revit wall and turn it into a fully clad DPR drywall layout, including all the fabrication instructions and the robotic layout instructions that go on the floor, and everything else. That capability still lives in
Hypar, underneath the new interface. What we are doing is getting back to software that solves real problems, again. This is a very gross simplification of what’s going on, but what problem does Revit actually solve? The answer is drawings, documentation. That’s the problem that Revit solves today and has solved since the beginning. What it does not solve is the problem of how to turn an Excel spreadsheet that represents a financial model into the plan for a hospital. It does not solve that at all. That is solved by human labour and human intellect. And right now, it’s solved in a very haphazard way, because the software doesn’t help you. It doesn’t offer you any affordances to help you do that. Everybody is largely either doing this as cockamamie-crazy, nested-family Lego blocks and jelly cubes in Revit, or trying to do it as just a bunch of coloured polygons in Bluebeam. That’s not how we’re utilising compute.
At the end of a design tool, it is still the architect’s experience and intellect that creates a building. What the design tool should do is remove all of the toil.
To give you an example of this, now that we’ve reached a point where users can use our software in a certain production context, to create these larger space plans, they’re starting to ask for the next layer of capabilities such as clearances as a semantic concept. This is the idea that, if I’m sitting at this desk, there should be a clearance in front of this desk, so that people have enough room to walk by. Sometimes, clearances are driven by code – so why has no piece of architectural design software in the last 20 years had a semantic notion of a clearance that you could either set specifically or derive from code? You might be able to write a checker in Solibri in the post-design phase, but what about the designer at the point of creating the model?
Clearances are just one example. There are plenty of others, but the other impetus for a lot of what we’re doing right now is the fact that organisations like HOK have a vast storehouse of encoded design knowledge, in the form of all of the work that they’ve done in the past. Often, they cannot reuse this knowledge, except by way of hiring architects and transmitting this expertise from one person to the next, in a form that we have used for thousands of years – by storytelling, right? What firms want is a way to capture that knowledge in the form of spaces, specific spaces, and all the stuff that’s in a space and the reasons for that stuff being there. And then they just want to transfer that knowledge from one project to another, whether it’s a healthcare project or any other kind of project that they’ve carried out before.
At the beginning of defining the next version of Hypar, when we started talking with architects about this problem, I was amazed by the cleverness of the architects. They’re actually finding solutions to do this with the software they have now. They build these giant, elaborate Revit models with hundreds of standard room types in them, and then they have people open those Revit models and copy and paste out stuff from the library.
I had one guy who referred to his model as ‘the Dewey Decimal System’. He had grids in Revit numbered in the Dewey Decimal System manner, such that he could insert new standards into this crazy grid system. And he referred to them by their grid locations.
‘‘ Why isn’t it possible in Revit to select a room and save it as a standard, so the next time I put a room tag in that says exam room, such as a paediatric exam room, it just infills it with what I’ve done for the last ten projects ’’
In other words, architects have overcome the limitations that we’ve put in place in terms of software. But why isn’t it possible in Revit to select a room and save it as a standard, so the next time I put a room tag in that says exam room, such as a paediatric exam room, it just infills it with what I’ve done for the last ten projects?
To get back to your question about what the next generation looks like, I guess the simplest way to explain how we’re approaching it is that we’re picking a problem to solve that’s at the heart of designing buildings. It’s at the moment of creation, literally, of a building. We want to solve that problem and use software as a way to accelerate the designer, rather than a way to demonstrate that we can visualise larger models. That will come in time, but really, we want to use this vast computational resource that we have to undergird this sort of design, and make a great, snappy, fun design tool.
Martyn Day: Old BIM systems are one-way streets. They are about building a detailed model to produce drawings. But you have gone on record talking about tasks that need different levels of abstraction and multiple levels of scale, depending on the task. Can you explain how this functions in Hypar?
Ian Keough: You’ll notice in the new version of Hypar that there’s something called ‘bubble mode’. It’s a diagram mode for drawing spaces, but you’re drawing them in this kind of diagrammatic, ‘bubbly’ way.
That was an insight that we gleaned from spending literally hundreds of hours watching architects at the very early stage of designing buildings. They would use that way of communicating when they were doing departmental layout or whatever. They were hacking tools like Miro and other things, where they were having these conversations to do this stuff. But it was never at scale.
We were already thinking of this idea of being able to move them from low-level detail to a high level of detail without extra effort by means of leveraging compute. Now, in Hypar, and I’ll admit the bits are not totally connected yet in this idea, you’ll notice that people will start planning in this bubble mode, and then they’ll have conversations around bubble mode, at that level of detail.
Meanwhile, the software is already working behind the scenes, creating a network of rooms for them. And then they’ll perform the next step and use this clever stuff to intelligently lay out those rooms, the contents in the rooms. The next level of detail past that will be connectors to other building systems, so let’s generate the building system. There’s this continuous thread that follows levels of detail from diagram to space – to spaces with equipment and furniture and to building systems.
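Keough’s two ideas here — clearances as a semantic concept and rooms saved as reusable standards — map naturally onto a small data model. The Python sketch below is purely illustrative; every type and field is a hypothetical stand-in, not Hypar’s schema.

```python
# Illustrative data model: rooms saved as reusable standards, with
# clearances as first-class semantic objects. All types and fields
# are hypothetical, not Hypar's actual schema.
from dataclasses import dataclass, field

@dataclass
class Clearance:
    """A required free zone, either set directly or derived from code."""
    width_m: float
    depth_m: float
    source: str  # e.g. "ADA 305.3" or "designer"

@dataclass
class FixtureInstance:
    name: str  # e.g. "exam table"
    clearances: list[Clearance] = field(default_factory=list)

@dataclass
class RoomStandard:
    """A room captured once, then re-applied across projects."""
    tag: str  # e.g. "paediatric exam room"
    area_m2: float
    contents: list[FixtureInstance] = field(default_factory=list)

# Capture the knowledge once...
exam = RoomStandard(
    tag="paediatric exam room",
    area_m2=11.0,
    contents=[FixtureInstance("exam table", [Clearance(0.9, 1.2, "designer")])],
)
# ...then placing a room tagged "paediatric exam room" on the next
# project could simply infill from this standard.
```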
Martyn Day: We have seen Hypar focus on conceptual work, space planning, fabrication-level modelling. Is the goal here to try and tackle every design phase?
Ian Keough: We’re marching there. The great thing about this is that there’s already value in what we offer. This is something that I think start-ups need to think about. You’re solving a problem, and if you want to make any money at all, that problem needs to have value at every point along the trajectory. That’s unless you raise a ton of capital, and say,
‘Ten years from now, we’ll have something that does everything.’
The reality is at day five, after you’ve built some software, and you put it in customers’ hands, that thing has to have value for them. The good news is that just in the way that we design buildings now, from low-level detail to high-level detail, there’s value in all those places along the design journey.
The other thing that I think is going to happen, to achieve what we’ve been envisioning since the beginning of Hypar, is fully generated buildings. I do not believe in the idea that there’s this zero-sum game that we’re all playing, where somebody’s going to build the one thing that ‘owns the universe’.
This is a popular construct in people’s minds, because they love this notion of somebody coming along and slaying the dragon of Revit in some way, and replacing it with another dragon.
What’s going to happen is, in the same way that we see with massively connected systems of apps on your phone and on the internet, these things are going to talk to each other. It’s quite possible that the API of the future for generating electrical systems is going to be owned by a developer like Augmenta (www.augmenta.ai). And since we’re allowing people to lay out space in a very agile way, Hypar plugs into that and asks the user, ‘Would you like this app to asynchronously generate a system for you?’
Now, it might be that, over Hypar’s lifetime, there will be real value in us building those things as well, because most of the work that we’re doing right now is really about the tactility of the experience. So it might be that, to achieve the experience that we want, we have to be the ones who own the generation of those systems as well, but I can’t say yet whether or not that’s the case.
Everything we’re doing right now in terms of the new application is around just building that design experience. What we do in the next six months to one year, vis-à-vis how we connect back into functions that are on the platform and start to expose that capability, I can’t speculate right now.
What we need to do is land this thing in the market and then get enough people interested in using it, so that it starts to take hold. Some of the challenge in doing that is what you alluded to earlier, which is that people are trying to pigeon-hole you. They’ll ask, ‘Are you trying to kill Revit?’, or, ‘Are you trying to kill this part of the process that I currently do in Revit?’ That’s a challenge for all start-ups.
The decision that we made to rebuild the UI is about the long-term vision we have for Hypar. That vision has always been to put the world’s building expertise in the hands of everyone, everywhere. And if you think about that long-term vision, everybody will have access to the world’s building expertise. But how do they access it? If it’s through an interface that only the Dynamo and Grasshopper script kids can use or want to use, then we will not have fulfilled our vision.
■ www.hypar.io
AEC firms constantly fine-tune their workflows and software estates, seeking productivity improvements. On 11 - 12 June, our annual NXT BLD and NXT DEV conferences will bring together leading AEC firms and software developers to help drive next generation workflows and tools
Planning is already underway for AEC Magazine’s annual, two day, dual-focus conference, NXT BLD (Next Build) and NXT DEV (Next Development), in conjunction with Lenovo workstations. The event will be held on 11 and 12 June 2025 at the prestigious Queen Elizabeth II Conference Centre in London.
Year on year, the NXT experience has grown in reputation, and we now attract design IT directors from multiple continents, together with a plethora of innovative start-ups looking to push the industry forward to next generation workflows and BIM 2.0.
NXT BLD brings innovative industry ideas, in-house development, new workflows and bleeding-edge technology to two conference stages, plus an exciting exhibition. Presentations range from design IT directors sharing insights into their processes to the latest in workstation, AR and VR technology.
NXT DEV addresses the fact that the AEC technologies we use are at a crossroads. The industry is reliant on old software that doesn’t utilise modern processor architectures, while the benefits of combining cloud, database and granular data await with the next generation of tools. AEC professionals can’t leave it to software developers and computer scientists to deliver change and need to help shape what comes next. NXT DEV is a forum for discussion, a great way to meet the startups, venture capitalists (VCs) and fellow design IT directors who are eager to find more productivity and smarter tools.
AEC Magazine is inviting you to come, get inspired and join the discussion. For more info visit www.nxtbld.com and www.nxtdev.build. Early bird tickets will be available soon.
Topics: We are early in the planning stages for the events but you can be sure that we will be talking about BIM 2.0,
Autodrawings, AI, Generative Design, AR and VR, GIS and BIM, Open Source, Rapid Reality Capture, Expert Automation Systems, Digital Fabrication, the future of data and API access.
Talks: There will be inspirational presentations from Heatherwicks, Alain Waha (Buro Happold), Patrick Cozzi (Cesium, now Bentley Systems), Lenovo, Perkins and Will, Augmenta, Finch3D, Ismail Seleit (LoRA and ControlNet AI rendering), Antonio Gonzalez Viegas (ThatOpenCompany), Qonic, Snaptrude, Arcol, Gräbert (Autodrawings), Autodesk, Foster + Partners, and Jonathan Asher (Dassault Systèmes) – to name but a few. More speakers will be announced in the coming weeks, as we shape the two-day NXT 2025 program. The editorial team are looking forward to seeing you there! The two days of NXT offer an intense dive into the future of the industry. Simultaneous stages offer a breadth of topics and areas of interest, plus there’s plenty of exciting new technologies to see on the show floor. You would certainly benefit from bringing a team to ensure you don’t miss anything important.
NXT BLD 2025
Wednesday 11 June 2025
www.nxtbld.com
NXT DEV 2025
Thursday 12 June 2025
www.nxtdev.build
Queen Elizabeth II Centre Westminster, London, UK
Model behaviour
What’s the best CPU, memory and GPU to process complex reality modelling data?
The integrated GPU comes of age
From desktop to datacentre, could the AMD Ryzen AI Max Pro ‘Strix Halo’ processor change the face of workstations?
Intel vs AMD
Intel Core Ultra vs AMD Ryzen
9000 Series in CAD, BIM, reality modelling, viz and simulation
The AI enigma
Do you need an AI workstation?
+ how to choose a GPU for Stable Diffusion
AI has quickly been woven into our daily workflows, leaving its mark on nearly every industry. For design, engineering, and architecture firms, the direction in which some software developers are heading raises important questions about future workstation investments, writes Greg Corke
You can’t go anywhere these days without getting a big AI smack in the face. From social media feeds to workplace tools, AI is infiltrating nearly every part of our lives, and it’s only going to increase. But what does this mean for design, engineering, and architecture firms? Specifically, how should they plan their workstation investments to prepare for an AI-driven future?
AI is already here
The first thing to point out is if you’re into visualisation — using tools like Enscape, Twinmotion, KeyShot, V-Ray, D5 Render or Solidworks
Visualize, there’s a good chance your workstation is already AI-capable. Modern GPUs, such as Nvidia RTX and AMD Radeon Pro, are packed with special cores designed for AI tasks.
‘‘ Desktop software isn’t going away anytime soon, so firms could end up paying twice – once for the GPUs in their workstations and again for the GPUs in the cloud ’’
Features such as AI denoising, DLSS (Deep Learning Super Sampling), and more are built into many visualisation tools. This means you’re probably already using AI whether you realise it or not.
It’s not just these tools, however. For concept design, text-to-image AI software like Stable Diffusion can run locally on your workstation (see page WS30). Even in reality modelling apps, like Leica Cyclone 3DR, AI-powered features such as auto-classification are now included, requiring an Nvidia CUDA GPU (see page WS34).
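As a point of reference, running Stable Diffusion locally can be as simple as the following Python sketch, which uses the open source Hugging Face diffusers library on an Nvidia GPU. The model checkpoint and prompt are illustrative.

```python
# Minimal local Stable Diffusion sketch using Hugging Face diffusers.
# Assumes a CUDA-capable GPU and the torch/diffusers packages installed.
import torch
from diffusers import StableDiffusionPipeline

# Model ID is illustrative; any compatible checkpoint works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # run inference on the local GPU

image = pipe(
    "concept sketch of a timber-framed community pavilion, dusk lighting",
    num_inference_steps=30,
).images[0]
image.save("concept.png")
```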
Don’t forget Neural Processing Units (NPUs) – new hardware accelerators designed specifically for AI tasks. These are mainly popping up in laptop processors, as they are energy-efficient so can help extend battery life. Right now, NPUs are mostly used for general AI tasks, such as to power AI assistants or to blur
backgrounds during Teams calls, but design software developers are starting to experiment too.
Cloud vs desktop
While AI is making its mark on the desktop, much of its future lies in the cloud. The cloud brings unlimited GPU processing power, which is perfect for handling the massive AI models that are on the horizon. The push for cloud-based development is already in full swing – just ask any software startup in AEC or product development how hard it is to get funded if their software doesn’t run in a browser.
Established players like Dassault Systèmes and Autodesk are also betting big on the cloud. For example, users of CAD software Solidworks can only access new AI features if their data is stored and processed on the Dassault Systèmes 3D Experience Platform. Meanwhile, Autodesk customers will need to upload their data to Autodesk Docs to fully unlock future AI functionality, though some AI inferencing could still be done locally.
While the cloud is essential for some AI workflows, not least because they involve terabytes of centralised data, not every AI calculation needs to be processed off premise. Software developers can choose where to push it. For example, when Graphisoft first launched AI Visualizer, based on Stable Diffusion, the AI processing was done locally on Nvidia GPUs. Given the software worked alongside Archicad, a desktop BIM tool, this made perfect sense. But Graphisoft then chose to shift processing entirely to the cloud, and users must now have a specific license of Archicad to use this feature.
The double-cost dilemma
Desktop software isn’t going away anytime soon. With tools like Revit and Solidworks installed in the millions – plus all the viz tools that work alongside them — workstations with powerful AI-capable GPUs will remain essential for many workflows for years to come. But here’s the issue: firms could end up paying twice — once for the GPUs in their workstations and again for the GPUs in the cloud.
Ideally, software developers should give users some flexibility where possible. Adobe provides a great example of this with Photoshop, letting users choose whether to run certain AI features locally or in the cloud. It’s all about what works best for their setup — online or offline. Sure, an entry-level GPU might be slower, but that doesn’t mean you’re stuck with what’s in your machine. With technologies like Z by HP Boost (see page WS32), local workstation resources can even be shared.
But the cloud vs desktop debate is not just about technology. There’s also the issue of intellectual property (IP). Some AEC firms we’ve spoken with won’t touch the cloud for generative AI because of concerns over how their confidential data might be used.
I get why software developers love the cloud — it simplifies everything on a single platform. They don’t have to support a matrix of processors from different vendors. But here’s the problem: that setup leaves perfectly capable AI processors sat idle on the desks of designers, engineers, and architects, when they could be doing the heavy lifting. Sure, only a few AI processes rely on the cloud now, but as capabilities expand, the escalating cost of those GPU hours will inevitably fall on users, either through pay-per-use charges or hidden within new subscription models. At a time when software license costs are already on the rise, adding extra fees to cover AWS or Microsoft Azure expenses would be a bitter pill for customers to swallow.
With the launch of the AMD Ryzen AI Max Pro ‘Strix Halo’ processor, AMD has changed the game for integrated GPUs, delivering graphics performance that should rival that of a mid-range discrete GPU. Greg Corke explores the story behind this brand-new chip and what it might mean for CAD, BIM, viz and more
For years, processors with integrated GPUs (iGPUs) — graphics processing units built into the same silicon as the CPU — have not been considered a serious option for 3D CAD, BIM, and especially visualisation — at least by this publication.
Such processors, predominantly manufactured by Intel, have generally offered just enough graphics performance to enable users to manipulate small 3D models smoothly within the viewport. However, until recently, Intel has not demonstrated anywhere near the same level of commitment to pro graphics driver optimisation and software certification as the established players – Nvidia and AMD.
This gap has limited the appeal of all-in-one processors for demanding professional workflows, leaving the combination of discrete pro GPU (e.g. Nvidia Quadro / RTX and AMD Radeon Pro) and separate CPU (Intel Core) as the preferred choice of most architects, engineers and designers.
Things started to change in 2023, when AMD introduced the ‘Zen 4’ AMD Ryzen Pro 7000 Series, a family of laptop processors with integrated Radeon GPUs capable of going toe to toe with entry-level discrete GPUs in 3D performance.
What’s more, AMD backed this up with the same pro graphics drivers that it uses for its discrete AMD Radeon Pro GPUs.
The chip family was introduced to the workstation sector by HP and Lenovo in compact, entry-level mobile workstations. In a market long dominated by Intel processors, securing two out of three major workstation OEMs was a major coup for AMD.
In 2024, both OEMs then adopted the slightly improved AMD Ryzen Pro 8000 Series processor and launched new 14-inch mobile workstations – the HP ZBook Firefly G11 A and Lenovo ThinkPad P14s Gen 5 –which we review on pages WS8 and WS9
Both laptops are an excellent choice for 3D CAD and BIM workflows and having tested them extensively, it’s fair to say we’ve been blown away by the
capabilities of the AMD technology.
The flagship AMD Ryzen 9 Pro 8945HS processor with integrated AMD Radeon 780M GPU boasts graphics performance that genuinely rivals that of an entry-level discrete GPU. For instance, in Solidworks 3D CAD software, it smoothly handles a complex 2,000-component motorcycle assembly in “shaded with edges” mode.
However, the AMD Ryzen Pro 8000 Series processor is not just about 3D performance. What truly makes the chip stand out is the ability of the iGPU to access significantly more memory than a typical entry-level discrete GPU. Thanks to AMD’s shared memory architecture — refined over years of developing integrated processors for Xbox and PlayStation gaming consoles — the GPU has direct and fast access to a large, unified pool of system memory.
Up to 16 GB of the processor’s maximum 64 GB can be reserved for the GPU in the BIOS. If memory is tight and you’d rather not allocate as much to the GPU, smaller profiles from 512 MB to 8 GB can be selected. Remarkably, if the GPU runs out of its ring-fenced memory, it seamlessly borrows additional system memory if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains fast, and real-time performance in 3D CAD and BIM tools typically only drops by a few frames per second, maintaining that all-important smooth experience within the viewport.
In contrast, when a discrete GPU runs out of memory, it can have a big impact on 3D performance. Frame rates can fall dramatically, often making it very hard to re-position a 3D model in the viewport. While a discrete GPU can also ‘borrow’ from system memory, it must access it over the PCIe bus, which is much slower.
All of this means the AMD Ryzen Pro 8000 Series processor can handle certain workflows that simply aren’t possible with an entry-level discrete GPU, especially one with only 4 GB of onboard VRAM.
To put this into a real-world workflow context: with our HP ZBook Firefly G11 A configured with 64 GB of system RAM, Solidworks Visualize was able to grab
the 20 GB of GPU memory it needed to render a complex scene at 8K resolution. What’s even more impressive is that while Solidworks Visualize rendered in the background, we could continue working on the 3D design in Solidworks CAD without disruption.
While the amount of addressable memory makes workflows like these possible, the AMD Radeon 780M GPU does not really have enough graphics horsepower to deliver sufficient frame rates in real-time viz software such as Twinmotion, Enscape, and D5 Render.
For that you need a more powerful GPU, which is exactly what AMD has delivered in its new AMD Ryzen AI Max Pro ‘Strix Halo’ processor, which it announced this month.
The AMD Ryzen AI Max Pro will be available first in HP Z workstations, but unlike the AMD Ryzen Pro 8000 Series processor it’s not just restricted to laptops. In addition to the HP ZBook Ultra G1a mobile, HP has launched a micro desktop, the HP Z2 Mini G1a (see box out on page WS6). Although we haven’t had the chance to test these exciting new chips first hand, our experience with the AMD Ryzen Pro 8000 Series processor and the published specifications of the AMD Ryzen AI Max Pro series give us a very good idea of what to expect.
In the top tier model, the AMD Ryzen AI Max+ Pro 395, the integrated Radeon 8060S GPU is significantly more powerful than the Radeon 780M GPU in the Ryzen 9 Pro 8945HS processor.
It features 40 RDNA 3.5 graphics compute units — more than three times the 12 RDNA 3.0 compute units on offer in the 780M. This should make it capable of handling some relatively demanding workflows for real time visualisation.
But raw graphics performance only tells part of the story. The new Ryzen AI Max Pro platform can support up to 128 GB of 8,000MT/s LPDDR5X memory, and up to 96 GB of this can be allocated exclusively to the GPU. Typically, such vast quantities of GPU memory are only
available in extremely powerful and expensive cloud-based GPUs. It’s the equivalent of the VRAM in two high-end desktop-class workstation GPUs, such as the Nvidia RTX 6000 Ada Generation.
Reports suggest the Ryzen AI Max Pro will rival the graphics performance of an Nvidia RTX 4070 laptop GPU, the consumer equivalent of the Nvidia RTX 3000 Ada Gen workstation laptop GPU.
However, while the Nvidia GPU comes with 8 GB of fixed VRAM, the Radeon 8060S GPU can scale much higher. And this could give AMD an advantage when working with very large models, particularly in real time viewports, or when multitasking.
Of course, while the GPU can access what is, quite frankly, an astonishing amount of memory, there will still be practical limits to the size of visualisation models it can handle. While you could, with patience, render massive scenes in the background, don’t expect seamless navigation of these models in the viewport, particularly at high resolutions. For that level of 3D performance, a high-end dedicated GPU will almost certainly still be necessary.
The competitive barriers
The AMD Ryzen AI Max Pro looks to bring impressive new capabilities, but it doesn’t come without its challenges. In general, AMD GPUs lag behind Nvidia’s when ray tracing, a rendering technique which is becoming increasingly popular in real time arch viz tools.
Additionally, some AEC-focused independent software vendors (ISVs) depend on Nvidia GPUs to accelerate specific features. In reality modelling software Leica Cyclone 3DR, for example, AI classification is built around the Nvidia CUDA platform (see page WS34).
The good news is AMD is actively collaborating with ISVs to broaden support for AMD GPUs, porting code from Nvidia CUDA to AMD’s HIP framework, and some have already announced support. For example, CAD-focused rendering software KeyShot Studio now works with AMD Radeon for GPU rendering, as Henrik Wann Jensen, chief scientist, KeyShot, explains. “We are particularly excited about the substantial frame buffer available on the Ryzen AI Max Pro.” Meanwhile, Altair, a specialist in simulation-driven design, has also announced support for AMD Radeon GPUs on Altair Inspire, including the AMD Ryzen AI Max Pro.
AMD isn’t just playing catch-up with Nvidia; it’s also paving the way for innovations in software development. According to Rob Jamieson, senior industry alliance manager at AMD, traditional GPU computation often requires duplicating data — one copy in system memory and another in GPU memory — that must stay in sync. AMD’s shared memory architecture changes the game by enabling a ‘zero copy’ approach, where the CPU and GPU can read from and write to a single data source. This approach not only has the potential to boost performance by not having to continually copy data back and forth, but also reduce overall memory footprint, he says.
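To illustrate the zero-copy idea in code, the hedged sketch below uses Numba’s CUDA mapped arrays as a stand-in — not AMD’s HIP framework — but the principle is the same: the CPU and GPU share one allocation, so nothing is copied back and forth.

```python
# Zero-copy illustration via Numba CUDA mapped arrays (a stand-in for
# AMD's HIP). Requires an Nvidia CUDA GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def scale(buf, factor):
    # Each thread scales one element of the shared buffer in place.
    i = cuda.grid(1)
    if i < buf.size:
        buf[i] *= factor

# Pinned, mapped ("zero copy") buffer: CPU and GPU see one allocation,
# so no explicit host<->device copies are needed.
buf = cuda.mapped_array(1024, dtype=np.float32)
buf[:] = np.arange(1024, dtype=np.float32)  # CPU writes

scale.forall(buf.size)(buf, 2.0)  # GPU reads/writes the same memory
cuda.synchronize()
print(buf[:4])  # CPU reads the results directly
```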
Artificial Intelligence (AI)
These days, no new processor is complete without an AI story, and the AMD Ryzen AI Max Pro is no exception.
First off, the processor features an XDNA2-powered Neural Processing Unit (NPU), capable of dishing out 50 TOPS of AI performance, meeting Microsoft’s requirements for a Copilot+ PC. This capability is particularly valuable for laptops, where it can accelerate simple AI tasks such as AutoFrame, Background Blur, and virtual backgrounds for video conferencing, more efficiently than a GPU, helping to extend battery life.
While 50 TOPS NPUs are not uncommon, it’s the amount of memory that the NPU and GPU can address that makes the AMD Ryzen AI Max Pro particularly interesting for AI.
‘‘ AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. Could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs? ’’
According to AMD, having access to large amounts of memory allows the processor to handle ‘incredibly large, high-precision AI workloads’, referencing the ability to run a 70-billion parameter large language model (LLM) 2.2 times faster than a 24 GB Nvidia GeForce RTX 4090 GPU.
While edge cases like these show great promise, software compatibility will be a key factor in determining the success of the chip for AI workflows. One can’t deny that Nvidia currently holds a commanding lead in AI software development.
On a more practical level for architects and designers, the chip’s ability to handle large amounts of memory could offer an interesting proposition for AI-driven tools like Stable Diffusion, a text-to-image generator that can be used for ideation at the early stages of design (see page WS30).
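For a sense of what running a large LLM locally involves, here is a hedged Python sketch using the open source llama-cpp-python bindings. The model file is a placeholder; whether a 70-billion parameter model fits is decided almost entirely by how much memory the GPU can address.

```python
# Hedged sketch of running a large local LLM with llama-cpp-python.
# The model path is a placeholder; a quantised 70B model still needs
# tens of GB of GPU-addressable memory to offload fully.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-70b.Q4_K_M.gguf",  # placeholder file
    n_gpu_layers=-1,  # offload every layer to the GPU if memory allows
    n_ctx=4096,       # context window
)

out = llm(
    "Summarise the key space-planning constraints for an office floor:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```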
HP is billing the HP Z2 Mini G1a with AMD Ryzen AI Max Pro processor as the world’s most powerful mini workstation, claiming that it can tackle the same workflows that previously required a much larger desktop workstation. On paper, much of this claim appears to be down to the amount of memory the GPU can address, as HP’s Intel-based equivalent, the HP Z2 Mini G9, is limited to low profile GPUs, up to the 20 GB Nvidia RTX 4000 SFF Ada.
The HP Z2 Mini G1a also supports slightly more system memory than the Intel-based HP Z2 Mini G9 (128 GB vs 96 GB), although some of that memory will need to be allocated to the GPU. System memory in the HP Z2 Mini G1a is also significantly faster (8,000 MT/s vs 5,600 MT/s), which will benefit certain memory intensive workflows in areas including simulation and reality modelling.
While the HP Z2 Mini G9 can support CPUs with a similar number of cores — up to the Intel Core i9-13900K (8 P-cores and 16 E-cores) — our past tests have shown that multi-core frequencies drop considerably under heavy sustained loads. It will be interesting to see if the energy-efficient AMD Ryzen AI Max Pro processor can maintain higher clock speeds across its 16 cores.
Perhaps the most compelling use case of the HP Z2 Mini G1a will be when multiple units are deployed in a rack, as a centralised remote workstation resource. With the HP Z2 Mini G9, both the power supply and the HP Anyware Remote System Controller, which provides remote ‘lights out’ management capabilities, were external. With the new HP Z2 Mini G1a the PSU is now fully integrated in the slightly smaller chassis, which should help increase density and airflow. Five HP Z2 Mini G1a workstations can be placed side by side in a 4U space.
While it’s natural to be drawn to the GPU — being far more powerful than any iGPU that has come before — the AMD Ryzen AI Max Pro doesn’t exactly hold back when it comes to the CPU. Compared to the AMD Ryzen Pro 8000 Series processor, the core count is doubled, boasting up to 16 ‘Zen 5’ cores. This means it should significantly outperform the eight ‘Zen 4’ cores of its predecessor in multi-threaded workflows like rendering.
On top of that, the AMD Ryzen AI Max Pro platform supports much faster memory — 8,000MT/s LPDDR5X compared to DDR5-5600 on the AMD Ryzen Pro 8000 Series — so memory-intensive workflows like simulation and reality modelling should get an additional boost.
Laptop, desktop and datacentre
One of the most interesting aspects of the AMD Ryzen AI Max Pro is that it is being deployed in both laptops and micro desktops. It also extends to datacentres, as the HP Z2 Mini G1a desktop is designed from the ground up to be rackable.
While the HP Z2 Mini G1a and HP ZBook Ultra G1a use the exact same silicon, which features a configurable Thermal Design Power (cTDP) of 45W – 120W, performance could vary significantly between the two devices. This is down to the amount of power that each workstation can draw.
The power supply in the HP Z2 Mini G1a desktop is rated at 300W—more than twice the 140W of the HP ZBook Ultra G1a laptop. While users shouldn’t notice any difference in single threaded or lightly threaded workflows like CAD or BIM, we expect performance in multi-threaded tasks, and possibly graphics-intensive tasks, to be superior on the desktop unit.
However, that still doesn’t mean the HP Z2 Mini G1a will get the absolute best
out of the processor. It remains to be seen what clock speeds the AMD Ryzen AI Max Pro processor will be able to maintain across its 16 cores, especially in highly multi-threaded workflows like rendering.
The AMD Ryzen AI Max Pro processor has the potential to make a significant impact in the workstation sector. On the desktop, AMD has already disrupted the high-end workstation space with its Threadripper Pro processors, severely impacting sales of Intel Xeon. Now, the company aims to bring this success to mobile and micro desktop workstations, with the promise of significantly improved graphics with buckets of addressable memory.
AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. However, overcoming the long-standing perception that iGPUs are not great for 3D modelling is no small challenge, leaving AMD with significant work to do in educating the market. If AMD succeeds, could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs?
Much will also depend on cost. Neither AMD nor HP has announced pricing yet, but it stands to reason that a single chip solution should be more cost-effective than having two separate components.
Meanwhile, while the new chip promises impressive performance in all the right areas, that’s only one part of the equation. In the workstation sector, AMD’s greater challenge arguably lies in software. To compete effectively, the company needs to collaborate more closely with select ISVs to enhance compatibility and reduce reliance on Nvidia CUDA. Additionally, optimising its graphics drivers for better performance in certain professional 3D applications remains a critical area for improvement.
AMD is not the only company developing processors with integrated GPUs. Intel has made big strides in recent years, and the knowledge it has gained in graphics hardware and pro graphics drivers from its discrete Intel Arc Pro GPUs is now starting to trickle through to its Intel Core Ultra laptop processors. Elsewhere, Qualcomm’s Snapdragon chips, with Arm-based CPU cores, have earned praise for their enviable blend of performance and power efficiency. However, there is no indication that any of the major OEMs are considering this chip for workstations and, while x86 Windows apps are able to run on Arm-based Windows, ISVs would need to make their apps Arm-native to get the best performance.
Nvidia is also rumoured to be developing an Arm-based PC chip, but would face similar challenges to Qualcomm on the software front.
Furthermore, while the Ryzen AI Max Pro is expected to deliver impressive 3D performance in CAD, BIM, and mainstream real-time viz workflows, its ray tracing capabilities may not be as remarkable. And for architecture and product design, ray tracing is arguably more important than it is for games.
Ultimately, the success of the AMD Ryzen AI Max Pro will depend on securing support from the other major workstation OEMs. So far, there’s been no official word from Lenovo or Dell, though Lenovo continues to offer the AMD Ryzen Pro 8000-based ThinkPad P14s Gen 5 (AMD), which is perfect for CAD, and Dell has announced plans to launch AMD-based mobile workstations later this year. AMD seems prepared to play the long game, much like it did with Threadripper Pro, laying the groundwork for future generations of processors with even more powerful integrated GPUs. We look forward to putting the AMD Ryzen AI Max Pro through its paces soon.
HP is touting the HP ZBook Ultra G1a with AMD Ryzen AI Max Pro processor as the world’s most powerful 14-inch mobile workstation. It offers noteworthy upgrades over other 14-inch models, including double the number of CPU cores, double the system memory, and substantially improved graphics. When compared to the considerably larger and heavier 16-inch HP ZBook Power G11 A — equipped with an AMD Ryzen 9 8945HS processor and Nvidia RTX 3000 Ada laptop GPU — HP claims the HP ZBook Ultra G1a, with an AMD Ryzen AI Max Pro 395 processor and Radeon 8060S GPU, delivers significant performance gains. These include 114% faster CPU rendering in Solidworks and 26% faster graphics performance in Autodesk 3ds Max.
The HP ZBook Ultra G1a isn’t just about performance. HP claims it’s the thinnest ZBook ever, just 18.5mm thick and weighing as little as 1.50kg. The HP Vaporforce thermal system incorporates a vapour chamber with large dual turbo fans, expanded rear ventilation, and a newly designed hinge that improves airflow. According to HP, this design boosts performance while keeping surface temperatures cooler and fan noise quieter.
HP is expecting up to 14 hours of battery life from the HP XL Long Life 4-cell, 74.5 Wh polymer battery. The device is paired with either a 100 W or 140 W USB Type-C slim adapter for charging. For video conferencing, the laptop features a 5 MP IR camera with Poly Camera Pro software. Advanced features like AutoFrame, Spotlight, Background Blur, and virtual backgrounds are all powered by the 50 TOPS NPU, optimising power efficiency.
Additional highlights include a range of display options, with the top-tier configuration offering a 2,880 x 1,800 OLED panel (400 nits brightness, 100% DCI-P3 colour gamut), HP Onlooker detection that automatically blurs the screen if it detects that someone is peeking over your shoulder, up to 4 TB of NVMe TLC SSD storage, and support for Wi-Fi 7.
This pro laptop is a great all-rounder for CAD and BIM, offering an enviable blend of power and portability in a solid, well-built 14-inch chassis, writes Greg Corke
A few years back, HP decided to simplify its ZBook mobile workstation lineup. With so many different models, and inconsistent product names, it was hard to work out what was what.
HP’s response was to streamline its offerings into four primary product lines: the HP ZBook Firefly (entry-level), ZBook Power (mid-range), ZBook Studio (slimline mid-range), and ZBook Fury (high-end). HP has just added a fifth—the ZBook Ultra—powered by the new AMD Ryzen AI Max Pro processor.
The ZBook Firefly is the starter option, intended for 2D and light 3D workflows, with stripped back specs. Available in both 14-inch and 16-inch variants, customers can choose between Intel or AMD processors. While the Intel Core Ultra-
based ZBook Firefly G11 is typically paired with an Nvidia RTX A500 Laptop GPU, the ZBook Firefly G11 A — featured in this review — comes with an AMD Ryzen 8000 Series ‘Zen 4’ processor with integrated Radeon graphics.
Weighing just 1.41 kg, and with a slim aluminium chassis, the 14-inch ZBook Firefly G11 A is perfect for CAD and BIM on the go. But don’t be fooled by its sleek design — this pro laptop is built to perform.
■ AMD Ryzen 9 Pro 8945HS processor (4.0 GHz base, 5.2 GHz max boost) (8 cores) with integrated AMD Radeon 780M GPU
■ 64 GB (2 x 32 GB) DDR5-5600 memory
■ 1 TB, PCIe 4.0 M.2 TLC SSD
■ 14-inch WQXGA (2,560 x 1,600), 120 Hz, IPS, antiglare, 500 nits, 100% DCI-P3, HP DreamColor display
■ 316 x 224 x 19.9 mm (w/d/h)
■ From 1.41 kg
■ Microsoft Windows 11 Pro
■ 1 year (1/1/0) limited warranty includes 1 year of parts and labour. No on-site repair.
■ £1,359 (Ex VAT) CODE: 8T0X5EA#ABU
■ www.hp.com/z
Powered by the flagship AMD Ryzen 9 Pro 8945HS processor, our review unit handled CAD and BIM workflows like a champ, even when working with some relatively large 3D models. The integrated AMD Radeon 780M graphics delivered a smooth viewport in Revit and Solidworks, except with our largest assemblies, but showed its limitations in real-time viz. In Twinmotion, with the mid-sized Snowden Tower Sample project, we recorded a mere 8 FPS at 2,560 x 1,600 resolution. While you wouldn’t ideally want to work like this day in, day out, it’s passable if you just want to set up some scenes to render, which it does pretty quickly thanks to its scalable GPU memory (see box out below).
On the CPU side, the frequency in single threaded workflows peaked at 4.84 GHz. In our Revit and Solidworks benchmarks, performance was only 25% to 53% slower than the current fastest desktop processor, the AMD Ryzen 9 9950X, with the newer ‘Zen 5’ cores. Things were equally impressive in multi-threaded workflows. When rendering in V-Ray, for example,
it delivered 4.1 GHz across its 8 cores, 0.1 GHz above the processor’s base frequency. Amazingly, it maintained this for hours, with minimal fan noise. With a compact 65W USB-C power supply, the laptop is relatively low-power.
The HP DreamColor WQXGA (2,560 x 1,600) 16:10 120Hz IPS display with 500 nits of brightness is a solid option. It delivers super-sharp detail for precise CAD work and good colours for visualisation. There are several alternatives, including a WUXGA (1,920 x 1,200) anti-glare IPS panel, with 100% sRGB coverage and a remarkable 1,000 nits, but no OLED options, as you’ll find in other HP ZBooks and the Lenovo ThinkPad P14s (AMD). Under the hood, the laptop came with a 1 TB NVMe SSD and 64 GB of DDR5-5600 memory, the maximum capacity of the machine. This is arguably more than mainstream CAD and BIM workflows demand, but bear in mind some of it needs to be allocated to graphics. Other features include fast Wi-Fi 6E, and an optional 5MP camera with privacy shutter and HP Auto Frame technology that helps keep you in focus during video calls.
There’s much to like about the HP ZBook Firefly G11 A. It’s very cost-effective, especially as it’s currently on offer at £1,359 with 1-year warranty, but there’s nothing cheap about this excellent mobile workstation. It’s extremely well-built, quiet in operation and offers an enviable blend of power and portability. All of this makes it a top pick for users of CAD and BIM software, with a sprinkling of viz on top.
Integrated graphics no longer means designers must compromise on performance. As detailed in our cover story, “The integrated GPU comes of age” (see page WS4), the AMD Ryzen 8000 Series processor impresses. It gives the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 mobile workstations enough graphics horsepower for entry-level CAD and BIM workflows, while also allowing designers, engineers and architects to dip their toes into visualisation. Take a complex motorcycle assembly in Solidworks CAD software, for example — over 2,000 components, modelled at an engineering level of detail. With the AMD Ryzen 9 Pro 8945HS processor with AMD Radeon 780M integrated graphics, our CAD viewport was perfectly smooth in shaded with edges display mode, hitting 31 Frames Per Second (FPS) at FHD resolution and 27 FPS at 4K. Enabling RealView, for realistic materials, shadows, and lighting, dialled back the real-time performance a little, with frame rates dropping to 14–16 FPS. Even though that’s below the golden 24 FPS, it was still manageable, and repositioning the model felt accurate, with no frustrating overshooting.
The processor’s trump card is the ability of the built-in GPU to address lots of memory. Unlike comparative discrete GPUs, which are fixed with 4 GB or 8 GB, the integrated AMD Radeon GPU can be assigned a lot more, taking a portion of system memory. In the BIOS of the HP ZBook Firefly 14 G11 A, one can choose between 512 MB, 8 GB or 16 GB, so long as the laptop has system memory to spare, taken from its maximum of 64 GB. 8 GB is sufficient for most CAD workflows, but the 16 GB profile can benefit design visualisation as it allows users to render more complex scenes at higher resolutions than typical entry-level discrete GPUs. This was demonstrated perfectly in arch viz software Twinmotion from Epic Games. With the mid-sized Snowden Tower Sample project, the AMD Radeon 780M integrated graphics in our HP ZBook Firefly G11 A took 437 secs to render out six 4K images, using up to 21 GB of GPU memory in the process (16 GB of dedicated and 5 GB of shared). In contrast, discrete desktop GPUs with only 8 GB of memory took significantly longer. It seems the Nvidia RTX A1000 (799 secs) and AMD Radeon W7600 (688 secs) both pay a big penalty when they run out of their fixed on-board supply and have to borrow more from system memory over the PCIe bus, which is much slower.
Of course, all eyes are on AMD’s new Ryzen AI Max Pro processor. It features significantly improved graphics, and a choice of 6, 8, 12 or 16 ‘Zen 5’ CPU cores — up to twice as many as the 8 ‘Zen 4’ cores in the AMD Ryzen 8000 Series. However, AMD’s new silicon star in waiting won’t be available until Spring 2025, which is when HP plans to ship the ZBook Ultra G1a mobile workstation. Pricing also remains under wraps.
As we wait to see how AMD’s new chips sit in the market, the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 continue to shine as excellent options for a variety of CAD and BIM workflows — offering impressive performance at very appealing price points.
This 14-inch mobile workstation stands out for its exceptional serviceability, featuring several customer-replaceable components, writes Greg Corke
The ThinkPad P14s Gen 5 (AMD) is the thinnest and lightest mobile workstation from Lenovo — 17.71mm thick and starting at 1.31kg. It’s a true 14-incher, smaller than the ThinkPad P14s Gen 5 (Intel), which has a slightly larger 14.5-inch display.
The chassis is quintessential ThinkPad — highly durable, with sturdy hinges and an understated off-black matte finish. The keyboard feels solid, complemented by a multi-touch TrackPad with a pleasingly smooth Mylar surface. True to tradition, it also comes with the ThinkPad-standard TrackPoint with its three-button setup. We’ve yet to meet anyone who actually uses this legacy pointing device, but removing it would likely spark outrage among die-hard fans. Meanwhile, the fingerprint reader is seamlessly integrated into the power button for added convenience.
The laptop stands out for its impressive serviceability, allowing the entire device to be disassembled and reassembled using basic tools — just a Phillips head screwdriver is needed to remove the back panel.
■ AMD Ryzen 5 Pro 8640HS processor (3.5 GHz base, 4.9 GHz max boost) (6-cores) with integrated AMD Radeon 760M GPU
■ 32 GB (2 x 16 GB) DDR5-5600 memory
■ 512 GB, PCIe 4.0 M.2 SSD
■ 14-inch WUXGA (1,920 x 1,200) IPS display with 400 nits
■ 316 x 224 x 17.7 mm (w/d/h)
■ From 1.31 kg
■ Microsoft Windows 11 Pro
■ 3 Year Premier Support
■ £1,209 (Ex VAT)
■ www.lenovo.com
It offers a range of customer-replaceable components, including the battery (39.3Wh or 52.5Wh options), M.2 SSD, and memory DIMMs, which thankfully aren’t soldered onto the motherboard. Beyond that, you can swap out the keyboard, trackpad, speakers, display, webcam, fan/heatsink assembly, and more.
The keyboard deserves a special mention for its top-loading design, eliminating the need to dismantle the laptop from below. Simply remove two clearly labelled screws from the bottom panel, and the keyboard pops off from the top.
The 5.0 MP webcam with IR and privacy shutter is housed in a slight protrusion at the top of the display. While this design was necessary to accommodate the higher-resolution camera (an upgrade from the Gen 4), it also doubles as a convenient handle when opening the lid.
There’s a choice of two AMD Ryzen 8000 Series processors: the Ryzen 5 Pro 8640HS (6 cores) and the Ryzen 7 Pro 8840HS (8 cores). Both have a Thermal Design Power (TDP) of 28W. Lenovo has chosen not to support the more powerful 45W models, likely due to thermal and power considerations. 45W models are available in the HP ZBook Firefly G11 A. Our review unit came with the entry-level Ryzen 5 Pro 8640HS. While capable, it has slightly lower clock speeds, two fewer cores, and a less powerful integrated GPU compared to the flagship 45W AMD Ryzen 9 Pro 8945HS.
The machine performed well in Solidworks (CAD) and Revit (BIM), but unsurprisingly came in second to the HP ZBook Firefly in all our benchmarks. The margins were small, but became more noticeable in multi-threaded workflows, especially rendering. On the plus side, the P14s was slightly quieter under full load.
Our review unit’s 14-inch WUXGA (1,920 x 1,200) IPS display is a solid, if not standout, option, offering 400 nits of brightness. One alternative is a colour-calibrated 2.8K (2,880 x 1,800) OLED screen — also 400 nits, but with 100% DCI-P3 and 120Hz refresh.
Additional highlights include up to 96 GB of DDR5-5600 memory, Wi-Fi 6E, a hinged ‘drop jaw’ Gigabit Ethernet port, 2 x USB-A and 2 x USB-C. It comes with a compact 65 W USB-C power supply.
In an era where manufacturers often prioritise ‘thinner and lighter’ over repairability, it’s great to see Lenovo bucking this trend, a move that is sure to resonate with right-to-repair advocates.
Overall, the ThinkPad P14s Gen 5 stands out as a reliable performer for CAD and BIM, offering an impressive blend of serviceability and thoughtful design.
AMD is dominating the high-end workstation market with Threadripper Pro. But how does it fare in the mainstream segment, a traditional stronghold for Intel? Greg Corke pits the AMD Ryzen 9000 Series against the Intel Core Ultra 200S to find out
After years of playing second fiddle, AMD is now giving Intel a serious run for its money. In high-end workstations, AMD Ryzen Threadripper Pro dominates Intel Xeon in most real-world benchmarks. The immensely powerful multi-core processor now plays a starring role in the portfolios of all the major workstation OEMs.
But what about the mainstream workstation market? Here, Intel has managed to maintain its dominance with Intel Core. Despite facing stiff competition from the past few generations of AMD Ryzen processors, none of HP, Dell or Lenovo has backed AMD’s volume desktop chip with any real conviction.
That’s not the case with specialist workstation manufacturers, however. For some time now, AMD Ryzen has featured strongly in the portfolios of Boxx, Scan, Armari, Puget Systems and others.
But the silicon sector moves fast. Intel and AMD recently launched new mainstream processors — the AMD Ryzen 9000 Series and Intel Core Ultra 200S Series. Both chip families are widely available from specialist workstation manufacturers, which are much more agile when it comes to introducing new tech. We’ve yet to see any AMD Ryzen 9000 or Intel Core Ultra 200S Series
workstations from the major OEMs. However, that’s to be expected as their preferred enterprise-focused variants — AMD Ryzen Pro and Intel Core vPro — have not launched yet.
The AMD Ryzen 9000 Series desktop processors, built on AMD’s ‘Zen 5’ architecture, launched in the second half of 2024 with 6 to 16 cores. AMD continues to use a chiplet-based design, where multiple CCDs (Core Complex Dies) are connected together to form a single, larger processor. The 6 and 8-core models are made from a single CCD, while the 12 and 16-core models comprise two CCDs.
The new Ryzen processors continue to support simultaneous multi-threading (SMT), AMD’s equivalent to Intel HyperThreading, which enables a single physical core to execute multiple threads simultaneously. This can help boost performance in certain multi-threaded workflows, such as ray trace rendering, but it can also slow things down. DDR5 memory is standard, up to a maximum of 192 GB. However, the effective data rate (speed) of the memory, expressed in mega transfers per second (MT/s), can vary dramatically depending on the amount of memory installed in your workstation. For example, you can
currently get up to 96 GB at 5,600 MT/s, but if you configure the workstation with 128 GB, the speed will drop to 3,600 MT/s. Some motherboards can support even faster 8,000 MT/s memory, though this is currently limited to 48 GB.
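To put those figures in context, peak theoretical bandwidth scales linearly with the data rate. Here’s a back-of-the-envelope sketch we’ve added for illustration, assuming a typical dual-channel desktop platform; real-world throughput will be lower, but the ratio between configurations holds:

```python
# Peak theoretical DDR5 bandwidth on a dual-channel desktop platform.
# Assumes 2 channels x 8 bytes per transfer; real-world throughput is lower.

def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bytes_per_xfer: int = 8) -> float:
    return mt_per_s * channels * bytes_per_xfer / 1000.0

for capacity, speed in (("96 GB", 5600), ("128 GB", 3600), ("48 GB", 8000)):
    print(f"{capacity} @ {speed} MT/s -> {peak_bandwidth_gbs(speed):.1f} GB/s")

# 96 GB @ 5,600 MT/s -> 89.6 GB/s
# 128 GB @ 3,600 MT/s -> 57.6 GB/s (roughly a 36% drop in peak bandwidth)
# 48 GB @ 8,000 MT/s -> 128.0 GB/s
```

That 36% loss of headroom is why the 128 GB configuration can hurt bandwidth-hungry workloads, as our simulation and reality modelling tests later show.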
All Ryzen 9000 Series processors come with integrated GPUs, but their performance is limited, making an add-in GPU essential for professional 3D work. They do not include an integrated neural processing unit (NPU) for AI tasks.
The Ryzen 9000 Series features two classes of processors: the standard Ryzen models, denoted by an X suffix and the Ryzen X3D variants which feature AMD 3D V-Cache technology.
There are four standard Ryzen 9000 Series models. The top-end AMD Ryzen 9 9950X has 16-cores, 32 threads, and a max boost frequency of 5.7 GHz.
The other processors have slightly lower clock speeds and fewer cores but are considerably cheaper. The AMD Ryzen 5 9600X, for example, has six cores and boosts to 5.4 GHz, but is less than half the price of the Ryzen 9 9950X. The full line up can be seen in the table right.
The Ryzen X3D lineup features significantly larger L3 caches than standard Ryzen processors. This increased cache size gives the CPU fast access to more data, instead of having to
fetch the data from slower system memory (RAM). The flagship 16-core AMD Ryzen 9 9950X3D features 128 MB of cache, but the 3D V-Cache is limited to one of its two CCDs.
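Why does a bigger L3 help? A simple average memory access time (AMAT) model captures the idea. The hit rates and latencies below are invented purely for illustration, not measured figures for any Ryzen processor:

```python
# Illustrative average memory access time (AMAT) with a larger L3 cache.
# Hit rates and latencies are invented for this example, not measured
# figures for any Ryzen processor.

def amat_ns(l3_hit_rate: float, l3_ns: float = 12.0, dram_ns: float = 80.0) -> float:
    return l3_hit_rate * l3_ns + (1.0 - l3_hit_rate) * dram_ns

standard = amat_ns(0.70)   # e.g. a standard-sized L3
x3d = amat_ns(0.85)        # 3D V-Cache raises the hit rate on cache-hungry data

print(f"standard: {standard:.1f} ns, X3D: {x3d:.1f} ns")
# standard: 32.4 ns, X3D: 22.2 ns -- hence the uplift in cache-sensitive workloads
```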
All the new ‘Zen 5’ Ryzen 9000 chips are more power efficient than the previous generation ‘Zen 4’ Ryzen 7000 Series. This has allowed AMD to reduce the Thermal Design Power (TDP) on a few of the standard Ryzen models. The top-end 16-core processors — the Ryzen 9 9950X and Ryzen 9 9950X3D — both have a TDP of 170W and a peak power of 230W. All the others are rated at 65W or 120W.
Intel Core Ultra 200S “Arrow Lake”
Intel Core Ultra marks a departure from Intel’s traditional generational numbering system (e.g., 14th Gen).
But the Intel Core Ultra 200S (codenamed Arrow Lake) is not just an exercise in branding. It marks a major change in the design of its desktop processors, moving to a tiled architecture (Intel’s term for chiplets).
Like 14th Gen Intel Core, the Intel Core Ultra 200S features two different types of cores: Performance-cores (P-cores) for primary tasks and slower Efficient-cores (E-cores) for background processing.
In a bold move, Intel has dropped Hyper-Threading from the design, a feature that was previously supported on the P-cores in 14th Gen Intel Core.
Like AMD, DDR5 memory is standard, with a maximum capacity of 192 GB. However, the data rate doesn’t vary as much depending on the amount installed. For instance, with 64 GB, the speed reaches 5,600 MT/s, while with 128 GB, it only drops slightly to 4,800 MT/s.
The integrated GPU has been improved, but most 3D workflows will still require an add-in GPU. For AI tasks, there’s an integrated NPU, but at 13 TOPS it’s not powerful enough to meet Microsoft’s requirements for Windows Copilot+.
The processor family includes three main models. At the high end, the Intel Core Ultra 9 285K features 8 P-cores and 16 E-cores. The P-cores operate at a base frequency of 3.7 GHz, with a maximum Turbo of 5.7 GHz. It has a base power of 125 W and draws 250 W at peak.
At the entry level, the Intel Core Ultra 5 245K offers 6 P-cores and 8 E-cores, with a base frequency of 4.2 GHz and a max Turbo of 5.2 GHz. It has a base power of 125 W, rising to 159 W under Turbo. The full lineup is detailed on the previous page.
For our testing, we focused on the flagship models from each standard processor
family: the AMD Ryzen 9 9950X (16 cores, 32 threads) and the Intel Core Ultra 9 285K (8 P-cores, 16 E-cores). We also included the AMD Ryzen 7 9800X3D (8 cores, 16 threads) which, at the time, was the most powerful Ryzen 9000 Series chip with 3D V-Cache. At CES a few weeks ago, AMD announced the 12-core Ryzen 9 9900X3D and the 16-core Ryzen 9 9950X3D but these 3D V-Cache processors were not available for testing.
The AMD Ryzen 9 9950X and Intel Core Ultra 9 285K were housed in very similar workstations — both from specialist UK manufacturer, Scan. Apart from the CPUs and motherboards, the other specifications were almost identical.
The AMD Ryzen 7 9800X3D workstation came from Armari. All machines featured different GPUs, but our tests focused on CPU processing, so this shouldn’t impact performance. The full specs can be seen below. Testing was done on Windows 11 Pro 26100 with power plan set to high-performance.
AMD Ryzen 9 9950X
Scan 3XS GWP-A1-R32 workstation
See review on page WS16
• Motherboard: Asus Pro Art B650 Creator
• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)
• GPU: Nvidia RTX 4500 Ada Gen
• Storage: 2TB Corsair MP700 Pro SSD
• Cooling: Corsair Nautilus 360 cooler
• PSU: Corsair RM750e PSU
Intel Core Ultra 9 285K
Scan 3XS GWP-A1-C24 workstation
See review on page WS16
• Motherboard: Asus Prime Z890-P
• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)
• GPU: Nvidia RTX 2000 Ada Gen
• Storage: 2TB Corsair MP700 Pro SSD
• Cooling: Corsair Nautilus 360 cooler
• PSU: Corsair RM750e PSU
AMD Ryzen 7 9800X3D
Armari Magnetar MM16R9 workstation
See review on page WS20
• Motherboard: ASUS ROG Strix AMD B650E-I Gaming WiFi Mini-ITX
• Memory: 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO (5,600 MT/s)
• GPU: AMD Radeon Pro W7500
• Storage: 2TB Samsung 990 Pro SSD
• Cooling: Armari SPX-A6815NGR 280mm AIO+NF-P14 redux
• PSU: Thermaltake Toughpower SFX 850W ATX3.0 Gen5
We tested all three workstations with a range of real-world applications used in AEC and product development. Where data existed, and was relevant, we also compared performance figures from older generation processors. This included mainstream models (12th, 13th and 14th Gen Intel Core, AMD Ryzen 7000) and high-end workstation processors (AMD Ryzen 7000 Threadripper and Threadripper Pro, Intel Xeon W-3400, and 4th Gen Intel Xeon Scalable).
Data for AMD Threadripper came from standard and overclocked workstations. In the benchmark charts, 90°C refers to the max temp set in the Armari Magnetar M64T7 ‘Level 1’ PBO (see Workstation Special Report 2024 - tinyurl.com/WSR24), while 900W refers to the power draw of the processor in the Comino Grando workstation (see page WS22).
The comparisons aren’t entirely apples-to-apples — older machines were tested with different versions of Windows 11, as well as varying memory, storage, and cooling configurations. However, the results should still provide a solid approximation of relative performance.
Dassault Systèmes Solidworks (CAD) and Autodesk Revit (BIM) are bread and butter tools for designers, engineers, and architects. For the most part, these applications are single-threaded, although some processes are able to utilise a few CPU cores. Ray-trace rendering stands out as the exception, taking full advantage of all available cores.
In the Autodesk Revit 2025 RFO v3 benchmark the AMD Ryzen 9 9950X came out top in the model creation and export tests, in which Intel has traditionally held an edge. The AMD Ryzen 7 9800X3D performed respectably, but with its slightly lower maximum frequency, lagged behind a little.
In Solidworks 2022, things were much more even. In the rebuild, convert, and simulate subtests of the SPECapc benchmark, there was little difference between the AMD Ryzen 9 9950X and the Intel Core Ultra 9 285K. However, in the mass properties and boolean subtests, the Ryzen 9 9950X pulled ahead, only to be outshined by the Ryzen 7 9800X3D. Despite the 9800X3D having a lower clock speed, it looks like the additional cache provides a significant performance boost.
But how do the new chips compare to older generation processors? Our data shows that while there are improvements, the performance gains are not huge.
AMD’s performance increases ranged from 7% to 22% generation-on-generation, although the Ryzen 9 9950X was 9% slower in the mass properties test. Intel’s improvements were more modest, with a maximum gain of just 9%. In fact, in three tests, the Intel Core Ultra 9 285K was up to 19% slower than its predecessor.
‘‘ AMD’s cache-rich Ryzen 9000 X3D variants look particularly appealing for select workflows where having superfast access to a large pool of frequently used data makes them shine ’’
Looking back over the last three years, Intel’s progress appears incremental. Compared to the Intel Core i9-12900K, launched in late 2021, the Intel Core Ultra 9 285K is only up to 26% faster.
Ray trace rendering
Ray trace rendering is exceptionally multithreaded, so can take full advantage of all CPU cores. Unsurprisingly, the processors with the highest core counts — the AMD Ryzen 9 9950X (16 cores) and Intel Core Ultra 9 285K (24 cores) — topped our tests.
The Ryzen 9 9950X outperformed the Intel Core Ultra 9 285K in several benchmarks, delivering faster performance in V-Ray (17%), CoronaRender (15%), and KeyShot (11%). Intel’s decision to drop Hyper-Threading may have contributed to this performance gap, though Intel still claimed a slight lead in Cinebench, with a 5% advantage.
Gen-on-gen improvements were modest. Intel showed gains of 4% to 17%, while AMD delivered between 5% and 11% faster performance.
We also ran stress tests to assess sustained performance. In several hours of rendering in V-Ray, the Ryzen 9 9950X held steady at 4.91 GHz, while the Ryzen 7 9800X3D maintained 5.17 GHz. Meanwhile, the P-cores of the Intel Core Ultra 9 285K reached 4.86 GHz.
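For readers who want to run a similar check on their own machines, sustained clock behaviour is easy to log. This is a minimal sketch using the cross-platform psutil library, our own illustration rather than the tooling used for the tests in this article:

```python
# Minimal sketch: log sustained CPU clocks during a long render using the
# psutil library (pip install psutil). Illustration only, not the tooling
# used for the benchmarks in this article.
import time

import psutil

samples = []
for _ in range(60):                    # sample once a second for a minute
    freq = psutil.cpu_freq()           # may be None on unsupported platforms
    if freq is not None:
        samples.append(freq.current)   # current clock in MHz
    time.sleep(1)

if samples:
    print(f"mean clock over run: {sum(samples) / len(samples) / 1000:.2f} GHz")
```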
Power consumption is another important consideration. The Ryzen 9 9950X drew 200W, whereas the Intel Core Ultra 9 285K peaked at 240W — slightly lower than its predecessor, 14th Gen Intel Core.
Since rendering scales exceptionally well with higher core counts, the best performance is achieved with high-end workstation processors like AMD Ryzen Threadripper Pro.
Simulation (FEA and CFD)
Engineering simulation encompasses Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD),
both of which are extremely demanding computationally.
FEA and CFD utilise a variety of solvers, each with unique behaviours, and performance can vary depending on the dataset. Generally, CFD scales well with additional CPU cores, allowing studies to solve significantly faster. Moreover, CFD performance benefits greatly from higher memory bandwidth, making these factors critical for optimal results.
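Amdahl’s law offers a rough framework for thinking about this kind of scaling, as sketched below. The parallel fraction is illustrative; real solvers are also constrained by memory bandwidth, which this simple model ignores:

```python
# Amdahl's law: a rough upper bound on multi-core speedup. The parallel
# fraction is illustrative; real CFD solvers are also constrained by
# memory bandwidth, which this simple model ignores.

def speedup(cores: int, parallel_fraction: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (8, 16, 24):
    print(f"{cores} cores -> {speedup(cores, 0.95):.2f}x")

# 8 cores -> 5.93x, 16 cores -> 9.14x, 24 cores -> 11.16x
```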
For our testing, we selected three workloads from the SPECworkstation 3.1 benchmark and one from SPECworkstation 4.0. The CFD tests included Rodinia (representing compressible flow), WPCcfd (modelling combustion and turbulence), and OpenFoam with XiFoam solver. For FEA, we used CalculiX, which simulates the internal temperature of a jet engine turbine.
The Intel Core Ultra 9 285K claimed the top spot in all the tests. The AMD Ryzen 9 9950X followed in second place, except in the OpenFoam benchmark, where it was outperformed by the Ryzen 7 9800X3D — likely due to the additional cache.
Of course, for those deeply invested in simulation, high-end workstation processors, such as AMD Ryzen Threadripper Pro and Intel Xeon offer a significant advantage, thanks to their higher core counts and superior memory bandwidth. For a deeper dive, check out last year’s workstation special report: www.tinyurl.com/WSR24.
Reality modelling is becoming prevalent in the AEC sector. Raw data captured by drones (photographs / video) and terrestrial laser scanners must be turned into point clouds and reality meshes — a process that is very computationally intensive.
We tested a range of workflows using three popular tools: Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.
As many of the workflows in these applications are multi-threaded, we were surprised that the 8-core AMD Ryzen 9800X3D outperformed the 16-core AMD Ryzen 9950X and 24-core Intel Core Ultra 9 285K in several tests. This is likely due to its significantly larger cache, but possibly down to its single CCD
design, which houses all 8 CPU cores.
In contrast, the 16-core AMD Ryzen 9950X, which is made up of two 8-core CCDs, may suffer from latency when cores from different CCDs need to communicate with each other. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare.
The other point worth noting is the impact of memory speed. In some workflows we experienced dramatically faster computation with faster memory. Simultaneous multi-threading (SMT) also had an impact on performance.
We explore reality modelling in much more detail on page WS33, where you will also find all the benchmark results.
For the past few years, Intel and AMD have been battling it out in the mainstream processor market. Intel has traditionally dominated single threaded and lightly threaded workflows like CAD, BIM, and reality modelling, while AMD has been the go-to choice for multithreaded rendering.
But the landscape is shifting. With the ‘Zen 5’ AMD Ryzen 9000 Series, AMD is starting to take the lead in areas where Intel once ruled supreme. For instance, in Solidworks CAD, AMD is delivering solid generation-on-generation performance improvements, while Intel appears to be stagnating. In fact, some workflows show the Intel Core Ultra 200S trailing behind older 14th Gen Intel Core processors.
That said, for most workstation users, AMD’s rising stock won’t mean much unless major OEMs like Dell, HP, and Lenovo start giving Ryzen the same level of attention they’ve devoted to AMD Ryzen Threadripper Pro. A lot will depend on AMD releasing Pro variants of the Ryzen 9000 Series to meet the needs of enterprise users.
For everyone else relying on specialist manufacturers, workstations with the latest Intel and AMD chips are already available. This includes AMD’s cache-rich Ryzen 9000 X3D variants, which look particularly appealing for select workflows where having superfast access to a large pool of frequently used data makes them shine.
GWP-A1-C24 & GWP-A1-R32
Between these two attractive desktops, Scan has most bases covered in AEC and product development, from CAD/BIM and visualisation to simulation, reality modelling and beyond, writes Greg Corke
Specialist workstation manufacturers like Scan often stand out from the major OEMs, as they offer the very latest desktop processors. The Scan 3XS GWP-A1-C24 features the new “Arrow Lake” Intel Core Ultra 200S Series (with the C in the model name standing for Core) while the Scan 3XS GWP-A1-R32 offers the ‘Zen 5’ AMD Ryzen 9000 Series (R for Ryzen). In contrast, Dell, HP, and Lenovo currently rely on older 14th Gen Intel Core processors, while their AMD options are mostly limited to the high-end Ryzen Threadripper Pro 7000 Series.
Both Intel and AMD machines share several Corsair branded components, including 64 GB (2 x 32GB) of Corsair Vengeance DDR5 5600 memory, a 2TB Corsair MP700 Pro SSD, a Corsair Nautilus 360 cooler, and Corsair RM750e PSU.
The 2TB NVMe SSD delivers blazingly fast read and write speeds combined with solid endurance. In CrystalDiskMark it delivered 12,390 MB/sec sequential read and 11,723 MB/sec sequential write. Its endurance makes it well-suited for intensive read / write workflows, such as reality modelling. Corsair backs this up with a five-year warranty or a rated lifespan of 1,400 total terabytes written (TBW), whichever comes first.
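That endurance rating is easier to judge as a lifetime estimate. Here’s a quick worked example, with a hypothetical daily write volume:

```python
# Rough lifetime estimate from the 1,400 TBW endurance rating.
# The daily write volume is a hypothetical figure for illustration.

tbw_rating_tb = 1400      # rated total terabytes written
daily_writes_tb = 0.5     # assume a heavy 500 GB written per day

years = tbw_rating_tb / daily_writes_tb / 365
print(f"~{years:.1f} years at {daily_writes_tb} TB/day")   # ~7.7 years
```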
■ Intel Core Ultra 9 285K processor (3.7 GHz, 5.7 GHz boost) (24 cores - 8 P-cores + 16 E-cores)
■ Nvidia RTX 2000 Ada Generation GPU (16 GB)
■ 64 GB (2 x 32 GB) Corsair Vengeance DDR5 5,600 memory
■ 2TB Corsair MP700 Pro SSD
■ Asus Prime Z890-P motherboard
■ Corsair Nautilus 360 cooler
■ Corsair RM750e Power Supply Unit
■ Fractal North Charcoal Mesh case (215 x 469 x 447mm)
■ Microsoft Windows 11 Pro 64-bit
■ 3 Years warranty – 1st Year Onsite, 2nd and 3rd Year RTB (Parts and Labour)
■ £2,350 (Ex VAT)
■ scan.co.uk/3xs
Our Intel-based Scan 3XS GWP-A1-C24 workstation was equipped with a top-end Intel Core Ultra 9 285K CPU and an entry-level workstation GPU, the Nvidia RTX 2000 Ada Generation. This hardware pairing is well-suited to CAD, BIM, and entry-level viz workflows, as well as CPU-intensive tasks like point cloud processing, photogrammetry, and simulation.
From the exterior, both Scan workstations share the same sleek design, housed in the Fractal North Charcoal Mesh case with dark walnut wood strips on the front. While wood accents in PC cases can sometimes feel contrived, this ATX Mid-Tower strikes an excellent balance between form and function. Its elegant, minimalist aesthetic enhances the overall visual appeal without compromising airflow. Behind the wooden façade, an integrated mesh ensures efficient ventilation, with air drawn in through the front and expelled through the rear and top. Adding to its refined look, the case has understated brass buttons and ports on the top, including two USB 3.0 Type-A, one USB 3.1 Gen2 Type-C, as well as power button, mic, and HD audio ports.
The downside of the chassis is that it’s relatively large, measuring 215 x 469 x 447mm (W x H x D). However, this spacious design makes accessing internal components incredibly easy, a convenience further enhanced by Scan’s excellent trademark cable management.
The all-in-one (AIO) liquid CPU cooler features a 360mm radiator, bolted on to the top of the chassis. Cooled by three low-duty RS120 fans, both machines run cool and remain very quiet, even when rendering for hours.
The Nvidia RTX 2000 Ada Generation is a compact, low-profile, dual-slot GPU featuring four mini DisplayPort connectors. With a conservative power rating of 70W, it gets all its power directly from the Asus Prime Z890-P motherboard’s PCIe slot. Despite its modest power requirements, it delivered impressive graphics performance in CAD and BIM, easily handling all our 3D modelling tests in Solidworks and Revit. 16 GB of on-board memory allows it to work with fairly large visualisation datasets as well.
In real-time visualisation software, don’t expect silky smooth navigation with large models at high resolutions. However, 3D performance is still acceptable. In Chaos Enscape, for example, we got 14 frames per second (FPS) at 4K with our demanding school project test scene.
Outputting ray trace renders in KeyShot, V-Ray and Twinmotion was noticeably slower compared to more powerful Nvidia RTX GPUs. That said, it’s still a viable solution if you’re willing to wait. In Twinmotion, for example, it cranked out five 4K path traced renders in 1,100 seconds, just under twice as long as it took the Nvidia RTX 4500 Ada Generation in Scan’s Ryzen-based workstation. In CPU workflows, the Intel Core Ultra 9 285K CPU delivered mixed results. While it outperformed the AMD Ryzen 9 9950X in a few specific workflows (as detailed in our in-depth article on page WS10), the performance gains over 14th Gen Intel Core processors, which launched in Q4 2023, were relatively minor. In fact, in some workflows, it even lagged behind Intel’s previous generation flagship mainstream CPU, the Intel Core i9-14900K.
One advantage that Scan’s Intel workstation holds over its AMD counterpart is in memory performance. Both machines were configured with 64 GB of DDR5 RAM running at 5,600 MT/s. However, when memory is increased to 128 GB, filling all four DIMM slots, the memory clock speed must be reduced to keep everything stable. On the Intel system, it only drops a little, down to 4,800 MT/s, but on the AMD system, it’s much more significant, falling to 3,600 MT/s. This reduction can have a notable impact on performance in memory-intensive tasks like simulation and reality modelling, giving the Intel system an edge when working with large datasets in select workflows.
Our AMD-based Scan 3XS GWP-A1-R32 workstation is set up more for visualisation, with an Nvidia RTX 4500 Ada Generation GPU (24 GB) paired with the top-end AMD Ryzen 9 9950X CPU.
The full length double height Nvidia GPU is rated at 210W, so must draw some of its power directly from the 750W power supply unit (PSU). It comes with four DisplayPort connectors.
The RTX 4500 Ada marks a big step up from the RTX 2000 Ada. In real-time viz software Enscape we got double the frame
rates at 4K resolution (28.70 FPS), and more than double the performance in most of our ray trace rendering tests. With 50% more on-board memory, you also get more headroom for larger viz datasets.
The CPU performance of the system was equally impressive. While the previous generation Ryzen 7000 Series enjoyed a lead over its Intel equivalent in multi-threaded ray tracing, it lagged behind in single threaded workflows. But with the Ryzen 9000 Series that’s no longer the case. AMD has significantly improved single threaded performance gen-on-gen, while Intel’s performance has stagnated a little. It means AMD is now sometimes the preferred option in a wider variety of workflows.
But the Scan 3XS GWP-A1-R32 is not without fault. In select reality modelling workflows, it was significantly slower than its Intel counterpart. We expect this is down to its dual chiplet (CCD) design, something we explore in more detail on page WS10.
Also, as mentioned earlier, those that need more system memory will have to accept significantly slower memory speeds on AMD than with Intel. This can impact performance dramatically. When aligning images in Capturing Reality, for instance, going from 64 GB (5,600 MT/s) to 128 GB (3,600 MT/s) on the AMD workstation saw computation times increase by as much as 64%. And in simulation software, OpenFoam CFD, performance dropped by 31%.
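Those figures line up roughly with what the memory clocks predict. If a task were entirely bound by memory bandwidth, run times would stretch in proportion to the drop in data rate, a quick calculation we’ve sketched here:

```python
# If a workload were purely memory-bandwidth-bound, run time would scale
# with the inverse of the data rate.
predicted = 5600 / 3600 - 1
print(f"predicted slowdown if fully bandwidth-bound: +{predicted:.0%}")   # +56%

# RealityCapture's +64% is in that ballpark (capacity and timings also
# changed), while OpenFoam's +31% suggests only a partial dependency.
```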
Both Scan 3XS workstations are impressive desktops, offering excellent performance housed in aesthetically pleasing chassis. The choice between Intel and AMD depends on the specific demands of your workflows.
In terms of CAD and BIM, performance is similar across both platforms, as shown in our benchmark charts on page WS25. For visualisation, AMD holds a slight edge, but this may not be a deciding factor if your visualisation tasks rely more on GPU computation rather than CPU computation.
When it comes to reality modelling, Intel may not always have the lead, but it offers more consistent performance across various tasks. Additionally, Intel’s support for faster memory at larger capacities could make a significant difference. With 128 GB, Intel can achieve noticeably faster memory speeds, which translates into potential performance gains in certain workflows.
Ultimately, both machines are fully customisable, allowing you to select the components that best match your specific needs. Whether you prioritise raw processing power, memory speed, or GPU performance, Scan offers flexibility to tailor the workstation to your requirements.
This compact desktop with liquid-cooled ‘Zen 5’ AMD Ryzen 9000 Series processor and Nvidia RTX 5000 Ada Generation GPU is a powerhouse for design viz, writes Greg Corke
In the world of workstations, Boxx is somewhat unique. Through its extensive reseller channel, it has the global reach of a major workstation OEM, but the technical agility of a specialist manufacturer.
Liquid cooling is standard across many of its workstations, and you can always expect to see the latest processors soon after launch. And there’s a tonne to choose from. In addition to workstation staples like Intel Core, Intel Xeon, AMD Ryzen Threadripper Pro, and (to a lesser extent) AMD Ryzen, Boxx goes one step further with AMD Epyc, a dual socket processor typically reserved for servers. The company also stands out for its diverse range of workstation form factors, including desktops, rack-mounted systems, and high-density datacentre solutions.
Boxx played a key role in AMD’s revival in the workstation market, debuting the AMD Ryzen-powered Apexx A3 in 2019.
The latest version of this desktop workstation may look identical on the outside, but inside, the new ‘Zen 5’ AMD Ryzen 9000 Series chip is a different beast entirely. 2019’s ‘Zen 2’ AMD Ryzen 3000 Series stood out for its multithreaded performance but fell short of Intel in single-threaded tasks critical for CAD and BIM. Now, as we explore in our ‘Intel vs. AMD’ article on page WS10 , AMD has the edge in a much broader range of workflows.
The chassis offers several practical features. The front mesh panel easily clips off, providing access to a customerreplaceable filter. The front I/O panel is angled upward for convenient access to the two USB 3.2 Gen 2 (Type-A) ports and one USB 3.2 Gen 2 (Type-C) port. Around the back, you’ll find an array of additional ports, including two USB 4.0 (Type-C), three USB 3.2 Gen 1 (Type-A), and five USB 3.2 Gen 2 (Type-A).
For connectivity, there’s fast 802.11be Wi-Fi 7 with rear-mounted antennas, although most users — particularly those working with data from a central server — are likely to utilise the 5 Gigabit Ethernet LAN for maximum speed and reliability.
■ AMD Ryzen 9 9950X processor (4.3 GHz, 5.7 GHz boost) (16-cores, 32 threads)
■ 96 GB (2 x 48 GB) Crucial DDR5 memory (5,600 MT/s)
■ 2TB Crucial T705 NVMe PCIe 5.0 SSD
■ Asrock X870E Taichi motherboard
■ Nvidia RTX 5000 Ada Generation GPU (32 GB)
■ Asetek 624T-M2 240mm All-in-One liquid cooler
■ Boxx Apexx A3 case (174 x 388 x 452mm)
■ Microsoft Windows 11 Pro
■ 3 Year standard warranty
■ USD $8,918 (starting at $3,655)
■ www.boxx.com www.boxx-tech.co.uk
The chassis layout is different to most other workstations of this type, with the motherboard flipped through 180 degrees, leaving the rear I/O ports at the bottom and the GPUs at the top — upside down.
To save space, the power supply sits almost directly in front of the CPU. This wouldn’t be possible in an air-cooled system, because the heat sink would get in the way. But with the Boxx Apexx A3, the CPU is liquid cooled, and the compact All-in-one (AIO) Asetek closed loop cooler draws heat away to a 240mm radiator, located at the front of the machine.
The Boxx Apexx A3 is crafted from aircraft-grade aluminium, delivering a level of strength that surpasses off-the-shelf cases used by many custom manufacturers. Considering it can host up to two high-end GPUs, it’s surprisingly compact, coming in at 174 x 388 x 452mm, significantly smaller than the other AMD Ryzen 9000-based workstation in this report — the Scan 3XS GWP-A1-R32 — which we review on page WS16.
Our test machine came with the 16-core AMD Ryzen 9 9950X, the flagship model in the standard Ryzen 9000 Series. Partnered with the massively powerful Nvidia RTX 5000 Ada Generation GPU, this workstation screams design visualisation. And it has some serious clout.
Our test machine’s focus on GPU computation means the AMD Ryzen 9 9950X’s 16 cores may spend a good amount of time under utilised. Opting for a CPU with fewer cores could save you some cash, though it would come with a slight reduction in single-core frequency.
As it stands, the system delivers impressive CPU benchmark scores across CAD, BIM, ray-trace rendering, and reality modelling. However, in some tests, it was narrowly outperformed by the 3XS GWP-A1-R32, and when pushing all 16 cores to their limits in V-Ray, fan noise was a little bit more noticeable (although certainly not loud).
Boxx configured our test machine with 96 GB of Crucial DDR5 memory, carefully chosen to deliver the maximum capacity with the fastest performance. With two 48 GB modules, it can run at 5,600 MT/s. Anything above that, up to a maximum of 192 GB, would see speeds drop significantly.
Rounding out the specs is a 2TB Crucial T705 SSD, the fastest PCIe 5.0 drive we’ve tested. It delivered exceptional sequential read/write speeds in CrystalDiskMark, clocking in at an impressive 14,506 MB/s read and 12,573 MB/s write — outpacing the Corsair MP700 Pro in the Scan 3XS workstation. However, it’s rated for 1,200 total terabytes written (TBW), giving it slightly lower endurance.
The Asrock X870E Taichi motherboard includes room for a second SSD, while the chassis features two hard disk drive (HDD) cradles at the top. However, with modern SSDs offering outstanding price-performance, these cradles are likely to remain empty for most users.
In Twinmotion, the RTX 5000 Ada delivered five 4K path traced renders in a mere 342 seconds, and in Lumion four FHD ray trace renders in 70 seconds. That’s more than three times quicker than an Nvidia RTX 2000 Ada. And with 32 GB of on-board memory to play with, the GPU can handle very complex scenes.
The Boxx Apexx A3 is a top-tier compact workstation, fully customisable and built to order, allowing users to select the perfect combination of processors to meet their needs. Among specialist system builders, Boxx is probably the closest competitor to the major workstation OEMs like Dell, HP, and Lenovo. However, none of these major players have yet released an AMD Ryzen 9000-based workstation — and given past trends, there’s no guarantee they will. This gives Boxx a particular appeal, especially for companies seeking a globally available product powered by the latest ‘Zen 5’ AMD Ryzen processors.
Designed for AI – training, fine tuning, inference, deep learning and more
Boosted in Performance by up to 50% –outperform standard air-cooled machines
Reliable in Operation within premises up to 40°C – stays cool and quiet under demanding conditions
Unique Configurations – Scale up to 8 high-end GPUs (NVIDIA RTX 6000 ADA, H200, RTX 5090)
Optimized with leading AI frameworks and inference tools – Stable Diffusion, Llama, Mid Journey, Hugging Face, PyTorch, TensorFlow, Character.AI, QuillBot, DALLE and more
Engineering as Art
Meticulously selected and engineered components maximize longevity and performance
Controller – the System’s Core Independent, autonomous monitoring ensures constant oversight and stability
Full-Cover Comino CPU Water Block
Cools both CPU and power circuitry for peak performance
Single-Slot Comino GPU Water Blocks
Uniquely designed for top efficiency and a dense compute
API Integration
Compatible with modern monitoring tools like Grafana and Zabbix
Comprehensive Sensors
Track temperatures, airflow, coolant level, flow and more for precise analysis
Compact, Modular & Easily Serviced Chassis
Quick access for minimal downtime
* GRANDO systems are compatible with EPYC, Threadripper, Xeon and Xeon W CPUs, NVIDIA RTX A6000, A40, RTX 6000 ADA, L40S, A100, H100, H200, RTX 3090, RTX 4090, RTX 5090, AMD Radeon PRO W7900, Radeon 7900XTX GPUs. ** Server equipped with redundant power supply system for 24/7 stable operation.
This compact desktop workstation, built around the gamer-favourite Ryzen X3D processor, is also a near perfect fit for reality modelling, writes Greg Corke
The first AMD Ryzen processor to feature AMD 3D V-Cache technology launched in 2022. Since then, newer versions have become the processors of choice for hardcore gamers. This is largely thanks to the additional cache — a superfast type of memory connected directly to the CPU — which can dramatically boost performance in certain 3D games. As we discovered in our 2023 review of the ‘Zen 4’ AMD Ryzen 9 7950X3D, that applies to some professional workflows too.
With the launch of the ‘Zen 5’ AMD Ryzen 9000 Series, AMD has opted for a staggered release of its X3D variants. The 8-core AMD Ryzen 7 9800X3D was first out the blocks in November 2024. Now the 12-core AMD Ryzen 9 9900X3D and 16-core AMD Ryzen 9 9950X3D have just been announced and should be available soon.
UK manufacturer Armari has been a long-term advocate of AMD Ryzen processors and has now built a brand-new workstation featuring the AMD Ryzen 9800X3D. With a 120W TDP, rising to 162W under heavy loads, it’s relatively easy to keep cool. This allows Armari to fit the chip into a compact Coolermaster MasterBox NR200P Mini ITX case, which saves valuable desk space. Even though the components are crammed in a little, the 280mm AIO CPU cooler ensures the system runs quiet. While the fans spin up during all-core tasks like rendering in V-Ray, the noise is perfectly acceptable for an office environment.
But this is not a workstation you’d buy for visualisation or indeed CAD or BIM. For those workflows, the non-X3D AMD Ryzen 9000 Series processors would be a better fit, and are also available as options for this machine. For instance, the 16-core AMD Ryzen 9 9950X has a significantly higher single-core frequency to accelerate CAD, and double the number of cores to cut render times in half.
The X3D chips shine in tasks that benefit from fast access to large amounts of cache. As we detail in our dedicated article on page WS34, reality modelling is one such workflow. In fact, in many scenarios, Armari’s compact desktop workstation not only outperformed the 16-core AMD Ryzen 9 9950X processor but the 96-core AMD Ryzen Threadripper Pro 7995WX as well.
■ AMD Ryzen 7 9800X3D processor (4.7 GHz, 5.2 GHz boost) (8-cores, 16 threads)
■ 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO memory (5,600 MT/s)
■ 2TB Samsung 990 Pro M.2 NVMe PCIe4.0 SSD
■ ASUS ROG Strix AMD B650E-I Gaming Wifi Mini-ITX Motherboard
■ AMD Radeon Pro W7500 GPU (8 GB)
■ Armari SPX-A6815NGR 280mm AIO+NF-P14 redux CPU Cooler
■ Coolermaster MasterBox NR200P Mini ITX case (376 x 185 x 292mm)
■ Microsoft Windows 11 Pro
■ Armari 3 Year basic warranty
■ £1,999 (Ex VAT)
■ www.armari.com
The test machine came with 96 GB (2 x 48 GB) of Corsair Vengeance DDR5-6000C30 Expo memory, running at 5,600 MT/s. While the system supports up to 192 GB, anything above 96 GB requires the memory speed to be lowered to 3,600 MT/s. This reduction can lead to noticeable performance drops in some memory-intensive reality modelling workflows.
Armari, true to form, is continually looking for ways to improve performance. Just before we finalised this review, the company sent an updated machine with 48 GB (2 x 24 GB) of faster 8,000 MT/s G.Skill Trident Z5 Royal Neo DDR5 memory, paired with the newer Asus AMD ROG Strix B850-I ITX motherboard.
In our tests, this new setup provided a slight (1-2%) performance boost in some reality modelling tasks. However, since our most demanding test requires 60 GB of system memory and 48 GB is the current maximum capacity for this memory speed, it’s hard to fully gauge its potential. For the time being, the higher-speed memory feels like a step toward future improvements, pending the release of larger-capacity kits.
Having more cache probably isn’t the only reason why the 9800X3D processor excels. Because the chip is made from a single CCD, there’s less latency between cores. We delve into this further in our reality modelling article on page WS34. It will be fascinating to see how the 12-core and 16-core X3D chips compare.
However, the workstation is not quite the perfect match for mainstream reality modelling. While the AMD Radeon Pro W7500 GPU is great for CAD, it’s incompatible with select workflows in Leica Cyclone 3DR and RealityCapture from Epic Games, namely those accelerated by Nvidia CUDA. Here, the Nvidia RTX A1000, an equivalent 8 GB GPU, would be the better option.
If we were to look for faults, it would be that the machine’s top panel connections are USB-A only, which is too slow to transfer terabytes of reality capture data quickly, but Armari tells us that production systems will have a front USB-C Gen 2x2 port.
Overall, Armari has done it again with another outstanding workstation. It’s not just powerful — it’s compact and portable as well — which could be a big draw for construction firms that need to process reality data while still on site.
‘‘ The Armari Magnetar MM16R9 is not just powerful — it’s compact and portable — which could be a big draw for construction firms that need to process reality data on site ’’
This desktop behemoth blurs the boundaries between workstation and server and, with an innovative liquid cooling system, delivers performance like no other, writes Greg Corke
Firing up a Comino Grando feels more like prepping for take-off than powering on a typical desktop workstation. Pressing both front buttons activates the bespoke liquid cooling system, which then runs a series of checks, before booting into Windows or Linux.
The cooling system is an impressive feat of precision engineering. Comino manufactures its own high-performance water blocks out of copper and stainless steel. And these are not just for the CPU. Unlike most liquid cooled workstations, the Comino Grando takes care of the GPUs and motherboard VRMs as well. It’s only the system memory, and storage that are cooled by air in the traditional way.
Not surprisingly, this workstation is all about ultimate performance. This is exemplified by the 96-core AMD Threadripper Pro 7995WX processor, which Comino pushes to the extreme. While most air-cooled Threadripper Pro workstations keep the processor at its stock 350W, Comino cranks it up to an astonishing 900W+, with the CPU settling around 800W during sustained multi-core workloads. That’s a lot of electricity to burn.
The result, however, is truly astonishing all-core frequencies. During rendering in Chaos V-Ray, the 96-core chip initially hit an incredible 4.80 GHz, before landing on a still-impressive 4.50 GHz. Even some workstations with fewer cores struggle to
maintain these all-core speeds.
Not surprisingly, the test scores were off the chart. In the V-Ray 5.0 benchmark, it delivered an astonishing score of 145,785 — a massive 42% faster than an air-cooled Lenovo ThinkStation P8, with the same 96-core processor.
The machine also delivered outstanding results in our simulation benchmarks. Outside of dual Intel Xeon Platinum workstations — which Comino also offers — it’s hard to imagine anything else coming close to its performance.
As you might expect, running a machine like this generates some serious heat. Forget portable heaters — rendering genuinely became the best way to warm up my office on a chilly winter morning.
While the CPU delivers a significant performance boost, the liquid cooled GPUs run at standard speeds. Comino replaces the original air coolers with a slim water block, a complex process that’s explained well in this video (www.tinyurl.com/Comino-RTX)
■ AMD Ryzen Threadripper Pro 7995WX processor (2.5 GHz, 5.1 GHz boost) (96-cores, 192 threads)
■ 256 GB (8 x 32 GB) Kingston RDIMM DDR5 6400Mhz CL32 REG ECC memory
■ 2TB Gigabyte Aorus M.2 NVMe 2280 (PCIe 4.0) SSD
■ Asus Pro WS WRX90E-SAGE motherboard
■ 2 x Nvidia RTX 6000 Ada Gen GPU (48 GB)
■ Comino custom liquid cooling system
■ Comino Grando workstation chassis (439 x 681 x 177mm)
■ Microsoft Windows 11 Pro
■ 2-year warranty (upgradable to up to 5 years with on-site support)
■ £31,515 (Ex VAT)
■ £33,515 (Ex VAT) with 4 x 4TB M.2 SSD RAID 0 upgrade
■ £24,460 (Ex VAT) with 2 x AMD Radeon Pro W7900 GPUs
■ www.grando.ai
This design allows each GPU to occupy just a single PCIe slot on the motherboard, compared to the two or three slots required by the same high-end GPU in a typical workstation. Normally, modifying a GPU like this would void the manufacturer’s warranty. However, Comino offers a full two years, covering the entire workstation, with the option to extend up to five.
The machine can accommodate up to seven GPUs — though these are limited to mid-range models. For high-end professional GPUs, support is capped at four cards, although Comino offers a similar server with more power and
noisier fans that can host more. Options include the Nvidia RTX 6000 Ada Generation (48 GB), Nvidia L40S (48 GB), Nvidia H100 (80 GB), Nvidia A100 (80 GB), and AMD Radeon Pro W7900 (48 GB). Keen observers will notice many of these GPUs are designed for compute workloads, such as engineering simulation and AI. Most notably, a few are passively cooled, designed for datacentre servers, so are not available in traditional workstations.
For consumer GPUs, the system can handle up to two cards, such as the Nvidia GeForce RTX 4090 (24 GB) and AMD Radeon 7900 XTX (24 GB). Comino is also working on a solution for 2 x Nvidia H200 (141 GB) or 2 x Nvidia GeForce RTX 5090 (32 GB).
Our test machine was equipped with a pair of Nvidia RTX 6000 Ada Generation GPUs. These absolutely ripped through our GPU rendering benchmarks, easily setting new records in tests that are multi-GPU aware. Compared to a single Nvidia RTX 6000 Ada GPU, V-Ray was around twice as fast. The gains in other apps were less dramatic, with an 83% uplift in Cinebench and 65% in KeyShot.
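Expressed as scaling efficiency, and taking the quoted figures at face value, those results look like this:

```python
# Scaling efficiency of the dual RTX 6000 Ada results quoted above,
# taking the figures at face value: efficiency = speedup / GPU count.

speedups = {"V-Ray": 2.00, "Cinebench": 1.83, "KeyShot": 1.65}

for app, s in speedups.items():
    print(f"{app}: {s:.2f}x speedup -> {s / 2:.0%} scaling efficiency")

# V-Ray: 100%, Cinebench: 92%, KeyShot: 83% (only V-Ray scales near-linearly)
```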
Comino’s liquid cooling system is custom-built, featuring bespoke water blocks and a 450ml coolant reservoir with integrated pumps.
Coolant flows through high-quality flexible rubber tubing, passing from component to component before completing the loop via a large 360mm radiator located at the rear of the machine.
Positioned alongside this radiator are three (yes, three) 1,000W SFX-L PSUs.
The system is cooled by a trio of Noctua 140mm 3,000 RPM fans, which drive airflow from front to back. Cleverly, the motherboard is housed in the front section of the chassis, ensuring the coldest air passes over the RAM and other air-cooled components.
Users are given control over the fans. Using the buttons on the front of the machine, one can select from max performance, normal, silent, or super silent temperature profiles — each responding exactly how you’d expect in terms of acoustics.
Naturally, coolant evaporates over time and will need occasional topping up. Comino recommends checking levels every three months, which is easy to do via the reservoir window on the front panel. A bottle of coolant is included in the box for convenience.
All of our testing was conducted in ‘normal mode,’ where the noise level was consistent and acceptable. The ‘max performance’ mode, however, was much louder — better suited to a server room — and didn’t even show a significant performance boost. On the other hand, ‘super silent’ mode delivered an impressively quiet experience, with only a 3.5% drop in V-Ray rendering performance.
The front LED text display is where tech enthusiasts can geek out, cycling through metrics like flow rates, fan and pump RPM, and the temperatures of the air, coolant, and components. For a deeper dive, the Comino Monitoring System offers access to this data and more via a web browser.
Maintenance and upgrades
With such an advanced cooling system, the Comino Grando can feel a bit intimidating. Thankfully, end user maintenance is surprisingly straightforward.
Swapping out a GPU, while more intricate than on a standard desktop, isn’t as challenging as you might expect. For upgrades, Comino can ship replacement GPUs pre-fitted with custom cooling blocks and rubber tubes. For our testing, Comino supplied a pair of AMD Radeon Pro W7900s. Despite their single-slot design, these GPUs are deceptively heavy, weighing in at 1.9 kg each — significantly more than the 1.2 kg of a stock W7900 fitted with its standard cooler. It’s easy to see why a crossbar bracket is essential to keep these hefty GPUs securely in place.
Installing the GPU is straightforward: plug it into the PCIe slot, secure it with screws as usual, and then plumb in the cooling system. The twist-and-click Quick Disconnect Couplings (QDCs) make this process easy, with colour-coded blue and red connectors for cold and warm lines. Thanks to Comino’s no-spill design, the tubes come pre-filled with coolant, so there’s no need to add more after installation. (If you’re curious about the details, Comino provides a step-by-step guide in this video - www.tinyurl.com/Comino-GPU)
As for memory and storage, they’re air-cooled, making their maintenance no different from a standard desktop workstation.
Our system was equipped with 256 GB of high-speed Kingston DDR5 6,400 MHz CL32 REG ECC memory, operating at 4,800 MT/s. All eight slots were fully populated with 32 GB modules, maximising the Threadripper Pro processor’s 8-channel memory architecture for peak performance. For workloads requiring massive datasets, the system can support up to an impressive 2 TB of memory.
The included SSD is a standard 2TB Gigabyte AORUS Gen4, occupying one of the four onboard M.2 slots. However, there’s plenty of scope for performance upgrades. One standout option is the HighPoint SSD7505 PCIe 4.0 x16
4-channel NVMe RAID controller, which can be configured with four 4TB PNY XLR8 CS3140 M.2 SSDs in RAID 0 for blisteringly fast read/write speeds.
The Comino Grando blurs the boundaries between workstation and server. It’s versatile enough to fit neatly under a desk or mount in a 4U rack space (rack-mount kit included).
What’s more, with the Asus Pro WS WRX90E-SAGE SE motherboard’s integrated BMC chip with IPMI (Intelligent Platform Management Interface) for out-of-band management, the Comino Grando can be fully configured as a remote workstation.
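For the curious, out-of-band management via a BMC typically looks something like the sketch below, driven here through the widely used open-source ipmitool utility. The host address and credentials are placeholders, and we haven’t verified how Comino exposes its BMC specifically:

```python
# Sketch of out-of-band control via IPMI, driving the open-source ipmitool
# utility from Python. The BMC address and credentials are placeholders,
# and we haven't verified how Comino exposes its BMC specifically.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.42", "-U", "admin", "-P", "secret"]

# Read the chassis power state over the network: works even if the OS is down
status = subprocess.run(BMC + ["chassis", "status"], capture_output=True, text=True)
print(status.stdout)

# Power the workstation on remotely, e.g. ahead of an overnight render job
subprocess.run(BMC + ["chassis", "power", "on"], check=True)
```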
The Comino Grando is, without question, the fastest workstation we’ve ever tested, leaving air-cooled Threadripper Pro machines from major OEMs in its wake. The only close contender we’ve seen is the Armari Magnetar M64T7, equipped with a liquid-cooled 64-core AMD Ryzen Threadripper 7980X CPU (see our 2024 Workstation Special Report - www.tinyurl.com/WSR24). We wonder how Armari’s 96-core equivalent would compare.
‘‘ With support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop
Perhaps its most compelling feature, however, is its GPU flexibility. The Nvidia RTX 6000 Ada Generation is a staple for high-end workstations, but very few can handle four — a feat typically reserved for dual Xeons. What’s more, with support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop.
However, you’ll need some serious muscle to lift it into the rack — it’s by far the heaviest workstation we’ve ever encountered. It will come as no surprise to learn that the system arrived on a wooden pallet.
While the Comino Grando’s multi-core performance is remarkable, what truly sets it apart from others is that it can operate in near-silence. The sheer level of engineering that has gone into this system is extraordinary, with superb build quality and meticulous attention to detail.
Of course, this level of performance doesn’t come cheap, but it can be seen as a smart investment in sectors like aerospace and automotive, where even the smallest optimisations really count.
Surprisingly, the Comino Grando isn’t significantly more expensive than an air-cooled equivalent. For instance, on dell.co.uk, a Dell Precision 7875 with similar specs currently costs just £1,700 less. However, two GPUs is the maximum and it would almost certainly come second in highly multi-threaded workloads.
What’s the best GPU or CPU for arch viz? Greg Corke tests a variety of processors in six of the most popular tools – D5 Render, Twinmotion, Lumion, Chaos Enscape, Chaos V-Ray, and Chaos Corona
When it comes to arch viz, everyone dreams of a silky-smooth viewport and the ability to render final quality images and videos in seconds. However, such performance often comes with a hefty price tag. Many professionals are left wondering: is the added cost truly justified?
To help answer this question, we put some of the latest workstation hardware
through its paces using a variety of popular arch viz tools. Before diving into the detailed benchmark results on the following pages, here are some key considerations to keep in mind.
Real-time viz software like Enscape, Lumion, D5 Render, and Twinmotion relies on the GPU to do the heavy lifting. These tools offer instant, high-quality visuals
directly in the viewport, while also allowing top-tier images and videos to be rendered in mere seconds or minutes.
The latest releases support hardware ray tracing, a feature built into modern GPUs from Nvidia, AMD and Intel. While ray tracing demands significantly more computational power than traditional rasterisation, it delivers unparalleled realism in lighting and reflections.
GPU performance in these tools is typically evaluated in two ways: Frames Per Second (FPS) and render time. FPS measures viewport interactivity — higher numbers mean smoother navigation and a better user experience — while render time, expressed in seconds, determines how quickly final outputs are generated. Both metrics are crucial, and we’ve used them to benchmark various software in this article.
For your own projects, aim for a minimum of 24–30 FPS for a smooth and interactive viewport experience. Performance gains above this threshold tend to have diminishing returns, although we expect hardcore gamers might disagree. Display resolution is another critical factor. If your GPU struggles to maintain performance, reducing resolution from 4K to FHD can deliver a significant boost.
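To make the resolution trade-off concrete, here is a rough back-of-envelope sketch in Python. The figures are illustrative assumptions, not benchmark results, and assume performance scales with pixel count, which is rarely perfect in practice:

```python
# Rough illustration of how pixel count scales with display resolution.
# Figures are assumptions; real performance rarely scales perfectly
# with pixel count, but it is a useful first approximation.

RESOLUTIONS = {"FHD": (1920, 1080), "QHD": (2560, 1440), "4K": (3840, 2160)}

def pixels(name: str) -> int:
    width, height = RESOLUTIONS[name]
    return width * height

ratio = pixels("4K") / pixels("FHD")       # 4K pushes 4x the pixels of FHD
fps_at_4k = 15                             # hypothetical struggling GPU
naive_fhd_estimate = fps_at_4k * ratio     # upper bound if fill-rate bound

print(f"4K/FHD pixel ratio: {ratio:.1f}")
print(f"Naive FHD estimate from {fps_at_4k} FPS at 4K: {naive_fhd_estimate:.0f} FPS")
```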
It’s worth noting that while some arch viz software supports multiple GPUs, this only affects render times rather than viewport performance. Tools like V-Ray, for instance, scale exceptionally well with multiple GPUs, but to take advantage you’ll need a workstation with adequate power and sufficient PCIe slots to accommodate them.
Nvidia DLSS (Deep Learning Super Sampling) is a suite of AI-driven technologies designed to significantly enhance 3D performance (frame rates) in real-time visualisation tools.
Applications including Chaos Enscape, Chaos Vantage and D5 Render have integrated DLSS to deliver smoother experiences,
and to make it possible to navigate larger scenes on the same GPU hardware.
DLSS comprises three distinct technologies, all powered by the Tensor Cores in Nvidia RTX GPUs:
Super Resolution: This boosts performance by using AI to render higher-resolution frames from lower-resolution
inputs. For instance, it enables 4K-quality output while the GPU processes frames at FHD resolution, saving core GPU resources without compromising visual fidelity.
Ray Reconstruction: This enhances image quality by using AI to generate additional pixels for intensive ray-traced scenes.
Frame Generation: This increases performance by using AI to interpolate and generate extra frames. While DLSS 3.0 could generate one additional frame, DLSS 4.0, exclusive to Nvidia’s upcoming Blackwell-based GPUs, can generate up to three frames between traditionally rendered ones. When these three
technologies work together, an astonishing 15 out of every 16 pixels can be AI-generated. DLSS 4.0 will soon be supported in D5 Render, promising transformative performance gains. Nvidia has demonstrated that it can elevate frame rates from 22 FPS (without DLSS 4.0) to an incredible 87 FPS.
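As a quick sanity check on that claim, the arithmetic works out as follows, assuming a 4x Super Resolution upscale (FHD to 4K) and three generated frames per rendered frame:

```python
# Back-of-envelope check of the "15 out of every 16 pixels" claim,
# assuming a 4x Super Resolution upscale (FHD -> 4K) and three
# AI-generated frames per traditionally rendered frame (DLSS 4.0).

upscale_factor = 4     # GPU renders 1/4 of the output pixels per frame
frames_generated = 3   # three AI frames for every rendered frame

rendered_fraction = (1 / upscale_factor) * (1 / (1 + frames_generated))
print(f"Traditionally rendered: 1/{round(1 / rendered_fraction)} of pixels")
print(f"AI-generated: {1 - rendered_fraction:.4f}")  # 0.9375, i.e. 15/16
```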
Chaos Corona is a CPU-only renderer designed for arch viz. It scales well with more CPU cores. But the 96-core Threadripper Pro 7995WX, despite having six times the cores of the 16-core AMD Ryzen 9 9950X and achieving an overclocked all-core frequency of 4.87 GHz, delivers only three times the performance.
Chaos V-Ray is a versatile photorealistic renderer, renowned for its realism. It includes both a CPU and GPU renderer. The CPU renderer supports the most features and can handle the largest datasets, as it relies on system memory. Performance scales efficiently with additional cores.
V-Ray GPU works with Nvidia GPUs. It is often faster than the CPU renderer, and can make very effective use of multiple GPUs, with performance scaling extremely well. However, the finite onboard memory can restrict the size of scenes. To address this, V-Ray GPU includes several memory-saving features, such as offloading textures to system memory. It also offers a hybrid mode where both the CPU and GPU work together, optimising performance across both processors.
The amount of memory a GPU has is often more critical than its processing power. In some software, running out of GPU memory can cause crashes or significantly slow down performance. This happens because the GPU is forced to borrow system memory from the workstation via the PCIe bus, which is much slower than accessing its onboard memory.
The impact of insufficient GPU memory depends on your workflow. For final renders, it might simply mean waiting longer for images or videos to finish processing. However, in a real-time viewport, running out of memory can make navigation nearly impossible. In extreme cases, we’ve seen frame rates plummet to 1-2 FPS, rendering the scene completely unworkable.
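To put numbers on that, here is an illustrative comparison of peak bandwidths. The onboard figure is an assumption for an RTX A4000-class board; real numbers vary by GPU, platform and PCIe generation:

```python
# Approximate peak bandwidths, illustrating why spilling out of GPU
# memory hurts. Both figures are assumptions for illustration: real
# numbers vary by board, platform and PCIe generation.

pcie4_x16_gb_s = 32       # ~32 GB/s each direction over PCIe 4.0 x16
onboard_gddr6_gb_s = 448  # e.g. an RTX A4000-class board (assumed)

print(f"Onboard memory is roughly {onboard_gddr6_gb_s / pcie4_x16_gb_s:.0f}x "
      "faster than fetching over the PCIe bus")
```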
Fortunately, GPU memory and
processing power usually scale together. Professional workstation GPUs, such as Nvidia RTX or AMD Radeon Pro, generally offer significantly more memory than their consumer-grade counterparts like Nvidia GeForce or AMD Radeon. This is especially noticeable at the lower end of the market. For example, the Nvidia RTX 2000 Ada, a 70W GPU, is equipped with 16 GB of onboard memory.
For real-time visualisation workflows, we recommend a minimum of 16 GB, though 12 GB can suffice for laptops. Anything less could require compromises, such as simplifying scenes and textures, reducing display resolution, or lowering the quality of exported renders.
CPU processing
CPU rendering was once the standard for most arch viz workflows, but today it often plays second fiddle to GPU rendering. That said, it remains critically important for certain software. Chaos Corona, a specialist tool for arch viz, relies entirely on the CPU for rendering. Meanwhile,
Chaos V-Ray gives users the flexibility to choose between CPU and GPU. Some still favour the CPU renderer for its greater control and the ability to harness significantly more memory when paired with the right workstation hardware. For example, while the top-tier Nvidia RTX 6000 Ada Generation GPU comes with an impressive 48 GB of on-board memory, a Threadripper Pro workstation can support up to 1 TB or more of system memory.
CPU renderers scale exceptionally well with core count — the more cores your processor has, the faster your renders. However, as core counts increase, frequencies drop, so doubling the cores won’t necessarily cut render times in half. Take the 96-core Threadripper Pro 7995WX, for example. It’s a powerhouse that’s the ultimate dream for arch viz specialists. But does it justify its price tag — nearly 20 times that of the 16-core AMD Ryzen 9 9950X — for rendering performance that’s only 3 to 4 times faster? As arch viz becomes more prevalent across AEC firms, that’s a tough call for many.
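Amdahl’s law helps explain these diminishing returns. The sketch below is illustrative only: the parallel fraction and clock scaling are assumptions chosen to mirror the roughly 3x result we saw, not values derived from our benchmarks:

```python
# A simple Amdahl's law sketch. The parallel fraction and clock scaling
# below are assumptions chosen to mirror the roughly 3x result we saw;
# they are not measured values from our benchmarks.

def speedup(cores: int, parallel_fraction: float, freq_scale: float = 1.0) -> float:
    serial = 1 - parallel_fraction
    return freq_scale / (serial + parallel_fraction / cores)

sixteen_core = speedup(16, 0.99)                      # Ryzen 9 9950X-class chip
ninety_six_core = speedup(96, 0.99, freq_scale=0.85)  # lower all-core clocks

print(f"16 cores: {sixteen_core:.1f}x over one core")
print(f"96 cores: {ninety_six_core:.1f}x over one core")
print(f"6x the cores buys ~{ninety_six_core / sixteen_core:.1f}x the throughput")
```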
D5 Render is a real-time arch viz tool, based on Unreal Engine. Its ray tracing technology is built on DXR, requiring a GPU with dedicated ray-tracing cores from Nvidia, Intel, or AMD.
The software uses Nvidia DLSS, allowing Nvidia GPUs to boost real-time performance. Multiple GPUs are not supported.
The benchmark uses 4 GB of GPU memory, so all GPUs are compared on raw performance alone. Real-time scores are capped at 60 FPS.
Enscape is a very popular tool for real-time arch viz. It supports hardware ray tracing, and also Nvidia DLSS, but not the latest version.
For testing we used an older version of Enscape (3.3). This had some incompatibility issues with AMD GPUs, so we limited our testing to Nvidia. Enscape 4.2, the latest release, supports AMD. We focused on real-time performance, rather than time to render. The gap between the RTX 5000 Ada and RTX 6000 Ada was not that big. Our dataset uses 11 GB of GPU memory, which caused the software to crash when using the Nvidia RTX A1000 (8 GB).
Lumion is a real-time arch viz tool known for its exterior scenes in context with nature.
The software will benefit from a GPU with hardware ray tracing, but those with older GPUs can still render with rasterisation.
Our test scene uses 11 GB of GPU memory, which meant the 8 GB GPUs struggled. The Nvidia RTX A1000 slowed down, while the AMD Radeon Pro W7500 & W7600 caused crashes. The high-end AMD GPUs did OK against Nvidia, but slowed down in ray tracing.
Twinmotion: our test scene also exceeded 8 GB of GPU memory, massively slowing down the 8 GB GPUs. The 8 GB AMD cards caused the software to crash with the Path Tracer. The high-end AMD GPUs did OK against Nvidia but were well off the pace in path tracing.
Allies and Morrison
Architype
Aros Architects
Augustus Ham Consulting
B + R Architects
Cagni Williams
Coffey Architects
Corstorphine & Wright
Cowan Architects
Cullinan Studio
DRDH
Eight Versa
Elevate Everywhere
5plus
Flanagan Lawrence
Focus on Design
Gillespies
GRID Architects
Grimshaw
Hawkins/Brown
HLM Architects
Hopkins Architects
Hutchinson & Partners
John McAslan & Partners
Lyndon Goode Architects
Makower Architects
Marek Wojciechowski Architects
Morris + Company
PLP Architecture
Plowman Craven
Rolfe Judd
shedkm
Studio Egret West
Via
Weston Williamson + Partners
With dedicated NVIDIA GPUs and AMD Threadripper CPUs we provide workstation-level performance for the most demanding users
More sustainable
our vdesks are 62% less carbon impactful than a similarly specified physical workstation
More secure
centralised virtual resources are easier to secure than dispersed infrastructure
More efficient
deployment and management is vastly quicker than with a physical estate
More agile
our customers are better able to deal with incoming challenges and opportunities
Cost accessible
we are much less expensive and much more transparent than other VDI alternatives
www.inevidesk.com info@inevidesk.com
Architects and designers are increasingly using text-to-image AI models like Stable Diffusion. Processing is often pushed to the cloud, but the GPU in your workstation may already be perfectly capable, writes Greg Corke
Stable Diffusion is a powerful text-to-image AI model that generates stunning photorealistic images based on textual descriptions. Its versatility, control and precision have made it a popular tool in industries such as architecture and product design.
One of its key benefits is its ability to enhance the conceptual design phase. Architects and product designers can quickly generate hundreds of images, allowing them to explore different design ideas and styles in a fraction of the time it would take to do manually.
Stable Diffusion relies on two main processes: inferencing and training. Most architects and designers will primarily engage with inferencing, the process of generating images from text prompts. This can be computationally demanding, requiring significant GPU power. Training is even more resource intensive. It involves creating a custom diffusion model, which can be tailored to match a specific architectural style, client preference, product type, or brand. Training is often handled by a single expert within a firm.
There are several architecture-specific tools built on top of Stable Diffusion or other AI models, which run in a browser or handle the computation in the cloud. Examples include AI Visualizer (for Archicad, SketchUp, and Vectorworks), Veras, LookX AI, and CrXaI AI Image Generator. While these tools simplify access to the technology, and there are
many different ways to run vanilla Stable Diffusion in the cloud, many architects still prefer to keep things local.
Running Stable Diffusion on a workstation offers more options for customisation, guarantees control over sensitive IP, and can turn out cheaper in the long run. Furthermore, if your team already uses real-time viz software, the chances are they already have a GPU powerful enough to handle Stable Diffusion’s computational demands.
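Getting started locally is relatively straightforward. The sketch below uses Hugging Face’s open source diffusers library; the model ID and prompt are examples, and an Nvidia GPU with a CUDA-enabled build of PyTorch is assumed:

```python
# A minimal local Stable Diffusion sketch using Hugging Face's diffusers
# library. The model ID and prompt are examples; an Nvidia GPU with a
# CUDA-enabled build of PyTorch is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.5 checkpoint
    torch_dtype=torch.float16,         # half precision halves the memory footprint
).to("cuda")

image = pipe(
    "street-level view of a timber-framed community library, golden hour",
    num_inference_steps=30,
).images[0]
image.save("concept.png")
```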
While computational power is essential for Stable Diffusion, GPU memory plays an equally important role. Memory usage in Stable Diffusion is impacted by several factors, including:
• Resolution: higher-res images (e.g. 1,024 x 1,024 pixels) demand more memory compared to lower res (e.g. 512 x 512).
• Batch size: generating more images in parallel can decrease time per image, but uses more memory.
• Version: newer versions of Stable Diffusion (e.g. SDXL) use more memory.
• Control: using tools to enhance the model’s functionality, such as LoRAs for fine tuning or ControlNet for additional inputs, can add to the memory footprint.
For inferencing to be most efficient, the entire model must fit into GPU
memory. When GPU memory becomes full, operations may still run, but at significantly reduced speeds as the GPU must then borrow from the workstation’s system memory, over the PCIe bus.
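If you’re unsure whether your GPU has the headroom, PyTorch can report free memory before you commit to a model. A minimal sketch, using the approximate footprints from the UL Procyon tests described below:

```python
# Check free GPU memory before choosing a model (CUDA PyTorch assumed).
# Footprints are the approximate figures from the UL Procyon tests:
# ~4.6 GB for SD 1.5 and ~9.8 GB for SDXL.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()
free_gb = free_bytes / 1024**3

if free_gb > 10:
    suggestion = "SDXL should fit"
elif free_gb > 5:
    suggestion = "stick with SD 1.5"
else:
    suggestion = "reduce resolution or batch size"

print(f"{free_gb:.1f} GB free of {total_bytes / 1024**3:.1f} GB: {suggestion}")
```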
This is where professional GPUs can benefit some workflows, as they typically have more memory than consumer GPUs. For instance, the Nvidia RTX A4000 professional GPU is roughly the equivalent of the Nvidia GeForce RTX 3070, but it comes with 16 GB of GPU memory compared to 8 GB on the RTX 3070.
To evaluate GPU performance for Stable Diffusion inferencing, we used the UL Procyon AI Image Generation Benchmark. The benchmark supports multiple inference engines, including Intel OpenVino, Nvidia TensorRT, and ONNX runtime with DirectML. For this article, we focused on Nvidia professional GPUs and the Nvidia TensorRT engine. This benchmark includes two tests utilising different versions of the Stable Diffusion model — Stable Diffusion 1.5, which generates images at 512 x 512 resolution and Stable Diffusion XL (SDXL), which generates images at 1,024 x 1,024. The SD 1.5 test uses 4.6 GB of GPU memory, while the SDXL test uses 9.8 GB. In both tests, the UL Procyon benchmark generates a set of 16 images, divided into batches. SD 1.5 uses a batch size of 4, while SDXL uses a batch size of 1. A higher
benchmark score indicates better GPU performance. To provide more insight into real-world performance, the benchmark also reports the average image generation speed, measured in seconds per image. All results can be seen in the charts below.
Key takeaways
It’s no surprise that performance goes up as you move up the range of GPUs, although there are diminishing returns at the higher-end. In the SD 1.5 test, even the RTX A1000 delivers an image every 11.7 secs, which some will find acceptable.
The RTX 4000 Ada Generation GPU looks to be a solid choice for Stable Diffusion, especially as it comes with 20 GB of GPU memory. The Nvidia RTX 6000 Ada Generation (48 GB) is around 2.3 times faster, but considering it costs almost six times more (£6,300 vs £1,066) it will be hard to justify on those performance metrics alone.
Stable Diffusion architectural images courtesy of James Gray. Image above and right generated with ModelMakerXL, a custom trained LoRA by Ismail Seleit. Recently, Gray has been exploring Flux, a next-generation image and video generator. He recommends a 24 GB GPU. Follow Gray @ www.linkedin.com/in/james-gray-bim
The real benefits of the higher end cards are most likely to be found in workflows where you can exploit the extra memory. This includes handling larger batch sizes, running more complex models, and, of course, speeding up training.
Perhaps the most revealing test result
comes from SDXL, as it shows what can happen when you run out of GPU memory. The RTX A1000 still delivers results, but its performance slows drastically. Although it’s just 2 GB short of the 10 GB needed for the test, it takes a staggering 13 minutes to generate a single image — 70 times slower than the RTX 6000 Ada.
Of course, AI image generation technology is moving at an incredible pace. Tools including Flux, Runway and Sora can even be used to generate video, which demands even more from the GPU. When considering what GPU to buy now, it’s essential to plan for the future.
With HP’s new solution, workstation GPUs become shareable across the network, helping firms get the most out of their IT resources for AI training and inferencing, writes Greg Corke
Boosting your workstation’s performance by tapping into shared resources is nothing new.
Distributed rendering, through applications like V-Ray and KeyShot, allows users to harness idle networked computers for faster processing.
Z by HP Boost is a new take on this idea, with a specific focus on AI. The technology is primarily designed to deliver GPU power to those who need it, on-demand, by giving remote access to idle GPUs on the network. In short, it can turn a standard PC or laptop into a powerful GPU-accelerated workstation, extending the reach of AI to a much wider audience and dramatically reducing processing time.
HP is primarily pitching Z by HP Boost at data scientists and AI developers for training or fine-tuning large language models (LLMs). However, Z by HP Boost is also well suited to inferencing, the application of the trained model to generate new results.
“We want companies, like architects, to both create their AI, fine tune their models, create custom models — those are big projects — but also create with AI, with the diffusion programs,” says Jim Nottingham, SVP & division president personal systems advanced compute and solutions, HP.
AI image generation
Z by HP Boost can be used for many different AI workflows. It currently supports PyTorch and TensorFlow, two of the most widely used open-source deep learning frameworks.
In AEC and product development, one of the most interesting use cases is Stable Diffusion, an AI image generator that can be used for early-stage design ideation. The AI model can be used to rapidly generate images – photorealistic or stylised – from a simple prompt. It can also serve as a shortcut for traditional rendering, generating visuals based on an existing composition, such as a sketch or a screen grab of a CAD or BIM model.
To get the most out of Stable Diffusion, design and architecture firms often fine-tune or create custom models tailored to specific styles. Training models is highly computationally demanding and is typically handled by a specialist within the firm. This person may already have access to a powerful workstation, equipped with multiple high-end GPUs. However, if that’s not the case, or they need more GPU power to accelerate a process that can take days, Z by HP Boost could be used to do the heavy lifting.
Inferencing in Stable Diffusion, where a pre-trained AI model is used to generate new images, is applicable to a much wider audience. While less computationally demanding than training, inferencing still needs serious GPU power, especially in terms of GPU memory, which often goes beyond what’s available in the GPUs typically used for CAD and BIM modelling in tools like Solidworks and Autodesk Revit.
Having access to GPUs on-demand is particularly valuable, given that Stable Diffusion is used mainly during the early design phases, meaning high-powered GPUs might be massively underutilised for most of the year.
Even if a local entry-level GPU does work with Stable Diffusion, generating an image can take several minutes (as demonstrated on page WS30). But with a high-end GPU like the Nvidia RTX 6000 Ada Generation this can be done in seconds. During the early design phase — especially when collaborating with clients and project teams — this speed advantage can be hugely beneficial, allowing for rapid iteration.
Z by HP Boost makes it easier for more users to tap into this power without needing to equip everyone with a supercharged workstation.
How Z by HP Boost works
Firms can designate any number of GPUs on their network to be shared. This could be four high-performance Nvidia RTX 6000 Ada Generation or Nvidia A800 GPUs in a dedicated high-end workstation like the HP Z8 Fury G5, or a single Nvidia RTX 2000 Ada Generation GPU in a compact system like the HP Z2 Mini G9. The only requirement is that the GPUs are housed in an HP Z Workstation.
Firms may choose to set aside one or more dedicated GPU workstations as a shared resource. Alternatively, to make the most out of the sometimes-vast numbers of GPUs scattered throughout an organisation, they can add GPUs from the workstations of end users. Those GPUs don’t have to be completely idle; they can also be shared when the owner is only doing light tasks. As Nvidia GPUs and drivers are good at multitasking it’s feasible, in theory, to model in CAD or BIM while someone else sets the same GPU to work in Stable Diffusion.
The Z by HP Boost software is installed on both the client and host machines. There are no restrictions on the client device — the PC or laptop just needs to run either Windows or Linux.
It’s very easy to configure a GPU for sharing. On the host device, simply select a GPU and assign it to the appropriate pool. Once that’s done, anyone with the necessary permissions has access. All they must do is choose the GPU from a list and select the application they want to run.
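Once attached, the shared GPU should appear to frameworks like any local device. A minimal, hypothetical sanity check from PyTorch (the Boost workflow itself requires no code changes):

```python
# Sanity check that an attached GPU is visible to PyTorch. A shared GPU
# grabbed via Z by HP Boost should enumerate like any local CUDA device
# (hypothetical session; the tool itself requires no code changes).
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA device visible - check the GPU pool assignment")
```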
Once they’ve grabbed a GPU, it’s essentially theirs until they release it. However, the owner of the host machine always retains the right to reclaim the GPU if they want.
To ensure resources are used efficiently, GPUs are automatically returned to the pool after a period of inactivity. The default timeout is four hours, but this can be changed. A warning will appear on the
client device before the GPU is reallocated.
If the host workstation has multiple GPUs inside, each can be assigned to a different user. Currently, it’s one remote user per GPU, but there are plans for GPU slicing, which will enable multiple users to share the power of a single GPU simultaneously.
IT managers can configure the sharing however they want and, as Nottingham explains, this process can be aided by monitoring how resources are used. “We would like to work with customers to profile what’s their typical usage and design their sharing pool based on that usage.
“And maybe they can change it over time – they set up this one for night-time, they set up this one for daytime, or this one for Wednesdays – there’s going to be a lot of flexibility that we deliver.”
Nottingham believes Z by HP Boost is most interesting when multiple workstations are connected – many to many. “You just create a fabric, so you have more [GPUs] available, all the time.” This, he says, gives you a big performance boost without having to double your fleet.
Z by HP Boost doesn’t have to be used locally. As many of the AI workflows are not sensitive to latency, it also works well remotely. However, the ideal solution for remote working, as Nottingham explains, is with remote graphics software HP Anyware. In theory, one could have an architect or engineer remoting into an HP Z2 Mini in the office for bread-and-butter CAD or BIM work, who could then use Z by HP Boost to access an idle GPU on the same network to run Stable Diffusion.
Z by HP Boost offers an interesting proposition for design and engineering firms looking to roll out AI tools like Stable Diffusion to a broader audience.
By providing on-demand access to high-performance workstation GPUs, it allows firms to efficiently maximise their resources, utilising hardware that might otherwise sit idle under a desk, especially at night.
The alternative is equipping everyone with high-end GPUs or running everything in the cloud. Both options are expensive and cloud can also bring unpredictable costs.
Keeping things local also helps firms protect intellectual property, keeping proprietary designs and the models that are trained on their proprietary designs behind the firewall.
Additionally, Z by HP Boost enables teams to pool resources for AI development, offering a flexible solution for demanding projects.
Although Z by HP Boost is currently focused on AI, we see no reason why it couldn’t be used for other GPU-intensive tasks, such as reality modelling, simulation, or rendering. The absence of ‘AI’ in the product’s name may even suggest that this broader use is on the roadmap.
However, this would require buy-in from each software developer and could become complicated for workflows typically handled by dedicated clusters with fast interconnects.
It will be very interesting to see how this technology develops.
What’s the best CPU, memory and GPU to process complex reality modelling data? Greg Corke tests some of the latest workstation technology in Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture from Epic Games
Reality modelling is one of the most computationally demanding workflows in Architecture, Engineering and Construction (AEC). It involves the creation of digital models of physical assets by processing vast quantities of captured real-world data using technologies including laser scanning, photogrammetry and simultaneous localisation and mapping (SLAM).
Reality modelling has numerous applications, including providing context for new buildings or infrastructure, forming the basis for retrofit projects, or comparing “as-built” with “as-designed” for construction verification.
While there’s a growing trend to process captured data in the cloud, desktop processing remains the preferred method. Cloud can be costly, and uploading vast amounts of data — sometimes terabytes — is a significant challenge, especially when
working from remote construction sites with poor connectivity.
Processing reality capture data can take hours, making it essential to select the right workstation hardware. In this article, we explore the best processor, memory and GPU options for reality modelling, testing a variety of workflows in three of the most popular tools — Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.
Most AEC firms have tight hardware budgets and it’s easy to spend money in the wrong places, sometimes for very little gain. In some cases, investing in more expensive equipment can even slow you down!
Test kit
Below is a list of the kit we used for testing. All machines ran Windows 11 Pro (build 26100).
• Armari Magnetar workstation with AMD Ryzen 7 9800X3D CPU (8 cores), 96 GB DDR5 5,600 MT/s memory and AMD Radeon Pro W7500 GPU (see page WS20).
• Scan 3XS workstation with AMD Ryzen 9 9950X CPU (16 cores), 64 GB DDR5 5,600 MT/s memory or 128 GB DDR5 3,600 MT/s memory and Nvidia RTX 4500 Ada Generation GPU (see page WS16).
• Scan 3XS workstation with Intel Core Ultra 9 285K CPU (8 P-cores and 16 E-cores), 64 GB DDR5 5,600 MT/s memory and Nvidia RTX 2000 Ada Generation GPU (see page WS17).
• HP Z6 G5A workstation with AMD Threadripper Pro 7975WX CPU (32 cores), 128 GB DDR5 5,200 MT/s memory and Nvidia RTX A6000 GPU (see www.aecmag.com/workstations/review-hp-z6-g5-a).
• Comino Grando workstation with overclocked AMD Threadripper Pro 7995WX CPU (96 cores), 256 GB DDR5 4,800 MT/s memory and Nvidia RTX 6000 Ada Generation GPU (see page WS22).
We also tested a range of GPUs, including the Nvidia RTX A1000 (8 GB), RTX A4000 (16 GB), RTX 2000 Ada (16 GB), RTX 4000 Ada (20 GB), RTX 4500 Ada (24 GB) and RTX 6000 Ada (48 GB).
Leica Cyclone 3DR
Leica Cyclone 3DR is a multi-purpose reality modelling tool, used for inspection, modelling and meshing. Processing is done predominantly on the CPU and several tasks can take advantage of multiple CPU cores. Some tasks, including the use of machine learning for point cloud classification, are also optimised for GPU.
For testing we focused on four workflows: scan-to-mesh, analysis, AI classification and conversion.
Scan-to-mesh: Compared to point clouds, textured mesh models are much easier to understand and easier to share, not least because the files are much smaller.
In our ‘scan-to-mesh’ test, we record the time it takes to convert a dataset of a building — captured with a Leica BLK 360 scanner — into a photorealistic mesh model. The dataset comprises a point cloud with 129 million points and accompanying images.
The process is multi-threaded but, as with many reality capture workflows, more CPU cores does not necessarily mean faster results. Other critical factors that affect processing time include the amount of CPU cache (a high-speed on-chip memory for frequently accessed data), memory speed, and AMD Simultaneous Multithreading (SMT), a technology similar to Intel Hyper-Threading that enables a single physical core to execute multiple threads simultaneously. During testing, system memory usage peaked at 25 GB, which meant all test machines had plenty of capacity.
The most unexpected outcome was the 8-core AMD Ryzen 7 9800X3D outperforming all its competitors. It not only beat the 16-core AMD Ryzen 9 9950X and Intel Core Ultra 9 285K (8 performance cores and 16 efficient cores), but the multicore behemoths as well. With the 96-core AMD Threadripper Pro 7995WX it appears to be a classic case of “too many cooks [cores] spoil the broth”!
The AMD Ryzen 7 9800X3D is a specialised consumer CPU, widely considered to be the fastest processor for 3D gaming thanks to its advanced 3D V-Cache technology. It boasts 96 MB of L3 cache, significantly more than comparative processors. This allows the CPU to access frequently-used data quicker, rather than having to pull it from slower system memory (RAM).
But we expect that having lots of fast cache is not the only reason why the AMD Ryzen 7 9800X3D comes out top in our
scan-to-mesh test – after all, Threadripper Pro is also well loaded, with the top-end 7995WX having 384 MB of L3 cache which is spread across its 96 cores. To achieve a high number of cores, modern processors are made up of multiple chiplets or CCDs. In the world of AMD, each CCD typically has 8 cores, so a 16-core processor has two CCDs, a 32-core processor has four CCDs, and so on.
Communication between cores in different CCDs is inherently slower than cores within the same CCD, and since the AMD Ryzen 7 9800X3D is made up of a single CCD that has access to all that L3 cache, we expect this gives it an additional advantage. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare. Both processors feature 128 MB of L3 cache and comprise two CCDs.
Simultaneous Multithreading (SMT) also has an impact on performance. With the AMD Ryzen 9 9950X, for example, disabling SMT in the BIOS cut processing time by as much as 15%. However, it had the opposite effect with the AMD Ryzen 7 9800X3D, increasing processing time by 32%.
Memory speed also has an impact on performance. The AMD Ryzen 9 9950X processor was around 7% slower when configured with 128 GB RAM running at 3,400 MT/sec than it was with 64 GB RAM running at the significantly faster 5,600 MT/sec.
Analysis: In our analysis test we compare a point cloud to a BIM model, recording the time it takes to calculate a colour map that shows the deviations between the two datasets. During testing, system memory usage peaked at 19 GB.
The process is multi-threaded, but certain stages only use a few cores. As with scan-to-mesh, more CPU cores does not necessarily mean faster results, and CPU cache, SMT and memory speed also play an important role. Again, the AMD Ryzen 7 9800X3D bagged first spot, completing the test 16% faster than its closest rival, the Intel Core Ultra 9 285K.
The big shock came from the 16-core AMD Ryzen 9 9950X, which took more than twice as long as the 8-core AMD Ryzen 7 9800X3D to complete the test. The bottleneck here is SMT, as disabling it in the BIOS, so each of the 16 cores only performs one task at a time, slashed the test time from 91 secs to 56 secs.
Getting good performance out of the Threadripper Pro processors required even more tuning. Disabling SMT on its own had a minimal impact, and it was only when the Cyclone 3DR executable was pinned to a single CCD (8 cores, 16 threads) that times came down. But this level of optimisation is probably not practical, not least because all workflows and datasets are different.
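For readers who want to experiment with this kind of pinning themselves, process affinity can be set programmatically. A sketch using the psutil library; the executable name and the assumption that the first CCD maps to logical processors 0-15 are hypothetical and system-specific:

```python
# Pinning a process to a single CCD with psutil. The executable name and
# the assumption that the first CCD maps to logical processors 0-15
# (8 cores x 2 SMT threads) are hypothetical and system-specific.
import subprocess
import psutil

FIRST_CCD = list(range(16))  # logical CPUs on the first chiplet (assumed)

proc = subprocess.Popen(["Cyclone3DR.exe"])        # hypothetical executable
psutil.Process(proc.pid).cpu_affinity(FIRST_CCD)   # Windows and Linux only
```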
AI classification: Leica Cyclone 3DR features an AI-based auto-classification algorithm designed to ‘intelligently
classify’ point cloud data. The machine learning model has been trained on large amounts of terrestrial scan data and comes with several predefined models for classification.
The process is built around Nvidia CUDA and therefore requires an Nvidia GPU. However, the CPU is still used heavily throughout the process. We tested a variety of Nvidia RTX professional GPUs using an AMD Ryzen 9 9950X-based workstation with 64 GB of DDR5 memory.
The test records the time it takes to classify a point cloud of a building with 129 million points using the Indoor Construction Site 1.3 machine learning model. During testing, system memory usage peaked at 37 GB and GPU memory usage at a moderate 3 GB.
The big takeaway from our tests is that the CPU does the lion’s share of the processing. The Nvidia RTX GPU is essential, but only contributes modestly to the overall time. Indeed, there was very little difference between most of the Nvidia RTX GPUs and even the entry-level Nvidia RTX A1000 was only 22% slower than the significantly more powerful Nvidia RTX 4500 Ada.
Conversion: This simple test converts a Leica LGSx file into native Cyclone 3DR. The dataset comprises a point cloud of a highway alignment with 594 million points. During testing, system memory usage peaked at 11 GB.
As this process is largely single threaded it’s all about single-core CPU performance. Here, the Intel Core Ultra 9 285K takes first place, closely followed by the AMD Ryzen 9 9950X in second. With a slightly slower peak frequency the AMD Ryzen 7 9800X3D comes in third. In this case, the larger L3 cache appears to offer no benefit.
The Threadripper Pro 7975WX and Threadripper Pro 7995WX lag behind — not only because they have a lower frequency, but also because they are based on AMD’s older ‘Zen 4’ architecture, and so have a lower Instructions Per Clock (IPC).
Leica Cyclone Register 360
Leica Cyclone Register 360 is specifically designed for point cloud registration, the process of aligning and merging multiple point clouds into a single, unified coordinate system.
For testing, we used a 99 GB dataset of the Italian Renaissance-style ‘Breakers’ mansion in Newport, Rhode Island. It includes a total of 39 setups from a Leica RTC360 scanner, around 500 million points and 5K panos. We recorded the time it takes to import and register the data.
The process is multi-threaded, but to ensure stability the software allocates a specific number of threads depending on how much system memory is available. In 64 GB systems, the software allocates five threads while for 96 GB+ systems it’s six.
The Intel Core Ultra 9 285K processor led by some margin, followed by the 16-core AMD Ryzen 9 9950X and 96-core Threadripper Pro 7995WX. Interestingly, this was the one test where the 8-core AMD Ryzen 7 9800X3D was not one of the best performers. However, as the GPU does a small amount of processing, and Leica Cyclone Register 360 has a preference for Nvidia GPUs, this could be attributed to the workstation having the entry-level AMD Radeon Pro W7500 GPU.
Notably, memory speed appears to play a crucial role in performance. The AMD Ryzen 9 9950X, configured with 128 GB of 3,400 MT/sec memory, was able to utilise six threads for the process, but was 20%
slower than when configured with 64 GB of faster 5,600 MT/sec memory, which only allocated five threads.
RealityCapture from Epic Games
RealityCapture, developed by Capturing Reality — a subsidiary of Epic Games — is an advanced photogrammetry software designed to create 3D models from photographs and laser scans. Most tasks are accelerated by the CPU, but there are certain workflows that also rely on GPU computation.
Image alignment in RealityCapture refers to the process of analysing and arranging a set of photographs or scans in a 3D space, based on their spatial relationships. This step is foundational in photogrammetry workflows, as it determines the relative positions and orientations of the cameras or devices that captured the input data.
We tested with two datasets — scanned by R-E-A-L.iT, Leo Films and Drone Services Canada Inc — both available from the RealityCapture website.
The Habitat 67 Hillside Unreal Engine sample project features 3,199 images totalling 40 GB, 1,242 terrestrial laser scans totalling 90 GB, and uses up 60 GB of system memory during testing.
The Habitat 67 Sample, a subset of the larger dataset, features 458 images totalling 3.5 GB, 72 terrestrial laser scans totalling 3.35 GB, and uses up 13 GB of system memory.
The 32-core Threadripper Pro 7975WX took top spot in the large dataset test, with the AMD Ryzen 9 9950X, AMD Ryzen 7 9800X3D and 96-core AMD Threadripper Pro 7995WX not that far behind. Again, SMT needed to be disabled in the higher core count CPUs to get the best results.
Memory speed appears to have a huge impact on performance. The AMD Ryzen 9 9950X processor was around 40% slower when configured with 128 GB of RAM running at 3,400 MT/sec than it was with 64 GB running at the significantly faster 5,600 MT/sec.
Import laser scan: This process imports a collection of E57 format laser scan data and converts it into a RealityCapture point cloud with the .lsp file extension. Our test used up 13 GB of system memory.
Since this process relies heavily on single-threaded performance, single-core speed is what matters most. The Intel Core Ultra 9 285K comes out on top, followed closely by the AMD Ryzen 9 9950X. With
a slightly lower peak frequency, the AMD Ryzen 7 9800X3D takes third place. The Threadripper Pro 7975WX and 7995WX fall behind, not just due to lower clock speeds but also because they’re built on AMD’s older Zen 4 architecture, which has a lower Instructions Per Clock (IPC).
Reconstruction is a very compute intensive process that involves the creation of a watertight mesh. It uses a combination of CPU and Nvidia GPU, although there’s also a ‘preview mode’ which is CPU only.
For our testing, we used the Habitat 67 Sample dataset at ‘Normal’ level of detail. It used 46 GB of system memory and 2 GB of GPU memory.
With a variety of workstations with different processors and GPUs, it’s hard to pin down exactly which processor is best for this workflow — although the 96-core Threadripper Pro 7995WX workstation with Nvidia RTX 6000 Ada GPU came out top. To provide more clarity on GPUs, we tested a variety of add-in boards in the same AMD Ryzen 9 9950X workstation. There was relatively good performance scaling across the mainstream Nvidia RTX range.
The combination of AMD’s ‘Zen 5’ architecture, fast DDR5 memory, a single chiplet design, and lots of 3D V-Cache, looks to make the AMD Ryzen 7 9800X3D
processor a very interesting option for a range of reality modelling workflows — especially for those on a budget. The AMD Ryzen 7 9800X3D becomes even more interesting when you consider that it’s widely regarded as a chip for gamers. It is not offered by any of the major workstation OEMs — only specialist system builders like Armari.
However, before you rush out and part with your hard-earned cash, it is important to understand a few things.
1) The AMD Ryzen 7 9800X3D processor currently has a practical maximum memory capacity of 96 GB, if you want fast 5,600 MT/sec memory. This is an important consideration if you work with large datasets. If you run out of memory, the processor will have to swap data out to the SSD, which will likely slow things down considerably.
The AMD Ryzen 7 9800X3D can support up to 192 GB of system memory, but it will need to run at a significantly slower speed (3,600 MT/sec). And as our tests have shown, slower memory can have a big impact on performance.
2) AMD recently announced two additional ‘Zen 5’ 3D V-Cache processors. It will be interesting to see how they compare. The 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D both have slightly more L3 cache (128 MB) than the 8-core Ryzen 7 9800X3D (96 MB). However, they are made up of two separate chiplets (CCDs), so communication between the cores in different CCDs could slow things down.
3) Most of the reality models we used for testing are not that big, with the exception of the Habitat 67 dataset, which we used to test certain aspects of RealityCapture. Larger datasets require more memory. For example, reconstructing the full Habitat 67 RealityCapture dataset on the 96-core Threadripper Pro 7995WX workstation used 228 GB of system memory at peak, out of the 256 GB in the machine — and took more than half a day to process. Workstations with less system memory will likely have to push some of the data into temporary swap space on the SSD. Admittedly, as modern PCIe NVMe SSDs offer very fast read-write performance, this is not necessarily the colossal
bottleneck it used to be when you had to swap out data to mechanical Hard Disk Drives (HDDs).
4) Multi-tasking is often important for reality modelling, as the processing of data often involves several different stages from several different sources. At any given point you may need to perform multiple operations at the same time, which can put a massive strain on the workstation. As the AMD Ryzen 7 9800X3D processor has only 8 cores and is effectively limited to 96 GB of fast system memory, if you throw more than one task at the machine at a time things will likely slow down considerably. Meanwhile, Threadripper Pro is much more scalable, as there are processors with 12 to 96 cores, and the platform supports up to 2 TB of DDR5-5200 ECC memory.
For a crude multi-tasking test, we performed two operations in parallel — alignment in RealityCapture and meshing in Leica Cyclone 3DR. The Threadripper Pro 7995WX workstation completed both tests in 200 secs, while the AMD Ryzen 7 9800X3D came in second in 238 secs. We expect this lead would grow with larger datasets or more concurrent processing tasks.
In summary, your choice of processor will depend greatly on the size of the datasets you work with, and the complexity of your workflows. For lighter tasks, the AMD Ryzen 7 9800X3D looks to be an excellent budget choice, but for more complex projects, especially those that require multi-tasking, Threadripper Pro should deliver a much more flexible and performant platform. Of course, you still need to choose between the different models, which vary in price considerably and, as we have found in some of our tests, fewer cores is sometimes better.
‘‘ Two of our test workflows rely on Nvidia GPUs, but because they share some of the workload with the CPU, the performance gains from more powerful GPUs are less pronounced compared to entirely GPU-driven tasks like ray trace rendering
Two of our tests — Reconstruction in RealityCapture and AI classification in Leica Cyclone 3DR — rely on Nvidia GPUs. However, because these processes share some of the workload with the CPU, the performance gains from more powerful GPUs are less pronounced compared to entirely GPU-driven tasks like ray trace rendering. There’s a significant price gap between the Nvidia RTX A1000 (£320) and the Nvidia RTX 6000 Ada Generation (£6,200). For reconstruction in RealityCapture, investing in the higher-end model is probably easier to justify, as our tests showed computation times could be cut in two. However, for AI classification in Leica Cyclone 3DR, the performance gains are much smaller, and there seem to be diminishing returns beyond the Nvidia RTX 2000 Ada Generation. While larger datasets may deliver more substantial benefits, GPU memory — a key advantage of the higher-end cards — appears to be less crucial.
Dell has simplified its product portfolio, with the introduction of three new PC categories – Dell for ‘play, school and work’, Dell Pro for ‘professional-grade productivity’ and Dell Pro Max ‘for maximum performance’
The rebranding spells an end to the company’s long-standing Precision workstation brand, which will be replaced by Dell Pro Max. It also signals a move away from the term “workstation”. On Dell’s website “workstation” appears only in fine print, as the company now favours high-performance, professional-grade PC when describing Dell Pro Max.
To those outside of Dell, however, Dell Pro Max PCs are unmistakably workstations, with ISV certification and traditional workstation-class components, including AMD Threadripper Pro processors, Nvidia RTX graphics, highspeed storage, and advanced memory.
Dell has also simplified the product tiers within each of the new PC categories. Starting with the Base level, users can upgrade to the Plus tier for more scalable performance or the Premium tier, which Dell describes as delivering the ultimate in mobility and design.
“We want customers to spend their valuable time thinking about workloads they want to run on a PC, the use cases they’re trying to solve a problem for, not what sub brand, not understanding and figuring out our nomenclature, which at times, has been a bit confusing,” said Jeff Clarke, vice chairman and COO, Dell.
To coincide with the rebrand, Dell has introduced two new base level mobile workstations – the Dell Pro Max 14 and 16 – built around Intel Core Ultra 9 (Series 2) processors and Nvidia RTX GPUs. The full portfolio with the Plus and Premium tier, including AMD options, will follow.
■ www.dell.com
IMSCAD Services has launched WaaS, a ‘Workstation as a Service’ offering built on Lenovo workstations and Equinix data centres.
The global service comprises private cloud solutions and rentable workstations, on a per user, per month basis. Contracts run from one to 36 months.
According to IMSCAD, the service is up to 40% cheaper than high-end instances from the public cloud, and the
workstations perform faster. Users get a 1:1 connection to a dedicated workstation featuring a CPU up to 6.0 GHz and a GPU with up to 24 GB of VRAM.
“Public cloud pricing is far too high when you want to run graphical applications and desktops,” said CEO Adam Jull. “Our new service is backed by incredible Lenovo hardware and the best remoting software from Citrix, Omnissa (formerly VMware Horizon) and TGX to name a few.”
■ www.imscadservices.com
Nvidia has announced the consumer-focused RTX 50-Series line-up of Blackwell GPUs.
The flagship GeForce RTX 5090 comes with 32 GB of GDDR7 memory, which suggests that professional Blackwell Nvidia RTX boards, expected to follow soon, could go beyond the current 48 GB maximum of the Nvidia RTX 6000 Ada Generation. ■ www.nvidia.com
HP is gearing up for the Spring 2025 launch of its first-ever 18-inch mobile workstation, which has been engineered to provide up to 200W TDP to deliver more power for next-generation discrete graphics.
The laptop will feature ‘massive memory and storage’, will be nearly the same size as a 17” mobile workstation and will be cooled by 3x turbo fans and HP Vaporforce Thermals.
■ www.hp.com/z
Nvidia has announced Project Digits, a tiny system designed to allow AI developers and researchers to prototype large AI models on the desktop.
The ‘personal AI supercomputer’ is powered by the GB10 Grace-Blackwell, a shrunk-down version of the Arm-based Grace CPU and Blackwell GPU system-on-a-chip (SoC).
■ www.nvidia.com