TIME TO HIT START ON AI

Introducing Onshape CAM Studio: The First True Cloud-Native CAD/CAM Solution

Accelerating Design and Manufacturing Teams with Advanced Cloud-Native Solutions

ONSHAPE CAM STUDIO DELIVERS:

Faster DFM reviews with real-time collaboration between design and manufacturing

Faster tool path generation and simulations with cloud HPC at your fingertips

Reduced total time to market with a single source of truth for a combined CAD/CAM workflow

EDITORIAL

Editor

Stephen Holmes

stephen@x3dmedia.com

+44 (0)20 3384 5297

Managing Editor

Greg Corke

greg@x3dmedia.com

+44 (0)20 3355 7312

Consulting Editor

Jessica Twentyman jtwentyman@gmail.com

Consulting Editor

Martyn Day martyn@x3dmedia.com

+44 (0)7525 701 542

Staff Writer

Emilie Eisenberg emilie@x3dmedia.com

DESIGN/PRODUCTION

Design/Production

Greg Corke

greg@x3dmedia.com

+44 (0)20 3355 7312

ADVERTISING

Group Media Director

Tony Baksh tony@x3dmedia.com

+44 (0)20 3355 7313

Deputy Advertising Manager

Steve King steve@x3dmedia.com

+44 (0)20 3355 7314

US Sales Director

Denise Greaves denise@x3dmedia.com

+1 857 400 7713

SUBSCRIPTIONS

Circulation Manager

Alan Cleveland alan@x3dmedia.com

+44 (0)20 3355 7311

ACCOUNTS

Accounts Manager

Charlotte Taibi charlotte@x3dmedia.com

Financial Controller

Samantha Todescato-Rutland sam@chalfen.com

ABOUT

Welcome to the 150th issue of DEVELOP3D! (There’ll be cake later.) In the modern age of gnat-like attention spans, multi-screen content consumption and chaotic work/life balances, our continued existence is testament to the longstanding appeal of magazines. A paper magazine is still a wonderful thing to hold and even more fun to look back on years later. Back when we published our first issue in 2008, AI was still a sci-fi movie plotline. Fast-forward to today, and it’s the major focus of this issue, as new AI-powered tools look to enhance, automate and optimise all aspects of our workflows.

As part of this special AI-themed issue, we take a look at an array of these new tools and what they have to offer the different stages of product development workflows. We also hear from NASA about how the space agency is automating its generative design tools with AI. We hear from executives at the major CAD vendors about how they view AI and speak to leading educators about how they cover these new tools in the courses they’re delivering to members of tomorrow’s workforce.

Elsewhere, we look at how we’re all going to need to keep our IP secure in a world where AI tools produce masses of ideation; learn how one design studio has gone ahead and built its own AI tools; and we speak to a visualisation artist who feels that AI isn’t quite ready to replace the fine eye of a human, at least not just yet.

We’ve also got first looks at Onshape’s new cloud-native machining package, CAM Studio, and the Sony XR headset developed in collaboration with Siemens Digital Industries.

If that wasn’t enough, there’s also a huge, not-to-be-missed workstation special, packed with expert reviews and insight to power you through 2025.

So, here’s to the next 150 issues. Of course, by the time Issue 300 rolls around, we’ll all have been replaced by AI – and I, for one, will welcome our robot overlords.

DEVELOP3D is published by X3DMedia, 19 Leyden Street, London E1 7LE, UK

T. +44 (0)20 3355 7310

F. +44 (0)20 3355 7319

X3DMedia

The future of product development technology

Automate SOLIDWORKS manufacturing processes & sell digitally using DriveWorks

DriveWorks is flexible and scalable. Start for free, upgrade anytime. DriveWorksXpress is included free inside SOLIDWORKS or start your free 30 day trial of DriveWorks Solo.

DriveWorks Pro

30 DAY FREE TRIAL

DriveWorksXpress

Entry level design automation software included free inside SOLIDWORKS®

Entry level SOLIDWORKS part and assembly automation

Create a drawing for each part and assembly

Find under the SOLIDWORKS tools menu

Modular SOLIDWORKS® automation & online product configurator software

One time setup

DriveWorks Solo

SOLIDWORKS® part, assembly and drawing automation add-in

Automate SOLIDWORKS parts, assemblies and drawings

Generate production-ready drawings, BOMs & quote documents automatically

Enter product specifications and preview designs inside SOLIDWORKS

Free online technical learning resources, sample projects and help file

Sold and supported by your local SOLIDWORKS reseller

Complete SOLIDWORKS part, assembly and drawing automation

Automatically generate manufacturing and sales documents

Configure order specific designs in a browser on desktop, mobile or tablet

Show configurable design details with interactive 3D previews

Integrate with SOLIDWORKS PDM, CRM, ERP, CAM and other company systems

Scalable and flexible licensing options

Sold and supported by your local SOLIDWORKS reseller

Set up once and run again and again. No need for complex SOLIDWORKS macros, design tables or configurations.

Save time & innovate more

Automate repetitive SOLIDWORKS tasks and free up engineers to focus on product innovation and development.

Eliminate errors

DriveWorks rules-based SOLIDWORKS automation eliminates errors and expensive, time-consuming design changes.

Integrate with other systems

DriveWorks Pro can integrate with other company systems, helping you work more efficiently and effectively.

Connect sales & manufacturing

Validation ensures you only offer products that can be manufactured, eliminating errors and boosting quality.

Intelligent guided selling

Ensure your sales teams / dealers configure the ideal solution every time with intelligent rules-based guided selling.

NEWS

DEVELOP3D LIVE returns on 26 March 2025, Intel launches new ‘Arrow Lake’ laptop processors, Innoactive drives XR work at Volkswagen and more

FEATURES

Comment: Greg Mark of Backflip on the AI opportunity

Comment: Sara El-Hanfy of Innovate UK on AI adoption

Visual Design Guide: Nike x Hyperice boot

COVER STORY AI IN PRODUCT DEVELOPMENT

Expert Panel: How do the Big Four view AI?

Interview: Ryan McClelland of NASA

Top of the class: Teaching AI on design courses

DIY AI: Vital Auto builds its own toolset

Protecting innovation and IP at Yamaha Motors

Interview: Spencer Livingstone of S.VIII

First look: Sony and Siemens bring XR to life

Onshape unveils an impressive CAM offering

THE LAST WORD

As this 150th issue of DEVELOP3D demonstrates, AI has come a long way – but it still has much further to go, with huge implications for product designers and engineers, writes Stephen Holmes

DEVELOP3D LIVE RETURNS TO THE UK ON 26 MARCH WITH A PACKED SPEAKER LINE-UP

» Top executives from Catia, Solidworks, Siemens, Autodesk and Shapr3D will share the stage with exciting designers and engineers at our one-of-a-kind annual show

A stellar line-up of technology executives from some of the leading CAD companies has been announced for DEVELOP3D LIVE, to be held on 26 March 2025 in Coventry, UK.

DEVELOP3D LIVE is the UK's leading conference and exhibition celebrating design, engineering and manufacturing technologies and how they bring world-leading products to market faster.

Solidworks CEO Manish Kumar is returning to the event, bringing the latest updates and news about Solidworks fresh from the 3DExperience World user event straight to our UK audience.

Also representing the Dassault Systèmes family will be Catia CEO Olivier Sappin, with the engineering software company making its DEVELOP3D LIVE stage debut this year. Sappin’s exciting presentation focuses on the future of engineering and how human imagination can successfully leverage AI and generative design to sustainably redefine the next generation of products.

Siemens Digital Industries Software will be represented on the main stage by Oliver Duncan, senior product manager for Siemens Cloud Solutions, giving attendees the benefit of his insights into the latest developments from NX, Solid Edge and Designcenter.

And Autodesk’s Clinton Perry will discuss the latest developments from the company’s product design and manufacturing portfolio, including Fusion.

Shapr3D CEO István Csanády is also heading to the UK, with an update on the increasing number of features that Shapr3D offers professional designers.

The main stage of the event has always seen exciting presentations from companies that we feel are leading the way in using new technologies in their design processes. In this respect, 2025 will be no different, with keynotes covering topics as diverse as renewable energy, AI, automotive design, consumer electronics and others yet to be announced.

The other stages will be the venue for panel discussions, looking at XR for product design, the rise of new head mounted display technologies, and software built to offer designers new ways to approach design and collaborate with team members around the world. Another panel will look at computational design tools and their relationship with digital manufacturing technologies, as well as how to take generative design technology and make its output more accessible to wider swathes of the industry.

Allowing attendees the chance to try out many of the technologies discussed via hands-on demos has always been a focal point of the exhibition space at DEVELOP3D LIVE, providing an exciting learning arena for attendees.

Our free-to-attend, single-day event incorporates three conference streams running alongside the exhibition space, all based at the Warwick Arts Centre. This venue is located in the heart of the West Midlands, a powerhouse area for British product development, automotive engineering and advanced manufacturing.

Within just over one hour’s reach of London and the aerospace engineering hubs of Derby and Cranfield, and within two hours of Bristol, Manchester and Liverpool, the event consistently attracts a huge range of industries and professionals.

DEVELOP3D LIVE show director Martyn Day says the 2025 event is lining up to be an unmissable episode in the show’s history. “As always, DEVELOP3D LIVE is the only place where you can see the major design software companies on the same stage, giving attendees the best chance to see where the newest and most exciting developments are being realised,” said Day.

DEVELOP3D readers can find out more about speakers as they’re announced, and also obtain their free access pass from the event website. www.develop3dlive.com

Join us at DEVELOP3D LIVE to hear about the latest developments in design, engineering and manufacturing technologies

INTEL LAUNCHES NEW 'ARROW LAKE' LAPTOP PROCESSORS

Intel has introduced new ‘Arrow Lake’ laptop processors, which should make their way into mobile workstations later this year. The new offerings include six Intel Core Ultra 200HX series processors aimed at high-performance laptops, and five Intel Core Ultra 200H series processors for mainstream thin-and-lights.

Intel’s new processors prioritise general processing performance over AI capabilities. Compared to last year’s ‘Lunar Lake’ Intel Core Ultra 200V series, the new chips feature significantly more CPU cores, but come with a much less powerful Neural Processing Unit (NPU). With 13 TOPS compared to 48 TOPS, the NPU falls short of meeting the requirements for Microsoft Copilot+ compatibility.

The integrated GPU in the Intel Core Ultra 200HX is also less powerful than the one that comes with the Intel Core Ultra 200V. This suggests that in a mobile workstation, Intel’s flagship laptop processor is most likely to be paired with a discrete GPU, such as Nvidia RTX.

Key features of the Intel Core Ultra 200HX and H series mobile processors include up to 24 cores for the HX-series (eight Performance-cores (P-cores) and 16 Efficient-cores (E-cores)) and up to 16 cores for the H-series (six P-cores, eight E-cores and two low-power E-cores).

According to Intel, the flagship Intel Core Ultra 9 285HX processor offers up to 41% better multi-threaded performance, as tested in the Cinebench 2024 rendering benchmark, compared to the previous generation Intel Core i9-14900HX. www.intel.com

Innoactive helps drive XR work at Volkswagen

Innoactive XR Streaming and the Nvidia Omniverse platform are enabling automotive giant Volkswagen to validate photorealistic digital twins using Apple’s Vision Pro XR headsets for the first time.

By harnessing the power of the OpenUSD format and Omniverse, combined with Nvidia’s newly released spatial streaming, Innoactive’s XR Streaming brings industrial digital twins to spatial devices anywhere, anytime.

Innoactive executives say that its product’s one-click XR streaming streamlines XR workflows with instant access to immersive 3D environments. As well as supporting the Apple Vision Pro, it streams to browsers and to standard VR headsets for more cost-effective deployments.

Shapeways takes stake in Thangs

The resurrection of online 3D printing service provider Shapeways has continued with its announcement of the acquisition of a controlling share in Thangs, a collaborative 3D file sharing and discovery community.

Shapeways executives say that the acquisition marks the second step of a relaunch plan created by its new management team to drag the company back from its July 2024 bankruptcy.

The Thangs platform hosts 3D search with more than 24 million 3D printable models in its index, along with tools, IP protection and membership options. www.shapeways.com

Nvidia advances on neural rendering

Nvidia DLSS 4, the latest release of the suite of neural rendering technologies that use AI to boost 3D performance, will be supported in visualisation software – D5 Render, Chaos Vantage and Unreal Engine – from February 2025.

The headline feature of DLSS 4, Multi Frame Generation, brings revolutionary performance versus traditional native rendering. It is an evolution of Single Frame Generation, which was introduced in DLSS 3 to boost frame rates with Nvidia Ada Generation GPUs using AI. www.nvidia.com

Stratasys gets funding boost

Fresh from a tumultuous year of acquisition rumours, Stratasys has announced a $120 million investment by Israeli private equity fund Fortissimo Capital.

The financial boost aims to help Stratasys strengthen its balance sheet and be better positioned to capture new market opportunities.

With this transaction, Fortissimo will hold approximately 15.5% of Stratasys’ issued and outstanding ordinary shares. Fortissimo managing partner Yuval Cohen will join the Stratasys board of directors, replacing a Stratasys director who had yet to be named at the time of going to press.

www.stratasys.com

Intel's new 'Arrow Lake' processors prioritise general processing performance over AI capabilities

DAIMLER TRUCK ADDS REMOTE SPARE PART PRODUCTION

Daimler Truck is collaborating with 3D Systems, Oqton and Wibu-Systems on a new initiative aimed at producing spare parts for buses on demand.

This will enable Daimler Buses-certified 3D printing partners to produce parts as and when they are needed, avoiding supply chain bottlenecks and reducing delivery times by a claimed 75%.

Digital rights management capabilities, meanwhile, will protect all intellectual property (IP) belonging to Daimler Buses.

The process will rely on the expertise of 3D Systems in 3D printing technology, materials and applications. Its former software arm Oqton will provide process know-how, while Wibu-Systems will provide digital rights and IP management technology.

According to executives at Daimler Truck, the collaboration will enable it to manufacture spare parts locally for various underhood and cabin interior applications. These are likely to include pins, covers and inserts.

Commercial truck, bus and touring coach companies, meanwhile, will likely realise substantial indirect cost savings if vehicle downtime due to maintenance can be reduced.

“The digital rights management enables us to shorten service times through decentralised production in order to further maximise productivity and revenue for commercial vehicle companies,” explained Ralf Anderhofstadt, head of the Center of Competence in Additive Manufacturing at Daimler Truck and Buses.

“In addition, the sensible use of industrial 3D printing results in reducing the complexity in the supply chains.”

Participating bus companies or service bureaus can join Daimler Buses’ network of 3D printing certified partners once they have purchased a license for 3DXpert through Daimler Buses’ Omniplus 3D-Printing License eShop.

The prepare-and-print licence enables the customer or service partner to decrypt design files relating to the part needed for a specific repair job and only produce that part in the exact quantity required.

Currently, the solution is designed to 3D print parts on 3D Systems’ SLS 380.

In the future, Daimler Buses anticipates that service bureaus will also be able to connect any of 3D Systems’ polymer or metal 3D printers to the solution. www.3dsystems.com

Bus operators will soon be able to print their own parts in the event of a breakdown

ROUND UP

Siemens Digital Industries Software has announced updates to its Simcenter portfolio that look set to benefit customers in the automotive and aerospace industries, since they focus on aerostructure analysis, electric motor design, gear optimisation and smart virtual sensing. www.sw.siemens.com

Autodesk has announced the latest updates to VRED in its 2025.3 release, implementing a new colour management system for visual quality called OpenColorIO, to give users improved control over the colour in a scene and introduce new colour grading methods. www.autodesk.com

Rapid Fusion has announced Medusa, the first UK-built large-format hybrid 3D printer, backed by Innovate UK, along with project partners Rolls-Royce, AI Build and the National Manufacturing Institute Scotland (NMIS). The gantry-style machine is expected to cost in the region of £500,000. www.rapidfusion.co.uk

HP bets big on AMD Ryzen AI Max PRO

HP has launched two new workstations built around the AMD Ryzen AI Max PRO, a new single chip processor with up to 16 ‘desktop-class’ Zen 5 CPU cores, RDNA 3.5 integrated GPU, and an integrated XDNA 2 Neural Processing Unit (NPU) for AI.

Both the HP Z2 Mini G1a desktop workstation and 14-inch HP ZBook Ultra G1a mobile workstation support up to 128 GB of unified 8000MT/s LPDDR5X memory, of which 96 GB can be assigned exclusively to the GPU. As HP points out, this is equivalent to the VRAM in two high-end desktop-class GPUs.

According to AMD, having access to large amounts of memory allows the processor to handle ‘incredibly large, high-precision AI workloads’, referencing the ability to run a 70-billion parameter large language model.

HP ZBook Ultra G1a and HP Z2 Mini G1a workstations are expected to be available in Spring 2025. Pricing will be announced closer to availability. www.hp.com/z

The new HP ZBook Ultra G1a mobile workstation is built around AMD's newest single-chip processor

3D printing service provider 3DPrintUK, which was acquired by the TriMech Group and Solid Solutions back in 2023, has announced a £2 million internal investment that will boost its HP Multi Jet Fusion capacity by up to 60% and enable it to reduce prices by around 20%. www.3dprint-uk.co.uk

Sweden-based Sandvik has announced its acquisition of ShopWare, MCAM Northwest and the CAD/CAM solutions business line of OptiPro Systems, three US-based resellers of CAM solutions in its Mastercam network, in order to serve customers in the region and expand its customer base. www.sandvik.com

To change the world, we need to massively expand the number of people who are able to design new products and bring their ideas to reality. AI-powered 3D tools hold the key to driving that expansion, writes Backflip CEO Greg Mark

Today, the number of people who have ideas that could transform the world dwarfs the number of people who have the practical skills and ability to turn those ideas into reality.

In all of the narratives we’ve seen over the past decade about the need for reskilling or upskilling professionals amid technological upheaval, we have never had a discussion about the power and potential of helping more people gain the ability to create 3D models, and of adding easier on-ramps to CAD, which I believe to be one of the most transformative technologies of the last several decades.

I’ve spent 20 years in manufacturing and design. I’ve been to automotive supplier plants where you’ve got five overworked engineers working in CAD, supporting 800 incredibly mechanically talented manufacturing employees who are operating and maintaining production lines.

The latter group always finds creative and resourceful ways to support and improve operations, but they just don’t do 3D design.

You’ll talk to them, and they have a torrent of pent-up ideas for how to make specific areas of a manufacturing line faster, with reduced scrap rates and better processes.

But those ideas don’t become reality, in large part because these workers can’t easily create a 3D model to plug into the infrastructure of modern manufacturing, like CNC machining and 3D printing.

But imagine if those 800 automotive workers were each empowered to more easily learn to design the fixtures, tools, or other process aids that they know would make their job easier.

We’d make better cars, faster, and at lower cost. And then plant engineers – who are spread thin working on many projects at once – could better focus, and more responsively support other areas of the plant and more technically complex initiatives.

FUNDAMENTAL SHIFT

I see a fundamental shift coming in the way people are introduced to 3D design, one that unlocks a future that has always lived in our imagination.

Other industries have broken down barriers for innovation and unlocked human imagination through education, tools and training. And the last five years have laid the foundation for us to build AI-powered tools that simply couldn’t exist before.

We don’t need to change our current 3D modelling programs, but we can help people adopt them faster and get started more easily.

At Backflip AI we’re building AI tools for the physical world, and my hope is that we can be a key part of this sea change.

With our first product, we’re already seeing people who have never touched traditional 3D design tools take an idea in the form of a photo, a sketch, or a text description, and turn it into a 3D mesh that can be 3D-printed into existence.

It’s not perfect, but we’ve heard from many users that have, for the first time in their lives, been able to get started creating what they previously could only imagine.

This is a stepping stone, and a way to help more people get familiar with 3D design and make the transition into existing, deeply featured CAD tools easier. In that way, we hope more people can help build the technologies that make our collective lives better.

ACCESSIBLE 3D DESIGN

In a prior life, I focused on transforming the downstream side of manufacturing, or how you get from an existing 3D model to a physical part faster and easier, by developing advanced carbon fibre and metal 3D printing technologies.

Other industries have broken down barriers to innovation and unlocked human imagination. We don’t need to change our 3D modelling programmes, but we do need to help people adopt them faster and get started more easily

At Backflip, we’re now focused on the front end: making 3D design more accessible.

My team, which comprises some really smart software and mechanical engineers, has an intimate knowledge of how physical things are designed and manufactured, and the techniques for how to train cutting-edge AI models.

We’ve created state-of-the-art AI models based on our proprietary data, the world’s biggest synthetic data set of 3D models. Our models are constantly improving, following the exponential technology growth curves pioneered by AI companies like Midjourney and OpenAI. And we will keep getting better as we continue to refine our technology and grow our data set.

I believe we are on the threshold of a massive transformation in human innovation which stands to benefit all of us. What is important is the imagination in our heads, and technology, training and education can help lead the way into this next frontier.

ABOUT THE AUTHOR: Greg Mark is an American inventor, engineer and entrepreneur and the founder and CEO of Backflip AI, which has built a foundational model for 3D generative AI that turns ideas into reality. Prior to Backflip, Mark invented the process for carbon fibre 3D printing and founded 3D printing company Markforged. www.backflip.ai

Barriers to AI adoption threaten to stifle the powerful UK creative sector, but help is at hand through new programmes that aim to assist product designers to unlock their AI ambitions, writes Sara El-Hanfy of Innovate UK

Since its explosion into the mainstream, artificial intelligence (AI) has begun to permeate almost every corner of the workforce.

While the adoption of AI has immense potential to streamline processes in the workplace, some sectors – particularly the creative sector – have been slower to absorb this new technology into their processes.

From developing graphics to 3D design, AI can automate mundane tasks and assist in the design process, increasing profits and unlocking the full potential of the various fields in the design landscape.

The creative sector in the UK is a powerhouse, employing over 2.3 million highly skilled individuals who have honed their crafts over many years. While AI boasts impressive capabilities in generating text, images and video, the adoption of this technology into the creative sector does not come without challenges.

It is no surprise that some creatives rightly feel threatened by this emerging technology. Companies looking to cut costs could deploy generative AI to develop and design content for their business, replacing the need to hire designers and laying off existing creatives on their teams.

But AI technologies cannot replace the intuitive thinking of human creativity. Instead, AI should be embraced as a tool to augment creatives, expanding possibilities and accelerating workflows.

When it comes to generative 3D modelling, AI technologies certainly hold potential but are unable to fully replace the skills and nuance of human designers working in the field.

While there are fewer concerns about job security in the 3D design sector for now, a shift in the way 3D designers work is necessary to take full advantage of AI’s capability to automate as many processes as possible, increasing productivity in the sector as a whole.

The traditional 3D design process is complex and laborious, with designers requiring a high level of skill. To maximise efficiency in this sector, AI can be integrated to speed up the time-consuming aspects of design, such as duplicating repetitive design elements.

AI can identify potential issues and suggest process adjustments by analysing data from previous manufacturing runs. Similarly, AI can be used to create complex geometries that were previously impossible or difficult to achieve with traditional design methods.

BUILDING FOUNDATIONS

While the creative sector stands to greatly benefit from the adoption of AI, uptake has been rather slow. Creative industries are vital to the UK economy, contributing £109 billion in 2021.

Without adequate investment from the government to overcome barriers to AI adoption, the full potential of the creative sector will remain untapped.

From training programmes to grants for overcoming financial constraints relating to AI adoption, government investment may be key in upskilling creatives on how to utilise AI most efficiently in the creative field to maximise creative output.

The Innovate UK BridgeAI programme, which is delivered by Innovate UK and its partners The Alan Turing Institute, BSI, Digital Catapult and STFC Hartree Centre, is ensuring that the foundations exist to support responsible technology development.

BridgeAI is fostering collaborations across the ecosystem through accelerator programmes, workshops, and toolkits to ensure those working in the creative sector are as equipped as they can be for this shift towards an AI-enabled future.

The Innovate UK BridgeAI programme has already demonstrated how targeted investment and collaboration can have a positive impact on low-adoption sectors.

Without adequate investment from government, the full potential of the UK creative sector will remain untapped

One such collaboration was between Lancaster University’s School of Engineering and Batch.Works, a design and additive manufacturing business. Batch.Works’ design process for additive manufacturing was both manual and time-intensive. Funded by Innovate UK, this collaboration sought to revolutionise the design process for additive manufacturing through integration of AI.

By implementing AI-based support systems, Batch.Works minimised manual tweaking, reduced trial and error and enhanced overall product quality. In reducing the number of design iterations required, designers were able to focus more on creativity and on accelerating the design-to-production timeline. This resulted in improved efficiency, reduced costs, and enhanced competitiveness.

While AI does not have the potential to replace 3D artists, and should not do so, this technology will undoubtedly change how all 3D artists and those in the creative sector work in the future. With government investment and training in AI systems, the creative sector holds massive potential to boost its contributions to the UK economy. Through adopting this new tech, creatives are uniquely positioned to maximise creative output in the face of strict deadlines, acquiring new and valuable skills in the process.

ABOUT THE AUTHOR: Sara El-Hanfy is Head of Artificial Intelligence and Machine Learning at Innovate UK, a part of UK Research & Innovation. She works to identify, support and accelerate high-growth-potential innovation in the UK, based on cutting-edge AI and data research and technology. www.ukri.org/councils/innovate-uk/

VISUAL DESIGN GUIDE NIKE X HYPERICE BOOTS

» Sportswear brand Nike has teamed up with California-based wellness company Hyperice to release the Nike x Hyperice boot, a mobile solution for recovery trialled by some of the world’s best athletes, including LeBron James and Sha’Carri Richardson

POWER UP

A battery pack situated in the insole of each boot powers three different levels of compression and heat, giving athletes the choice of running each shoe individually or synchronising them via a control button

PRESSURE’S ON

Once switched on, the boot starts working instantly, inflating and heating up the foot before massage begins, and inflating and deflating it periodically to apply and remove pressure

PERFECT FIT

Velcro straps secure the boot to the foot, and each size corresponds with two traditional American shoe sizes (for example, men’s size 10-12). In future, Nike plans to offer individual sizes

SUITED UP

A vest is also part of the Nike x Hyperice collaboration and uses instant heating and cooling technology to regulate athletes’ body temperatures in both warm-up and recovery

SIT BACK AND RELAX

Dynamic air compression massage is available through controls on the boots, warming up muscles before training sessions and easing foot pain afterwards

FLIGHT CLUB

The boots are also intended for use on flights, with the Normatec system promoting blood flow by raising fluid through the legs, reducing puffiness and acting as a high-tech version of compression socks

HOT STUFF

Dual-air Normatec bladders are bonded to warming elements that evenly distribute heat throughout the boot, driving it deep into the muscle to speed recovery

Development of the boot is ongoing and Nike and Hyperice continue to gather athlete feedback www.nike.com www.hyperice.com

AI IN PRODUCT DEVELOPMENT

» AI technology looks set to impact every aspect of product development: dreaming up concepts, building parts, and optimising and automating the way they are manufactured. With new AI-enhanced tools being launched now on a regular basis, and their forerunners rapidly improving in leaps and bounds thanks to increased training, this is a fast-moving trend that leaves many prospective customers struggling to keep up. To bring you up to speed, we’ve picked out a few notable companies working in some of the key sectors

TEXT-TO-CAD 3D GENERATION

While text-to-image technology is already widespread and popular in 2D work, 3D parametric models present a trickier challenge. That said, start-ups specialising in professional tools for building physical products are able to make some impressive boasts about their capabilities.

ADAM

Adam is the newest tool on this list, but only by a matter of days, such is the speed at which this market moves. Its USP is that it offers a more conversational platform than competitors, with a text-to-CAD interface that feels more like firing messages to a customer service chatbot than simply typing into the void. Sliders are a big help in controlling measurements, such as setting a wall thickness or a corner radius, for example. From there, parts can be exported straight to 3D printing or pushed into CAD. Adam is a member of Y Combinator’s Winter 2025 cohort, and looks set to benefit from its involvement with the start-up accelerator, which is backed by the VC firm behind Airbnb, Dropbox, Twitch and many more. www.adamcad.com

The idea is straightforward enough: the user enters as much information about the desired end result as they can succinctly describe. What they get back is a 3D model of a part that can be edited further or exported to your CAD package.

BACKFLIP

Taking in user input text prompts or images, Backflip then generates multiple designs as high-res renders, before allowing users to edit further and generate a 3D model that can be exported as an STL or OBJ file. Backflip’s development continues to be rapid, with its founders, the team behind 3D printer company Markforged, looking to build the ultimate design tool for creating ‘real things’ and having ambitious plans to link their technology to existing CAD tools, parts catalogues and more.

www.backflip.ai

HP AI3D

HP’s AI3D Design Services software – internally named 3D Foundry – has come along just when the 3D printing mainstay is looking to unlock new AM applications. This simple toolset uses text-to-CAD to generate a design, add a 3D lattice and then export it for 3D printing using HP technology. HP executives have been keen to explain that they see businesses using the tools to support product customisation. While professional designers would design the key elements of a product in order to comply with standards and regulations, customers could contribute to other design elements, such as the lampshade on a light fitting.

www.tinyurl.com/text-to-3D

ZOO AI

The Zoo Text-to-CAD modeller generates B-rep surfaces, enabling 3D models to be imported to and edited in any existing CAD software as STEP files. That’s in contrast to other text-to-3D examples that generate meshes – which, once imported, can’t always be edited in a useful manner. Much of Zoo’s achievement is down to its hard work on the infrastructure behind Text-to-CAD, which uses its own proprietary Geometry Engine, Design API and Machine Learning API to analyse training data and generate CAD files. The capability for a CAD model to be edited naturally makes generated models far more useful. www.zoo.dev/text-to-cad
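For readers curious what driving a text-to-CAD service programmatically might look like, here is a minimal sketch: a text prompt is posted to an HTTP endpoint and the returned STEP file is saved for editing in any CAD package. The endpoint URL, request fields and credential shown are illustrative assumptions, not documentation of Zoo's (or any vendor's) actual API.

# Hypothetical sketch of a text-to-CAD request. The endpoint, request
# fields and response handling are illustrative assumptions, not a
# documented API.
import requests

API_URL = "https://example.com/v1/text-to-cad"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                        # placeholder credential

prompt = "A 40 mm diameter flanged bushing with four M4 clearance holes"

# Ask the service to generate B-rep geometry and return it as a STEP file
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": prompt, "output_format": "step"},
    timeout=120,
)
response.raise_for_status()

# Save the generated geometry so it can be opened and edited in CAD
with open("generated_part.step", "wb") as f:
    f.write(response.content)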

MACHINING

With the manufacturing sector rarely out of the headlines, securing the future of how parts, moulds and dies are built is important. Compared to today’s often error-prone, time-consuming and expensive process,

CLOUDNC – CAM ASSIST

Single-click manufacturing is still some way off for more complex designs, but CloudNC is making bold steps with CAM Assist, especially in 3-axis machining. This is available as a plug-in for Autodesk Fusion, Mastercam, Siemens NX CAM and Solid Edge CAM Pro. Once you’ve opened your CAD model and added in your tool library, machine, stock material and machining mode (3-axis or 3+2), CAM Assist creates the machining strategies, feeds and tool speeds. You then export the G-code and manufacture the part on your machine. CloudNC executives claim that its software takes users 80% of the way. Its ability to balance cycle time, surface finish and tool life makes it an impressive piece of technology. www.cloudnc.com

INFINITFORM

InfinitForm aims to increase the manufacturability of a design long before any metal gets cut, by offering tailored design constraints for parts in relation to the manufacturing processes you have available. Built by the founder of generative design tool ParaMatters, which focused on topology-optimised designs for 3D printing, InfinitForm creates optimal, machine-friendly prismatic models from the design stage onwards. The software imports CAD and acts as an AI co-pilot during design stages, taking in the material and tool constraints upfront, leading to reduced cycle times and minimal post-processing requirements. www.infinitform.com

TOOLPATH

Another tool for 3-axis machining that integrates seamlessly with Autodesk Fusion’s manufacturing workspace, Toolpath uses AI to help automate time-consuming tasks like design-for-machinability analysis, quoting and CAM programming. Its ability to analyse parts, estimate costs and create a machining strategy aims to reduce bottlenecks and repetitive tasks, freeing up the time and talents of machinists to tackle more complex problems. To help those machinists, Toolpath has constructed a guide that it says can teach anyone to use the plug-in in just 30 minutes. www.toolpath.com

ADDITIVE MANUFACTURING

The complexities associated with additive manufacturing (AM) come to a head at the build preparation stage. The specific requirements of every process, machine and material mean that getting this right (let alone optimised) can be a tough challenge. Expensive failures are too often the end result. But AI can deliver better parts, more complicated builds and fewer errors through its ability to optimise build supports, proactively predict failures and more. Some tools are even able to correct errors ‘on the fly’, increasing print yields and pushing AM further in the direction of mass manufacturing.

1000KELVIN AMAIZE 2.0

The second release of AMaize offers automated design printability checkers, optimised build preparation and intelligent adjustments to scan strategies and process parameters. By populating its Virtual Shop Floor with all the machines and materials you have, the software can suggest strategies that increase productivity and reduce print fails. AMaize’s tools for printability check the design before it heads to production and help identify distortions, overheating and shrinkage. A physics-based approach means that during build prep, AI generates support structures only where necessary, reducing material usage as well as the time it would take an experienced operator to manually add those structures. www.1000kelvin.com

AI BUILD

If you’re printing big parts, then AI Build may be for you. A hybrid platform for robotic and LFAM 3D printing, it combines AM with tailor-made CNC strategies, all through the same user interface. Different parts will require different slicers to put down material layers and create toolpaths for a build. AI Build can assist with all that, before enabling you to package your process into a ‘recipe’ that can be used again and again. Many tasks can be carried out using the AI co-pilot, which makes recommendations and offers smart setting defaults. Upgrades beyond the standard version include build process monitoring and defect detection. AI Build is available either on the cloud or on-premise for added off-grid security. www.ai-build.com

MATTA – GREY 1

Founded at the University of Cambridge by a team of AI and manufacturing engineers, London-based Matta is looking to improve build quality, increase 3D print yields and help automate the factory of the future. Equipped with advanced error detection, parameter prediction and optimisation capabilities, its Grey software learns from every print produced within its global network of 3D printers. Its initial toolset in Grey 1 is more machine learning than AI, but offers an exciting indicator for where the tech is heading. Able to correct errors on the fly, it can ensure quality prints first time, reducing failure rates and making AM more applicable to volume production. The software creates digital twins of printed parts using G-code, projecting quality measurements onto each extruded line, ready for inspection. www.matta.ai/greymatta

VISUALISATION

Slick visualisation tools have always been important in product development. But AI’s impact in this area looks likely to be utterly game-changing: stacks of realistic renderings, delivered as early as the concepting stage, which use only quick sketches and a handful of prompts for reference, and take mere seconds of processing time.

KREA AI

Krea is not a single piece of software, but an AI platform designed to support a wide range of visual content. Its tools span a whole array of professions, but there are some interesting options in there for product designers. Like other generative platforms, Krea allows users to upload sketches and generate renders, but its wider toolset includes features like Realtime, which enables you to turn 2D images of objects into 3D assets, or take XR sculpting tools and use the output to build detailed concepts. It might take a while to master everything that Krea has to offer, but going on what the product’s online user community is achieving, it can clearly unleash a whole world of creativity. www.krea.ai

NEWARC

Having recently shifted its focus more towards soft goods, footwear and fashion, NewArc is still an excellent tool for generating and editing product images. A simple sketch and some loose prompts can produce incredibly detailed renderings with realistic lighting across materials. The user interface is designed with no particular skills in mind, so you don’t need to have created a single product rendering in the past to jump straight into what this tool can offer. Among its array of preconfigured styles, it even has a clay rendering mode for traditionalists. All paid accounts come with full privacy included, so that all of your sketches, prompts and images remain solely yours and can only be accessed from your personal account. www.newarc.ai

PROMEAI

Prome is a slightly more straightforward generative AI tool than many others. You enter a sketch, generate a rendering, and then expand it with prompts to adjust CMF and even produce short video clips. This is not to say it’s short of functions: it has a fully stocked list of AI tools for crafting the perfect image, including AI background generation, and some focused particularly on tweaking scene lighting. Since realistic, physics-based lighting is something that is often lacking in AI-generated content, any user who knows their way around photorealistic rendering software will appreciate the extra levels of control that Prome offers. www.promeai.pro

In a world that demands new ideas faster than ever before, generating concepts using generative AI software is fast becoming a norm, far overtaking the compilation of mood boards and lookbooks.

AI enables designers to explore multiple ideas, materials and colourways. The better tools offer intuitive control of edits, even allowing you to train the AI on your own design history and maintain design language as you hit generate. It’s not all about ideation, either; these tools work brilliantly for those who might not be the best sketchers, or for anyone wanting to bring a last-second idea to life in a client meeting.

VIZCOM

Vizcom is now a feature of seemingly every automotive studio’s software arsenal and offers designers an increased level of control over the sketch and edit workflow. The simple user interface lets users decide how much of their original sketch will influence the overall look, while a library of render styles can be selected, from soft pastels to eye-popping brights. It’s very clear that Vizcom has been designed by a team steeped in product design, with two features standing out: first, its Palettes feature, which allows users to train its AI with their own design language by uploading up to 30 example images; and second, Workbench, an infinite whiteboard where teams can collaborate, explore and elaborate on ideas. www.vizcom.ai

SIMULATION

AI is unlocking the upper echelons of what modern simulation software can achieve. Often, it’s doing so by speeding up the process, automating simulation setups down to a single click in place of hours of manual preparation. Or, it’s delivering improvements by analysing huge amounts of data to highlight the points of a simulation that most matter.

LUMINARY

Luminary is a simulation-as-a-service, high-fidelity CFD solver that relies on cloud-based compute power and makes the CAE workflow faster and easier, thanks to its Lumi AI co-pilot. Lumi AI reduces the time that engineers need to spend on simulation set-up, so that they can instead prioritise analysing results and optimising designs. Part of this toolset is Lumi Mesh Adaptation, which automatically generates physics-informed meshes that learn from existing solutions to deliver fast and accurate results. Users can also take advantage of a minimal initial mesh generation feature, which reduces the mesh size required in order to achieve a target level of accuracy. Ultimately, that means a more cost-effective, faster simulation. www.luminarycloud.com

Along the way, it’s opening up simulation tools to more engineers, enabling them to perform rigorous testing earlier on in the design workflow, so that experts need only inspect the higher-level details.

MONOLITH

Monolith’s goal is to get 100,000 engineers using its AI tools to cut their product development cycles in half by 2025. It’s a bold target for any start-up to set, but the company insists it is confident in its no-code AI approach and machine learning tools that can be used to build pipelines and interactive ‘notebooks’ for loading, exploring and transforming data for AI. Refined over hundreds of AI projects to find hidden errors, streamline test plans and build better products using users’ own historic data, Monolith’s tools allow for the testing and simulation of materials, component designs and manufacturability, helping companies streamline their processes and focus on the most impactful areas. www.monolithai.com

NXAI

With its AI4Simulation toolset, NXAI offers particle-based simulations for modelling multi-fluid systems and fluid-material interactions. AI4Simulation’s first simulation project, NeuralDEM, is an end-to-end deep learning alternative for modelling industrial processes such as fluidised bed reactors or silos. Aimed at scaling industrial and manufacturing processes, NeuralDEM captures physics over extended time frames. With deep learning, AI4Simulation looks to scale to millions of particles, but has an eye on the future, involving scales that surpass human understanding. www.nx-ai.com

PHYSICSX

PhysicsX claims that, once set up, its tools can simulate complex systems in seconds, automatically iterating through millions of designs to optimise performance while respecting manufacturing and other constraints. Any engineer can drive all parts of the workflow, even those parts traditionally managed by dedicated experts, with AI automating the most time-consuming tasks, minimising handovers and accelerating iterations. PhysicsX LGM-Aero is a geometry and physics model that has been pre-trained on more than 25 million meshes, representing more than 10 billion vertices, as well as a corpus of tens of thousands of CFD and FEA simulations generated using Siemens Simcenter STAR-CCM+ and Nastran software. www.physicsx.ai

HOW DO THE BIG FOUR VIEW AI?

» They may have been slower to bring AI tools to market than some start-ups, but the big software companies have large R&D budgets, sizable workforces and masses of data at their disposal. Here, we get a glimpse from four senior executives on how they see AI and what might be in the pipeline at their companies

JEFF KINDER // AUTODESK

EVP of product development and manufacturing solutions

I believe we’ll see AI automate mundane tasks more and more across all industries, but certainly in design and manufacturing. In the next year, I think we’ll see AI surface even more in design to accelerate the creative process.

I believe we’ll see a fork in the AI road. The fascinating and novel capabilities that we’ve become accustomed to thinking about as defining AI, such as natural language prompts yielding fantastic images, essays and code, will continue to advance. At the same time, very practical, somewhat mundane AI capabilities will emerge. And these advancements will save us immense amounts of time. Soon, busy-work tasks that used to take hours or even days will be completed with a simple click of a button. And we’ll get that time back to do the creative work that humans excel at, while the computer focuses on computing. With disconnected, disparate products, organisations can only achieve incremental productivity gains. To see breakthrough productivity gains, data must flow seamlessly and be connected end to end. The productivity increases will be a welcome accelerant in and of themselves, but they’re also the fuel for building more AI-powered automation tools.

MANISH KUMAR // DASSAULT SYSTÈMES

CEO of Solidworks

With 95% of companies anticipating that AI will improve product development, it’s increasingly important for organisations to ensure their AI systems are built on a foundation grounded in reliable, high-quality data. I believe 2025 will be crucial in laying a data-driven foundation that allows AI tools to thrive.

Many organisations lack a centralised platform to collect and manage the data that is vital for training AI models, which may introduce potential risks, such as inaccuracies in automated designs, lack of transparency in decision-making and potential security vulnerabilities that require careful management. Without a clear picture of all the data at your disposal, AI models also won’t function to their highest capability. This involves know-how in addition to hard knowledge. Key learnings are critical to input into AI models to complement facts and ensure that users do not make the same mistakes twice.

AI will be the determining factor that ultimately streamlines and optimises design capabilities, but 2025 can be considered a bridging year for ensuring that AI models have all the data in place, so that organisations can maximise their potential.

TODD TUTHILL // SIEMENS DIGITAL INDUSTRIES SOFTWARE

VP of aerospace and defense

Using natural language to ask questions and interact with software allows new users to learn and use complex software more quickly and with less need for expert guidance. At the same time, experienced users can seamlessly automate workflows and speed up tasks. However, while industrial AI chatbots represent an important first step on the path of bringing AI into professional software, they should not be mistaken for the end goal.

In the increasingly complex and digitally integrated world of modern design and manufacturing, AI is uniquely positioned to connect people and technology in a way that plays to the strengths of both with AI moving many of the burdens of professional software away from the user. Over the course of the coming months and years, AI will not just be a novelty in industry, but a critical technology that will upend the way that products are designed, manufactured and interacted with.

Companies that fail to adopt AI will find themselves unable to keep up in a fast-paced world where competitors have continued their digital transformation maturity journey – a journey that will lead to an autonomous, intuitive and integrated design process far surpassing anything that exists today.

I think AI is critical. I think our users must feel like product developers did when plastics or carbon fibre came along. It’s not just a better way of doing things; it’s a whole new set of tools that make you redefine problems.

AI allows you to approach problems differently, and so the baseline is important, not only for study but also for releasing products. Just like with the first plastic product, you can’t know what it’s really like until you use it. We need to build reps, to understand how to deliver and leverage the cloud-native solutions of Onshape.

Our system captures every single action as a transaction. If you drill a hole, undo it, or modify a feature, that’s all tracked. We have more data than any other system about a user’s activity, so we don’t need to go out and collect data manually. This gives us a huge advantage in training AI applications.

In the future, users might even be able to combine data from their channels, emails, and other sources, and create a composite picture of what’s happening. We’re working on ways to give more value without relying on manual collection.

Vivid Nine and R&S Robertson Shine a Light on Sustainability

Based in Edinburgh, Scotland, R&S Robertson has delivered exceptional lighting solutions for the hospitality and leisure sectors since 1939. However, as the lighting industry diversified across distribution markets and channels, the company realised it was time to forge a new path to stay competitive and embarked on designing its very first collection.

Vivid Nine, an industrial design agency also located in Edinburgh, began working with R&S Robertson to create the new lighting designs. Both share the same core values of environmental responsibility and a sustainability-first mindset. The new collection shouldn’t just be eye-catching—it should help reduce carbon, too.

Adopting new tools to meet ambitious goals

Friends since their days in university, Jonathan Pearson and Terje Stolsmo founded Vivid Nine in 2022 after heading up design at different industrial design agencies. With Stolsmo based in Norway, Pearson in the UK, and a limited, bootstrap budget, they decided to begin using Autodesk Fusion.

“We needed something that could work collaboratively across different locations,” Pearson says. “We’d have to buy a server and network licenses with Solidworks. The whole infrastructure was a lot more complicated, whereas Fusion does everything in the cloud. We can use it wherever we want. With Solidworks, you’d also have to pay quite a bit extra to get static stress simulation built in, and that is available in Fusion as-is.”

Vivid Nine was also able to leverage the sustainability tools available with Fusion, including its Manufacturing Sustainability Insights (MSI) Add-on. This powerful tool enables them to calculate the carbon footprint of their designs, optimise products for reduced carbon emissions, and enhance sustainability reports they generate for their clients, including R&S Robertson.

Bringing style and sustainability priorities together

When Vivid Nine began its work to design the new lighting collection, they first started by interviewing interior designers. “One of the things that came out as a priority was sustainability,” Pearson says. “When asked, more than 85% said a sustainable choice was important to them. Being able to understand how different lamps compare to others could influence their choice as well.”

With this crucial knowledge in hand, Vivid Nine began its work on sustainable, Art Deco-inspired lamps. From start to finish, the new Hudson collection was designed and ready for manufacturing in Portugal within eight months. But, during the process, they still grappled with the best way to capture its carbon footprint, relying on Excel sheets, siloed data, and assumptions. With MSI, they could do a full lifecycle analysis to easily showcase Hudson’s sustainability data to R&S Robertson’s customers, improving the depth of the sustainability disclosures.

Additionally, MSI enabled Vivid Nine to discover that the biggest carbon contributors were the lower and upper housings for the lamps, contributing 15kg of the total 22.6kg CO2 emissions. This hotspot highlighted where significant improvements could be made, either through a material choice change or design optimisation.
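
To put those figures in context, here is a back-of-envelope sketch of the kind of hotspot ranking involved. Only the 15kg housings figure and the 22.6kg total come from the project; the grouping of the remaining components is assumed for illustration, and this is not how MSI itself is implemented.

```python
# Illustrative hotspot ranking using the figures quoted in the article.
# Only the 15 kg (upper and lower housings) and the 22.6 kg total are from
# the project; the remainder is grouped here purely for illustration.
footprint_kg_co2e = {
    "upper and lower housings": 15.0,    # reported hotspot
    "other components (combined)": 7.6,  # remainder of the 22.6 kg total
}

total = sum(footprint_kg_co2e.values())
for part, kg in sorted(footprint_kg_co2e.items(), key=lambda kv: -kv[1]):
    print(f"{part}: {kg:.1f} kg CO2e ({kg / total:.0%} of total)")
# upper and lower housings: 15.0 kg CO2e (66% of total)
# other components (combined): 7.6 kg CO2e (34% of total)
```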

The team also discovered a rather unexpected insight. Using MSI, Vivid Nine was able to compare the manufacturing impacts of different locations with those of R&S Robertson’s manufacturing partner in Portugal. The insights demonstrated the substantial value of producing in Portugal, highlighting improvements in carbon emissions and overall environmental impact when compared to manufacturing in Asia. In fact, the European location is also influencing material choices moving forward.

“We’re doing more investigation into products that are even more marketable as a sustainable material, such as cork,” says Michael Lawrence, Marketing and Product Manager, R&S Robertson. “There are a lot of cork factories locally and near our partners in Portugal, which helps us deliver even more environmentally friendly products.”

Moving forward with MSI

Since MSI offers unparalleled and real-time visibility into the carbon impact of various design and manufacturing variables—such as material selection, manufacturing process, and geography—directly in Fusion, it’s now become an integral part of Vivid Nine’s workflow.

“We can go through each component piece of the light and run calculations of different CO2 values to try and reduce it,” Pearson says. “We can compare different manufacturing methods or different sizes to reduce the number. You can have a load of parts. But, with MSI, it’s easy to see what stands out as a really high value from a CO2 perspective and focus on what to change and impact the overall value.”

Since adopting the tool, Vivid Nine has used the MSI Add-on for Fusion for many new successful—and sustainable—lighting products with R&S Robertson. “We have a range of 10 new designs coming out next year,” Lawrence says. “Our work with Vivid Nine and how they use technology such as Fusion and MSI really demonstrates the positive impacts that can be made both for your business and the world.”

Sustainability is core to our values, and it’s also a known business differentiator. Vivid Nine and their use of Fusion and MSI has, quite literally, shined a new light on how we approach our designs to reduce the carbon footprint. Michael Lawrence, Marketing and Product Manager, R&S Robertson

TEXT-TO-SPACESHIP

» Engineers at NASA Goddard Space Flight Center are using generative design to produce lightweight structures for space and taking their AI work on to the next frontier, in the form of text-to-structure workflows, as Stephen Holmes reports

If developing hardware is difficult, then developing hardware for space is on another level. Space agency NASA has more experience in this area than any other organisation worldwide.

“NASA is a cutting-edge organisation and we have an interest in staying on the cutting edge,” smiles Ryan McClelland, speaking from his office at NASA Goddard Space Flight Center.

Located just outside Washington DC, Goddard is a nucleus of design, engineering and science expertise for spaceship, satellite and instrument development, making it a critical ingredient in NASA’s space exploration and scientific missions. McClelland, a research engineer, is looking to build processes in which AI is used to help design parts and structures, in order to accelerate next-stage space exploration.

Generative design, he says, brings extensive benefits to this work: speed, stronger and lighter parts, and a reduction in the human input necessary. NASA’s total mission cost estimates are in the ballpark of $1 million per kilogram. McClelland shows us an example project, built using generative design, that shaved off 3kg. “The AI generates the CAD model, generates the design, does the finite element analysis, does the manufacturing simulation – and then you just make it,” he says. “What’s powerful about this is it focuses the engineer on the higher-level task, which is really thinking deeply about what the design has to do.”

McClelland says this has changed the way his team at NASA Goddard approach design work. “We realised that most engineers take their tools and immediately start designing, sketching and so on without sometimes really fully understanding what the problem is.”

How he and the team now approach a project is by first arriving at a deep understanding of what the part needs to do and then allowing generative design software to build a part that ticks all the boxes. That enables them to get from a list of requirements to machined metal parts in under 48 hours.

TAKING GIANT LEAPS

Having brought generative design to the fore, the next step was to use AI to start accelerating the process, with text-to-CAD allowing AI to create a form shaped by the list of requirements run against the large datasets available.

“There’s a real tension between doing things incrementally, so that they’re easy to be absorbed into the workflow, and taking these big leaps, right? So an example of a big leap is where we’ve used text input,” says McClelland. “It goes through the text-to-structure algorithm and outputs a CAD model, a finite element model and a stress analysis report.”

But from there, the technology can go even further, he explains. “Now we’ve gone and taken that so you can just chat with the AI voice or [use] text, and it creates a JSON file, a standard input file, and then that feeds in. So that’s radically different, right? It skips all the CAD and FEA clicks, and jumps right from, ‘I’m discussing what it is that I want with this agent’, to actually having what I want.”

McClelland explains that while these may indeed be big leaps, they are also laying the groundwork for incremental evolution in people’s workflows.

“In the text-to-structure workflow that I talk about, the underlying technology of generative design-evolved structures is topology optimisation. So, there’s a topology optimisation engine in there that’s like a function call. And ideally, you can just pop that out, right? The Text-to-Structure is somewhat agnostic to what that is. So I think what you’ll see is maybe user interfaces start to fall away.”
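
To make the idea of a ‘standard input file’ concrete, the sketch below shows, in rough outline, what a requirements JSON and the function-call style handoff McClelland describes might look like. Every field name, value and function here is hypothetical; none of it reflects NASA’s actual schema or tools.

```python
import json

# Hypothetical requirements file of the kind a chat or voice agent might
# emit; none of these field names reflect NASA's actual schema.
requirements = {
    "part": "instrument bracket",
    "interfaces": [
        {"type": "bolt_pattern", "holes": 4, "diameter_mm": 6.0},
        {"type": "mounting_face", "flatness_mm": 0.05},
    ],
    "loads": [{"case": "launch", "acceleration_g": 12.0, "axis": "z"}],
    "max_mass_kg": 0.8,
    "material": "Al 7075-T6",
}

with open("bracket_requirements.json", "w") as f:
    json.dump(requirements, f, indent=2)

def run_topology_optimisation(spec: dict) -> str:
    # Placeholder: a real engine would return optimised geometry.
    return f"optimised mesh for {spec['part']}"

def run_stress_analysis(geometry: str, spec: dict) -> str:
    # Placeholder: a real solver would return a stress report.
    return f"stress report for {geometry} ({len(spec['loads'])} load case)"

def text_to_structure(spec: dict) -> tuple:
    """Stand-in for the pipeline described above: a topology optimisation
    engine is called like a function, with CAD, FEA and reporting generated
    downstream. Every call here is a placeholder."""
    geometry = run_topology_optimisation(spec)
    report = run_stress_analysis(geometry, spec)
    return geometry, report

print(text_to_structure(requirements))
```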

He believes that, while today the UIs are a differentiator between CAD softwares, the future will see engineers interface with these systems in a more natural manner. “[Like] the way that you interface with teams of people, or with people that you want to work with you - so, much more natural interaction, discussing things, sketching things, iterating in a collaborative fashion.”

“We’re basically at the next big paradigm,” he says. “I went and talked to the [NASA] engineer that took us from drawing boards to AutoCAD, and then I talked to the engineer that took us from AutoCAD to what we primarily use now, which is PTC Creo, what used to be ProEngineer. And now we’re ready, I think, for another big step.”

He explains that the majority of the software designers and engineers use today was developed when engineers first had powerful computers sitting on their desk, but before the existence of GPUs, the cloud, and in most cases, the internet.

“I think a lot of those legacy programs have really just taken baby steps to embracing those new compute resources. That next big step is with AI. It really goes along with cloud and GPU and taking advantage of all the amazing things that IT development has to offer the field of engineering.”

SPRINT PROPOSALS

During the Covid-19 pandemic, McClelland used the slowdown to take a more in-depth look at what AI might deliver for mechanical engineering. Having been interested in the topic for decades, he began further research into its capabilities. The day ChatGPT went public, he says, was a major milestone in his thinking.

“If you asked my wife, I was wandering around in a daze. I couldn’t believe that we were living in this timeline. The ChatGPT moment brought a lot of new awareness to my work,” he recalls.

Trying out new AI tools at NASA begins with a sprint proposal, a small, 100-hour proposal that can be quickly attained. Having done “a bunch of market research”, McClelland got the green light to try out different AI tools. Once some promise is identified, the next stage focuses on finding applications.

In a facility packed with scientists, engineers and researchers, McClelland began talking to people that might have a need for these AI tools. “I found real applications to apply these to, and that’s how you know if an idea really holds water, when you try it with real needs,” he says.

“We’ve had over 60 applications now, and as you have more applications, you start to find all the edge cases. What are all the complicated factors that you need to take into account that maybe you didn’t think about at first?” To this point, this approach has worked well, resulting in the Evolved Structures Guide.

With text-to-structure working well, the next stage is to make it more generalisable, says McClelland, so as to cover more of the edge cases. “Which means making it more complex over time, by using real applications on it.”

Getting others to adopt this method is challenging, much like getting people to switch to a new CAD software, he says, but can be achieved with training and by explaining the promised benefits.

“You’ve got to demonstrate it. You’ve got to build things. You’ve got to test. That’s what matures the technology, so that it comes into broad usage,” he says.

McClelland says that the full power of AI is, as yet, “under hyped”, and that a lot of people are not factoring its potential into their thinking.

“Fortunately, we have an AI that we can access internally. We have cutting-edge AI tools that we can access with our work data. But I was at a conference recently and I asked people in the audience how many of them can use one of these foundation large language models with their work data with official approval. And, you know, very few people raised their hands.”

The challenge is lining up the right applications that it can be tested on – the hard bits, he says. “That’s where you find the challenges. Just like how agile software development approaches are used, so that you can get something out there, you have to see what the use cases are, and then iterate, release, iterate.”

OUT OF THIS WORLD

A flagship future mission is the Habitable Worlds Observatory, the first telescope designed specifically to search for signs of life on planets orbiting other stars. McClelland hopes that aspects of its design can be accelerated using these AI technologies.

“It’s huge – the size of a three-storey house. You’re going to have these big truss structures, so generating those automatically would be very high-value.”

With no room for error, using AI to generate such structures would prove an incredible benchmark for McClelland and the engineers at Goddard. What it could lead to –and where it might take us next – is entirely out of this world. www.nasa.gov/goddard

● 1 The Habitable Worlds Observatory may be one of the first flagship NASA missions to benefit from structures generated with AI assistance

● 2 This generatively designed structure for the ALICE: Optical Bench mission has been CNC machined from a single block

● 3 Ryan McClelland at NASA Goddard Space Flight Center, inspecting parts ready for use in space missions

● 4 This generatively designed hatch closure mechanism captures and retains orbiting capsules of sample materials collected by rovers on Mars’ surface

TOP OF THE CLASS

» How do modern design courses tackle the subject of artificial intelligence? Stephen Holmes speaks to leading educators about how they see the future of AI and the role that today’s students will play in taking AI tools and skills into workplaces that may still be coming to terms with the technology

When a story hits the headlines about the use of artificial intelligence (AI) in education, the focus tends to be some egregious example of plagiarism, fakery or misinformation.

In product development courses, however, AI has a more constructive role to play. And that’s leading to some vital conversations, not only around the possibilities that this technology brings to design work, but also the extent to which future employers will value job candidates with a sure grasp of how to apply it.

At Wayne State University in Detroit, Michigan, associate professor of teaching Claas Kuhnen is a vocal advocate for AI software, who believes that exposing design students to this technology is essential.

“AI should be seen as a tool to modernise labour-intensive workflows, not as a shortcut to replace effort,” he says. “In the future, designers won’t be replaced by AI, but those skilled in leveraging it will have a competitive edge.”

It’s a view echoed by Dr Robert Phillips, a senior tutor within the School of Design at London’s Royal College of Art (RCA). “I think what we should be doing is asking what jobs we want AI to take, so that we can retrain people to do better jobs,” he says. “I think we need to elevate how we’re talking about AI to focus on what we actually need it to do, and what – more importantly – we’re going to feed it.”

Phillips sees this as a topic on which the RCA has always focused. Rather than simply teaching design, he says, the emphasis at the college is on redefining the purpose of design and the directions in which it’s heading.

“I think ethics are key,” he adds. “Let’s take responsibility for AI and be part of the journey, rather than let it happen to us.”

● 1 This rendering for a suitcase design was built by Claas Kuhnen to demonstrate the AI power of Vizcom to his students

● 2

● 3 First, a suitcase design is imported (a 2D sketch or screengrab of a 3D CAD model), along with a stock image of an airport

● 4 Second, an image of a human is added and composed alongside the suitcase

● 5 Finally, using prompts, an initial generated image is edited to fit requirements

REVOLUTIONARY ROAD

Much of the design process centres on developing concepts and presenting them visually, says Claas Kuhnen, and this is an area where AI-generated images could be revolutionary.

“For example, when creating a new consumer product, we conduct research and begin ideation, exploring form, materials, and colours. Yet, brainstorming fatigue or reliance on familiar ideas can limit creativity,” he points out.

Instead, AI tools can offer advantages at multiple stages, not just by generating forms, but when looking to uncover details to inform a design project during the research phase, says Paul Russell, a teaching fellow in design at Loughborough University in the UK who regularly works with students to develop their CAD and visualisation skills.

“In terms of efficiencies, you could spend hours or days researching on Google and get back the same information that you could get back in a few minutes from ChatGPT – and it’s quicker to do that and verify it than it is to do it manually,” he says.

AI’s generative capabilities can also offer up ‘happy accidents’, says Kuhnen. Much like Newton’s falling apple, a generated image might suggest materials or shapes that designers might otherwise not consider.

Naturally, this could lead to a debate of how much of a particular design is the student’s own work. It’s true that many of the products that surround us all were invented ‘by accident’, because inspiration can strike in random and mysterious ways.

At the same time, being able to judge the quality and originality of a job candidate’s design work remains a big concern for employers.

“We’ve always had this issue,” says Phillips at the RCA, but he says it’s like comparing a product made by hand with one made using a CNC machine.

“They’re different things!” he says. In the latter case, somebody has still had to create a file to be able to complete the CNC machining and make it possible to replicate, he says. “So someone has still taken the lead in the creative process.”

He believes that referencing will be key. In effect, a design project should reference its AI sources, just as another project might provide credits relating to the photography it uses. Design agencies often employ professional photographers and credit them for their work within a project. The same should be true with AI, he says. “Everyone needs to be really open about it, and I think we need to embrace it.”

GENERATION GAP

Getting everybody to embrace AI in the workplace may be a sticking point, however. Educators acknowledge a growing divide, based on age, in terms of how AI tools are viewed. “It’s clear. Anyone over about 50, maybe late 40s, is apprehensive, while all the young people are excited and interested in it,” says Russell. “When CAD came about, there were loads of old people saying this is BS, it isn’t proper drafting, you know? And then young people picked it up and loved it. It was the same with rendering.”

He sees a clear pattern emerging, however. Put simply, the better students make the most of what AI tools can do, much like with any other technology. “I don’t think this is just true in design. I think I’ve seen it all around, with people using AI for coding, videography and more,” he says. “If you’re good, AI makes you better. If you don’t put in the effort, and you don’t have some of the skills, then it doesn’t enhance you as much, which I think is kind of scary because we want an equal playing field. But these tools actually exaggerate the difference between students.”

If students refuse to engage or put effort into other aspects of the course, he continues, there’s nothing to suggest that the results they produce using AI will sweep them to the top of the class. “With AI, you have to learn through doing. If they don’t put the time in, they don’t get quality results out.”

For him, it’s about giving students the baseline skills, knowledge and understanding, and then letting them run with it. They typically go above and beyond, he adds. In fact, what he teaches about AI to next year’s students will almost certainly be based on what he learns from the current year’s students.

At Wayne State University, Kuhnen agrees that AI tools have their limitations, and that ‘quality in, quality out’ is a steadfast guide. With visualisation, to get usable results, users must feed AI quality hand drawings, understand light and shadow dynamics, and compose images with correct perspective. “These foundational skills remain essential to create believable concept renderings,” he says. “Ultimately, AI doesn’t replace creativity; it complements it. By automating repetitive tasks, it frees designers to focus on what truly matters: innovation and artistry.”

BACK TO SCHOOL

‘‘
If you’re good, AI makes you better. If you don’t put in the effort, or don’t have some of the skills, then AI just doesn’t enhance you as much Paul Russell, Loughborough University ’’

At the Design and Technology Association, director of education Ryan Ball says secondary school educators have often had to fight to get the curriculum updated to reflect new technologies. AI will be no different, he predicts.

For example, until recently, limitations on how much CAD and CAM could be used on GCSE projects in England and Wales forced students to ‘throw in’ handcrafted pieces and hand-drawn parts alongside their parametric CAD models and 3D-printed prototypes, just to satisfy the requirement for ‘traditional’ practices, he says. “How can AI be any different?”

Each use of AI – be it text-to-image, image-to-image, generative design, AI-powered FEA, image enhancement and so on – just involves tools, he says. “The skill lies in deciding when to use them.”

AI could also make the subject of Design & Technology more accessible to a wider range of students, he adds: “There are creative students who cannot draw well or get their ideas out of their heads to share with others. AI offers a potential lifeline.”

One examination board suggests that, in order to attain the highest marks when generating designs, students must demonstrate “imaginative use of different design strategies for different purposes and as part of a fully integrated approach to designing.”

As Ball points out, that sounds like a “perfect chance” to put AI to work.

Whichever way you look at it, the adoption of AI could well prove a turning point for education, and provide course content with a real shake-up.

“Do we still need to teach modules around 3-axis machining when [because of AI tools], it’s almost at the point of ‘One click, done’?” asks Russell at Loughborough University.

“I think at some point, we’ll have to trust these AI technologies, right? It’s like autonomous vehicles. The common point is that it’s inevitable now.”

On the whole, most view the use of AI by students as a positive, although some issues raised include a deepening of the divide between the haves and have-nots.

And, without any overarching curricula or industry guidance for how AI is taught, encouraged and explained, it could potentially widen the gap between the best university courses and the rest of the field.

Ball suggests that the divide looks even greater at a secondary school level, where a drastic lack of IT infrastructure, especially in D&T, and even in some cases the banning of mobile phones, may indeed widen the gap further between some centres.

If AI is set to automate repetitive work and free designers to focus on innovation and artistry, then those without access to AI might struggle to reach the same heights as those that have it at their disposal.

“Ultimately, AI doesn’t replace creativity; it complements it,” concludes Kuhnen. “In the end, 3D CAD models and accurate product renderings must be crafted by hand, as AI lacks the creative precision required to meet the high standards we demand.”

While the tools used by today’s students looking to join tomorrow’s industry are advancing at pace, the fundamentals are still necessary to get the best from today’s AI software.

AI-DRIVEN EFFICIENCY

» Vital Auto is rewriting the playbook on evidential design loops in automotive design and innovation, using AI tools that were built in-house at the Coventry-based company

For Shay Moradi, head of technology at industrial design studio Vital Auto, AI is “a suitcase term”. It is commonly used to refer to many slightly different things, he explains: automation, machine learning, generative AI, and so on.

“We’ve not even hit peak fatigue on the term and it gets bandied about so much that it tarnishes some of the shine of doing genuinely useful things,” he says, speaking to DEVELOP3D as he travels back from an event in France to Vital’s headquarters in Coventry, UK.

Vital works with OEM design teams at automakers around the world and has spent the last decade pushing digital software and hardware to the limits. Its innovative work has embraced 3D printing and XR, for example, in order to help its clients prove out their designs, whether they’re for public-facing show cars or top-secret research projects.

In recent years, the company has focused a great deal on building its own tools. “You can deliver perfectly functional software with a very small team, using AI coding tools or low-code tools that incorporate AI functionality through APIs. In our case, we do both classic development for our clients and we prototype things fast and efficiently for internal use,” says Moradi.

He has incorporated Vital’s AI tool development programme into its design and prototype manufacturing process, continuously trialling the latest tools, speaking with global experts, and collaborating with the high-performance computing, data analytics and AI research facility, the Hartree Centre.

Moradi believes that every small to medium-sized business has the capability to create its own AI-powered tools. “The kind of AI that most people want is a tool, one that preferably disappears, not behind a chat interface, but one that gets embedded in things we already do. It’s a tool that is so customised and bespoke that it serves our individual needs, preferences and the professional context in which we operate.”

AI TOOLS BUILT IN-HOUSE

In that respect, employees at Vital are no exception. The team there has developed two proprietary tools: the AI-powered pose estimation tool Capture, and its complementary component, Eye Track; as well as the Vital AI Suite, an internal tool intended to simplify some of the information capture, processing and analysis that complements Vital’s design production and prototyping activities.

Andy Shaw, a director at Vital, adds that as industry transitions into software-defined vehicle (SDV) architectures, vehicle design is no longer a static process. “SDVs rely on over-the-air updates, digital twins and software-defined ergonomics. It’s a continuously evolving system where software updates, driver behaviour insights and real-world telemetry all influence performance and ergonomics, post-production,” he explains. “Having sophisticated tools that enable us to simulate human interaction, capture real-time feedback and refine UX dynamically is more critical than ever.”

Capture and Eye Track play a key role in this, by helping design teams validate ergonomic and interaction decisions, not just for the initial design, but also for continuous software-driven updates that define the SDV experience. Capture’s role is to provide ergonomic precision for vehicle occupants by analysing body postures, movements, and contextual scenarios, in order to map ergonomic constraints and physical interaction flows. These are displayed in CAD at a later stage, so that designers can use the information accordingly.

By using ergonomic data combined with transcripts of participant conversations and the eye-tracking tool Eye Track, Vital can demonstrably prove user intent and actual usage in real time, with the ability to play back and analyse again. “It embeds evidence at every stage of evaluating and proposing design attributes and key decision-making, before something gets to the engineering phase,” says Moradi.

Recently, Capture proved its value on a project for zero-emission hydrogen truck company HVS Trucks. It was used during the iterative design evaluation stage to design a better, more aligned cab experience for drivers.

“Our work showed optimal placement of HMI screens, as well as the interior configuration and external points used during ingress and egress, and we used this to influence and feed back into the design during the course of our prototyping activity,” explains Moradi.

“The future of automotive design isn’t just about faster printers or sleeker clay models. It’s about building properties and systems where every iteration is a data point, every prototype a hypothesis tested,” he states.

For Vital, AI is no longer a buzzword, but a very real bridge between human intuition and computational precision during the design phase. And as the tools used evolve alongside AI technology’s growing capabilities, designers and engineers have a chance to shape these tools themselves, Moradi says. The end result? Unprecedented agility, fail-faster development cycles, deeper learning and fearless innovation. www.vital-auto.com

AI tools help the team at Vital Auto to validate ergonomic and driver interaction decisions

PROTECTING INNOVATIVE

» At Yamaha Motors, a proprietary smart contract tool from Final Aim is helping to keep the company’s intellectual property and creative designs safe from unauthorised use or outright theft

In the huge halls of the Tokyo Auto Salon, Yamaha Motors has just unveiled its latest concept, the Diapason C580, to an eager crowd. An agile, low-speed electric platform, the versatile design means the vehicle can be customised to carry two passengers over a wide range of terrain, from forest floors and farmland to factory complexes and even residential communities.

One version of the C580, in full utilitarian guise and equipped with a dozer mounted on the front and a trailer packed with work tools at the rear, stands in startling contrast next to Yamaha’s latest design for the futuristic Lola Yamaha ABT Formula E Team race car. Another C580 is showcased as a luxury, off-road leisure vehicle.

At the Tokyo Auto Salon, the Japanese equivalent of the US’s SEMA auto show, customisation is a big trend and the C580 offers a concept that goes beyond ‘just’ cars, paving the way for next-generation mobility design.

The group behind the Diapason C580 concept is Yamaha Motor’s new business development division. This is set on uncovering new niche markets, with the C580 following on from 2024’s agriculture-focused Concept 451.

This division is in the process of further developing the brand’s use of AI in generating new concepts. The Concept 451, for example, featured generative AI at the heart of its development work, with creatives, decision-makers and manufacturers all brought in on the process of using AI to accelerate ideation.

This exercise saw over 2,000 design concepts generated, while also helping to shorten design cycles. However, creating so many designs, all with unique details and potential, posed a new challenge: How does Yamaha best protect its highly valuable intellectual property (IP)?

To help with the process, Yamaha brought in Final Aim to develop the design and draw on its expertise in generative AI, including its proprietary smart contract tool, Final Design.

Fast forward to the show floor of Tokyo Auto Salon in 2025 and Final Aim was again brought in to assist Yamaha’s team with the C580. In the year since the initial unveiling of its low-speed EV platform, Yamaha had identified several niche applications where the tool could be utilised, using generative AI to help create concept designs to fit the needs of eight new markets.

DOUBLE-EDGED SWORD

Final Aim was founded in 2019 by serial entrepreneur Masafumi Asakura and ex-Nikon designer Yasuhide Yokoi. By 2022, the company had established its headquarters in the US, joining the Berkeley SkyDeck start-up accelerator.

What the Final Aim co-founders recognised was how generative AI tools present a fascinating double-edged sword for creativity. In short, AI can massively amplify and augment capabilities, but also present challenges around the ownership of a design.

AI aside, there are many issues around IP to which designers can relate, like having seen their designs used without permission and being left powerless because they weren’t prepared for that scenario.

“Designers almost never think about IP until they have to – whether that’s as simple as doing the client handoff, sharing out for revisions, or in a worst case scenario, where they’ve got to take some legal action,” says Pooria Sohi, Final Aim’s marketing director.

“If you’re a freelancer, you might be using a cloud drive to back your data up. If you’re a full-blown ID Studio, you might be doing some kind of encryption, but that’s really the beginning and the end of it.”

Final Aim sees the adoption of blockchain-based smart contracts technologies across design and manufacturing as a means of overcoming these issues.

INNOVATIVE IDEAS

Its platform, Final Design, proposes a single source of truth for essential data, from design files to legal contracts and IP documents. Securely authenticated and stored in a traceable, tamper-proof format, it offers project managers oversight and control, while automatically assigning IP to work, tracking versions and keeping updates secure, from early sketches to final production files.

“The way that the system works is that every time you upload your data to the platform, the system triggers a smart contract to securely log it on the blockchain,” explains Sohi. “We have done this in conjunction with lawyers and it’s been reviewed internationally at a bunch of different law firms to make sure that this actually holds water in the worst case scenario, which is needed to contest that the intellectual property is yours.”
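
The underlying pattern is easy to sketch for the technically curious. The snippet below is a generic illustration of fingerprinting a design file and chaining log entries so that later tampering is detectable; it is not Final Design’s API, nor its blockchain implementation, and the file and author names are invented.

```python
import hashlib
import json
from datetime import datetime, timezone

# Generic illustration only: this is not Final Design's API or its on-chain
# contract, just the basic pattern of fingerprinting a design file and
# chaining each log entry to the previous one so tampering is detectable.

def fingerprint(path: str) -> str:
    """SHA-256 hash of the design file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def append_entry(log: list, path: str, author: str) -> dict:
    """Append a tamper-evident record of one upload to an in-memory log."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "file_hash": fingerprint(path),
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_entry_hash": prev_hash,
    }
    # Hashing the entry together with its predecessor means any later edit
    # to an earlier record breaks the chain.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

# Usage (hypothetical file and author names):
# log = []
# append_entry(log, "concept_v3.step", "designer@studio")
```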

He adds that this is pertinent to other applications, from something as simple as packaging up and handing over designs to your client; giving temporary access for revision control; or simply letting someone look at your work. It’s an issue even for freelancers, wanting to showcase their work online without having to worry about another company coming and taking it.

KEEP IT PRIVATE

Generative AI, and the thousands of concepts it can generate in mere seconds, adds even more layers to the issues around protecting IP. If you’re using a private server and you’re doing it privately, asks Sohi, how do you then assign ownership? “That is the question we’re answering,” he smiles.

Sohi gives the example of the US government’s plans to deregulate AI as an example scenario where companies and designers are realising they need a better way to protect their work.

“For creatives, that presents a significant problem, because how do you demonstrate IP ownership? How do you demonstrate that something you’ve created is human-made and not AI-made? And then how do you embrace both AI and traditional technologies to use them both?”

By leveraging Final Aim’s Final Design platform, Yamaha was able to explore how generative AI can supercharge product customisation, scene visualisation and creative video content for concept promotion. The exercise highlighted over 20 categories of customisation ideas, generated over 2,500 design images, and led to nine final concepts and AI-generated videos – all with the peace of mind that all this information is recorded and safeguarded as Yamaha IP.

By helping produce these visual narratives for potential use cases, the tools have helped rapidly ideate designs for new markets and bring multiple concepts to life, not only for internal development, but also in front of an audience of thousands, giving direct feedback at a show like the Tokyo Auto Salon.

By keeping the IP safe, designers can open up projects to the public earlier, helping them get to a desirable end product faster.

lp.finaldesign.io

Yamaha used generative AI to help showcase different use cases for the concept

● 1 The Diapason C580 from Yamaha is a two-seater electric mobility concept model
The versatile design aims for huge configurability

VISUALISATION AND THE RISE OF AI

» As a freelance visualisation artist, Spencer Livingstone feels pretty positive about the role of AI in his work, as he tells DEVELOP3D’s Emilie Eisenberg

London-based 3D visualisation artist Spencer Livingstone has a passion for bringing designers’ concepts to life in bright colour and vivid detail. In the process, he plays an important role in their overall design workflows.

“Many product designers will use KeyShot or another software to render things, but they don’t have enough time to make [the renders] look real. So that’s where I come in,” he explains.

“I take prototypes that product designers have made, and I will spend my time visualising them at multiple angles, adding materials and going into lots of depth and detail to make sure that it looks as photorealistic as possible.”

3D rendering and visualisation is expected to be heavily impacted by artificial intelligence (AI) – and Livingstone sees positives and negatives to this trend. Although he was initially concerned by the idea that AI could put him out of a job, subsequently seeing how it might be implemented usefully in the design process has persuaded him that it may provide more good than harm. That said, he feels it may not have developed the necessary maturity to do that yet.

● 1 Livingstone’s visualisation work at Astro Lighting

● 2 A prototype design for potential investors

● 3 A Joseph Joseph design visualised for retail partners 1

“In the industry we work in, there was at first this idea that maybe everyone’s going to be made redundant. I think it’s come around. There will absolutely need to be people who know how to use that software. And there will absolutely always need to be humans still working alongside it. It’s not going to be something that makes everyone jobless and has factories running itself and designing things and everything like that. It’s not actually that good yet for all of this stuff. So I definitely think the anxiety is a little bit too far in advance for the actual capabilities of AI.”

SET THE STAGE

At KeyShot World 2024 in London, Livingstone presented examples of his work, including lamps and light fittings for Astro Lighting, storage solutions for Joseph Joseph, and advertising content for an environmental start-up working on methane reduction solutions. KeyShot, he explained, is integral to his process for its user-friendly interface and quick formatting of detailed mock-ups.

“I know that, one day, I will probably be able to take just one picture and AI will be able to do anything with it. But I still think you need someone to input this data. You still need someone like an art director. You’ll need someone to understand how it works, so there’ll always be a place for visualisers like us to be there to input,” he says.

After completing his Master’s degree in Information Experience Design at the Royal College of Art, Livingstone worked for Derbyshire-based lighting brand Curiousa, using KeyShot to visualise glass lighting designs created by the brand’s design team.

“I concentrated more on visualisation at that point, to which I thought, ‘Okay, is this a career? Should I start to build my visualisation portfolio?’”

The answer to that question was clearly, ‘Yes’. Following his role at Curiousa, Livingstone worked at Astro Lighting, a British lighting manufacturer providing lighting for hotels and retailers including John Lewis. When a vacancy at Joseph Joseph became available, friends who worked in the industry reached out to him on LinkedIn, thinking that he’d be a good fit for the role. That’s where he’s been working ever since, building up his contacts and getting ready to make the shift to freelancing full-time. At Joseph Joseph, Livingstone built a 3D asset library of packaging, products and promotional content.

When it comes to his process, Livingstone avoids sketching and uses ideas he sees on Instagram and Pinterest for inspiration, skipping straight to the 3D mock-up stage.

“The reason I don’t like sketching is because it’s frustrating. I couldn’t get the ideas in my head onto paper. No matter how much I could see this perfect object in my head, every time I sketched, all the proportions would be wrong, and it just didn’t work,” he explains.

“I just started mocking stuff up in 3D. I feel I’m quicker at mocking up stuff in 3D than I am sketching. I don’t have to fill out all the edges and make it render-ready. I just want to have an idea of what it looks like. So I iterate an idea through CAD.”

He uses a combination of software for different aspects of the design process, including Blender, Autodesk 3DS Max and Adobe InDesign, but KeyShot remains the easiest and most effective in Livingstone’s opinion. “I never really stick to one software. I always try to dabble in others,” he says.

“I definitely stick to KeyShot, Fusion and 3DS Max, because you get so many different add-ons where you can do simulations. And I want to start using tyFlow as well, because that uses simulations for water and cloth.”

GOING SOLO

Livingstone’s freelance business, S.VIII, is already over six years old, but his imminent departure from Joseph Joseph will make this the first time he has ever been fully self-employed.

“I’m trying to stick to what I’m really good at, as a good foundation, such as products, interiors, smaller animations where things are moving around. But I definitely want to get into the realms of doing stuff that’s more simulated, so more cloth and so on. It’s just going to depend on what kinds of clients I work for,” he says.

“I always want to make sure that whatever I’m doing for a client is as far as I can go, always at the pinnacle and always pushing. Whatever I do for a client, I want to try to learn something from every project. So far, it’s sticking to my bread and butter – products, interiors, animations –but I’m definitely hoping that I can work with some really cool people and try some new things in the future.”

Despite AI’s inevitable influence in the 3D visualisation industry, Livingstone still sees his role based around traditional visualisation methods and human input. After all, in the end, it’s the human eye that his work is trying to convince is real. www.sviii.co.uk

A VIEW OF THE FUTURE

» At a recent series of launch events, Siemens has shown off the results of its XR collaboration with Sony. DEVELOP3D’s Emilie Eisenberg got to grips with the new headset and heard how software from Siemens aims to bring XR to life

At the UK Siemens Immersive Engineering event, held at the Advanced Manufacturing Training Centre in Coventry, several of Sony’s ‘immersive spatial content creation systems’ are set out, ready to be tested.

The event is part of a global unveiling, following the headset’s award-winning appearance at CES just weeks earlier. At locations across three continents – Madrid, Bologna, Paris, Eindhoven, Yokohama, Stockholm, Chicago, Seoul, Oakville and Coventry – guests from different industry sectors are getting the opportunity to try out the headset for themselves and to learn more about Siemens’ plans for the extended reality (XR) integration with its NX X software.

The Sony XR HMD, or SRH-S1 as it was named at its initial unveiling back in January 2024, comes with a number of designer-friendly features, from its flip-up visor to its precision controllers.

The handheld controls are simple. They include a stylus for the user’s dominant hand, with one button in easy reach of the index finger and one in reach of the little finger. For the non-dominant hand, there’s a ring that features two buttons to transform it into a tool for pinching things and picking them up.

The headset is fully adjustable and can be worn with glasses underneath. A 4K display offering 3,552 x 2,840 resolution per eye offers clear visibility. And importantly, the nauseous sensation that was sometimes triggered in users wearing earlier generation headsets is nowhere to be found here.

‘‘ What we’re trying to do is reduce the boundaries between the immersive environment and the desktop. It’s about the ability to do it all in the same place without having to constantly disconnect from one to do the other ’’

The headset is heavy but nicely balanced, though it may take some users time to get accustomed to its weight. Outer components are made from plastic rather than metal, helping reduce bulk, and a single cable connects the headset to a monitor. In this way, even those who aren’t wearing the headset get the opportunity to view what the headset user is seeing.

Visually, the headset’s interface is simple. Users can view a design in 3D and apply the tools to pick that design up and look at it from different angles, to take out individual parts for inspection, and to slice a design in half and study it in cross-section. Users can also open a browser window and place it next to the design, editing in the browser by using the stylus as a mouse and watching the edits change the design in real time.

The XR HMD is compatible with NX X, Siemens’s cloud-based CAD and data management software. NX X’s latest update, which is automatically integrated into the headset, allows designers across multiple locations to make notes and edit a design. Multiple designers can create avatars and enter a space together, which makes examining and discussing a design easier to achieve than it would be via a monitor.

A DIFFERENT EXPERIENCE

Chris Abbott, Siemens Digital Industries pre-sales manager for product engineering, simulation and test solutions, explains that interacting with a design using the headset provides a different experience when compared to using a traditional monitor.

The flip-up visor makes it easy to switch between working modes

Sony SRH-S1 is designed for comfort and usability
Precision controllers are provided with the headset

FUTURE

He gives the example of sitting in a car. While you can see that a person will fit in that car on a monitor, does it tell you what it is actually like to be in that car?

“You don’t understand that perspective until you actually have that level of immersion,” he explains. “And this allows you to make those changes and be inside that vehicle again, very quickly.”

NX X also includes value-based licensing, a system designed to help Siemens customers use products that they might not typically prioritise when budgeting for the year. Through value-based licensing, customers can purchase tokens in increments of 50 and 100, which they can use on a product of their choosing.

This means that if a company only needs to use a software package a few times, instead of purchasing the programme as a standalone order, it can use tokens, which will return into a pool when its designer has finished using the software.

“We do a lot of work with automotive start-ups,” explains Abbott, as an example of where this new licensing scheme might prove useful.

The list of things that product designers and engineers in that industry have to consider – mechanical design, electrical systems, ergonomics, vehicle packaging, pedestrian protection, materials and composites – just goes on and on, he says.

“Buying all of these add-ons becomes very expensive for somebody who’s working within a budget, but they can achieve that with value-based licensing.”
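
The token mechanics Abbott describes can be pictured as a simple check-out and return pattern. The sketch below is purely illustrative: the class, token costs and product names are hypothetical, and this is not Siemens’ actual licensing API.

```python
# Illustrative sketch of the check-out/return pattern described above;
# the class, token costs and product names are hypothetical, not
# Siemens' actual licensing API.

class TokenPool:
    def __init__(self, purchased: int) -> None:
        self.available = purchased            # e.g. bought in blocks of 50 or 100
        self.checked_out: dict[str, int] = {}

    def check_out(self, product: str, cost: int) -> bool:
        """Draw tokens from the pool to run a product for a session."""
        if cost > self.available:
            return False
        self.available -= cost
        self.checked_out[product] = self.checked_out.get(product, 0) + cost
        return True

    def release(self, product: str) -> None:
        """Return the tokens to the pool when the designer is finished."""
        self.available += self.checked_out.pop(product, 0)

pool = TokenPool(purchased=100)
pool.check_out("simulation add-on", cost=30)   # hypothetical product and cost
pool.release("simulation add-on")
print(pool.available)  # -> 100
```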

The integration of NX X, meanwhile, means that a design team can use both monitors and the headset in its workflow, switching between a screen and an immersive environment to view the product in 3D without having to create expensive prototypes.

In future, Abbott envisions scenarios in which all the members of a team will use the XR HMD, and use the browser in the headset instead of a monitor. The way Siemens software is licensed, he adds, means they can integrate more features than previously possible.

“What we’re trying to do is reduce the boundaries between the immersive environment and the desktop. It’s the ability to do it all in the same place, without having to constantly disconnect from one to move to the other,” he explains, highlighting the fact that NX X can be opened normally in the headset with the product right next to it. It was interesting to step back at the event and look at others using the HMD, some for the first time, and see how quickly they became immersed and handy with the unique controllers.

“Once they’re in, and they get used to the weight of it and everything, the response is overwhelmingly positive,” observes Abbott.

The collaboration – Sony’s well-considered headset combined with Siemens’ functional and affordable software – looks like one that could finally bring viable and valuable XR to designers’ desktops. www.sw.siemens.com

FIRST LOOK: ONSHAPE CAM STUDIO

» Cloud-native CAM tools have finally arrived at Onshape. Stephen Holmes gets an early look at the offering, an easy-to-use toolset backed up by serious speed that will delight current users and perhaps turn the heads of newcomers

It is almost a decade since Onshape broke cover, bringing CAD tools to the browser. Since then, the product set has evolved steadily. While the CAD customer base hasn’t completely shifted over to fully cloud-based CAD just yet, Onshape’s arrival and subsequent acquisition by PTC in 2019 signalled a fundamental change to the market that has prompted other CAD vendors to slowly shuffle in the direction of web-hosted applications.

Onshape is loved both by the companies that depend on it (often start-ups, SMEs and educational institutions) and by bigger corporates that use it as a secondary CAD system – primarily for its ease of access, low hardware requirements and increasingly weighty toolset.

So when we heard that cloud-based CAM tools were finally about to be launched by Onshape, we were keen to find out more.

Onshape CAM Studio is included as part of its Professional and Enterprise subscriptions, loading immediately (with everything in the browser) and ready for use as part of a per-user, per-year licence that generally starts at around $2,500.

In a nutshell, CAM Studio offers 2.5- and 3-axis machining, integrated tool and machine libraries, real-time simulation and – an eye-catching feature for CAM software at this price point – machine verification.

Advanced features like 4- and 5-axis machining, mill-turn operations and adaptive roughing will become available in a future CAM Studio Advanced offering – more of which later.

FIRST CUT

CAM functionality is straightforward: you start a new part or assembly in Onshape, create a new tab for CAM functionality and you’re straight into preparing a part for machining.

Once you’ve selected the part you want to machine, set-up is as simple as selecting the appropriate options from the columns. Pick the machine you want to run the process on from the list (which, as cloud software, will continue to update and grow); generate your set-up from a breadth of options that include orientation, stock definition and tool choice; and then pick a toolpath, before verifying it with toolpath simulation. There are lots of different controls that allow the toolpath to be shaped in different ways and directions, as well as other strategies.

A final level of testing based on full-blown machine simulation helps you to understand how the process is going to interact with your machine and not just the stock and the tool, identifying any collisions.

The next step consists of post-processing the universal code and translating it into G-code for your specific CNC machine. There’s also the option for rest machining, allowing the software to simulate all the necessary passes with different tools – from roughing through to finishing – in order to speed up the simulation of the full process.

The workflow is undeniably slick. From Onshape CAD model to generated G-code heading to your 3-axis CNC machine, it’s a seamless process. As you’d expect, there are plenty of options to customise tool libraries, machines and post-processors to best represent the kit with which you’re working, making things go even more smoothly.

Onshape says that CAM Studio Advanced will target higher-end CNC jobs, for faster, more precise machining. It will do that by supporting 4-axis, 3+2-axis and 5-axis machining and, down the line, providing advanced mill and turning capabilities.

While there was no pricing for CAM Studio Advanced at the time of demo, the out-of-the-box capabilities here look on par with those of leading competitors.

What’s particularly impressive is the speed with which big jobs, like computationally intensive 5-axis toolpaths on large assemblies, can be simulated in Onshape CAM Studio. We’re talking hundreds of thousands of lines of toolpath code, plus the whole machine simulated quickly and smoothly in seconds, with any collisions highlighted, ready for adjustments to be made.

The more complicated and detailed the features that the user is defining to be machined, the bigger the benefits a cloud-based system will provide over a locally installed system.

It also helps that on machine shop floors, nobody is running a super-expensive workstation, so being able to do all of this on a basic laptop via a web browser is an additional benefit and enables users to invest the savings they make in new CNC equipment.

Another big win for the cloud is clear when it comes to sharing manufacturing data. Knowing that you’re using the latest designs when collaborating with team members and external stakeholders has always been a big feature of Onshape’s CAD product.

With CAM Studio, this now runs straight through into manufacturing, allowing early identification and resolution of production issues. Designers can securely share models as simply as sharing a URL and swiftly revoke access should suppliers or orders change.

While nagging fears persist in some corners about the security of cloud-based software, never handing over a file and controlling permissions at will means you have total control over your design.

● 1 Native cloud processing speeds up CAM Studio’s toolpath simulation

● 2 Full machine simulation comes as standard in CAM Studio

● 3 CAM Studio Advanced will bring 4-axis, 3+2-axis and 5-axis machining

● 4 The more complex the part, the greater the benefits of cloud-based CAM, say Onshape executives

‘‘ In a nutshell, Onshape CAM Studio offers 2.5- and 3-axis machining, integrated tool and machine libraries, real-time simulation and machine verification ’’

Onshape executives are also keen to point out the company’s built-in PDM tools, which mean CAM strategies can be versioned alongside CAD models.

CONCLUSION

The launch of Onshape CAM Studio will no doubt spark debates around which native CAD-CAM set-up works most proficiently and provides the best value, given that most vendors now have a cloud-enabled offering.

Autodesk Fusion has built a solid following, thanks to CAM tools that also benefit from cloud-based acceleration for toolpath simulation. Solidworks 2025 has added Delmia Shop Floor Programmer with cloud connectivity, which includes 3-axis milling, wire EDM, and scalable upgrades for enhanced functionality. Siemens has shifted NX’s capabilities to the cloud, giving the ability to add on CAM tools to NX X when needed, SaaS-style.

It’s a crowded field, and that’s before you consider the offerings of the major CAM specialists, and what the likes of CloudNC, Toolpath and others are doing with AI to generate optimised toolpaths.

AI is something Onshape is already looking into on a broader level, way beyond just CAM. But being cloud-native, it should have several advantages over traditional software. Onshape executives have told us that the rate of updates and improvements to CAM Studio is set to be quick, in the same way as the company’s CAD product has evolved over the years, but don’t expect AI assistance too soon.

Overall, there’s something intriguing about what Onshape is doing with CAM Studio. Its initial successes are likely to be made with existing Professional or Enterprise users, by offering the tools directly through their familiar workspaces, enabling them to run CAM operations up to a certain point without needing third-party software.

Depending on where the price of the Advanced offering lands, it could also make Onshape a more attractive package to those running more serious CNC equipment. The speed of Onshape’s cloud processing power for complex work makes it worthy of a benchmarking exercise at least.

A big part of Onshape’s early appeal was its ability to help design teams collaborate. Today, it has expanded to connect engineers, manufacturers and external suppliers all in the same chain and all with the least amount of fuss. CAM adds another string to this bow, and on first glance, it has the potential to fly.

PRICING

Onshape CAM Studio is included with Professional ($2,500 per user, per year) and Enterprise licenses for Onshape. Onshape CAM Studio Advanced - $TBC www.onshape.com

With politicians worldwide backing AI, 2025 looks set to be a standout year in the technology’s development. It’s a revolution that will come with major implications for designers and engineers, writes Stephen Holmes

It seems like we’ve spent the early weeks of 2025 waiting for newly elected politicians to settle behind their desks and watching with bated breath for new policies that could prove make-or-break on both sides of the Atlantic Ocean.

With the Oval Office once more stinking of McDonalds and spray tan, the rest of the world, like an overly laboured passenger train, can finally start to pull away from the station, having only some semblance of an idea about the direction in which it’s heading. But one certainty to which we can all cling is that AI will continue to permeate discussions about seemingly every topic under the sun.

With huge sums of money being thrown at data centres and digital initiatives, and politicians around the world standing behind lecterns and extolling AI’s value in shaping the next industrial revolution, I’ve been thinking not about the future, but about the past. Those of you who know me better than simply an awkward headshot at the top of a page will also know that I despise the term ‘Industrial Revolution 4.0’ – or 5.0, or whatever number we’re supposedly up to now.

Many years ago, I studied the 'OG Industrial Revolution', the one in which children lost lives and limbs clambering around mines, chimneys and machinery; where entire villages upped and moved continents; and where a Spinning Jenny wasn't just an adult entertainer bringing allegations against a global leader.

The Industrial Revolution was no overnight success, however quickly history classes flash over it. It was more than just keynotes from Watt, Stephenson, Brunel, Bessemer et al. It was almost two hundred years of emerging technologies constantly ramping up to full steam from all corners of industry.

This revolution shaped the world we live in today, creating new cities, sports, transport, laws, rights and much more that had never existed before.

In just the first weeks of 2025, the business of AI has taken off in a flurry of hype and opinion that makes even the early days of the internet seem underwhelming. The focus, after a period of economic stagnation and a slump in productivity, is on AI catalysing the next paradigm shift in human existence.

The UK government has declared that dedicated ‘AI Growth Zones’ will be created to speed up planning for AI infrastructure. Silicon Valley’s rush to support President Trump in his second term, meanwhile, suggests that the world’s biggest digital companies are positioning themselves for fewer restrictions around AI, leading to the next sprint of evolution.

You all likely know of Elon, Jensen, Jeff and Sam. But this will be the first truly international revolution. Tata’s Natarajan, Toyota’s Koji and Airbus’ Guillaume, as well as a few dozen Chinese CEOs, are just as likely to get a big say. This is not to discount the heads of exciting new brands and tools that have the potential to break through to the masses.

CHANGING CANVAS

Like many, I envisage AI impacting everyone’s lives across all facets of society. Just like the Industrial Revolution of stovepipe hats and street urchins, it will form part of a canvas of wider change.

One day, historians will look back upon this period as the wholesale evolution of digital design software, fabrication tools and computer hardware. It’s something that we’re all living through, which feels odd when I consider all the books and literature I’ve had to read regarding previous industrial booms.

For much of this period, I’ve been lucky enough to have had a front seat, covering technology for DEVELOP3D over the past 150 issues and more. I arrived just as 3D CAD was taking hold of the industry, just as designers were picking up new visualisation tools, and when rapid prototyping was still carried out by bureaux, not on desktops.


As my conversation with NASA’s Ryan McClelland reminded me [article on p28], designers and engineers are still only truly getting to grips with GPU-accelerated software, let alone AI. And, as McClelland points out, AI is going to change how designers and engineers interface with CAD. Less pointing and clicking, and more conversations. Suddenly the use of XR headsets becomes more practical and Siemens’ collaboration with Sony [p38] begins to show more promise.

Looking at the advances of Onshape in this issue [p40], it feels sobering to realise that what the industry still perceives as the new kid on the block was first developed over a decade ago.

The tools used in product development are due an overhaul. Designers and engineers want to be more productive, to design better products and to take humankind further than it’s been before. AI looks like the cornerstone technology to achieve this, and it’s up to us to pick it up and run with it.

So 2025 looks set to be a standout year, one that bored school children will have to memorise in the future as the year when AI finally took root in the tools of industry. It’s going to be an exciting time ahead.

GET IN TOUCH: So all-consuming has been his research into new AI tools for this issue, that Stephen may be suffering from AI-fatigue. Should you need him, he’ll be crafting stone axe heads in his shed and dreaming of simpler times. On Twitter, he’s @swearstoomuch

Workstation special report

Winter 2025

Model behaviour

What’s the best CPU, memory and GPU to process complex reality modelling data?

Intel vs AMD

The integrated GPU comes of age

From desktop to datacentre, could the AMD Ryzen AI Max Pro ‘Strix Halo’ processor change the face of workstations?


Intel Core Ultra vs AMD Ryzen 9000 Series in CAD, BIM, reality modelling, viz and simulation

The AI enigma

Do you need an AI workstation?

+ how to choose a GPU for Stable Diffusion

The AI enigma

AI has quickly been woven into our daily workflows, leaving its mark on nearly every industry. For design, engineering, and architecture firms, the direction in which some software developers are heading raises important questions about future workstation investments, writes Greg Corke

You can’t go anywhere these days without getting a big AI smack in the face. From social media feeds to workplace tools, AI is infiltrating nearly every part of our lives, and it’s only going to increase. But what does this mean for design, engineering, and architecture firms? Specifically, how should they plan their workstation investments to prepare for an AI-driven future?

AI is already here

‘‘ Desktop software isn’t going away anytime soon, so firms could end up paying twice – once for the GPUs in their workstations and again for the GPUs in the cloud ’’

The first thing to point out is that if you're into visualisation — using tools like Enscape, Twinmotion, KeyShot, V-Ray, D5 Render or Solidworks Visualize — there's a good chance your workstation is already AI-capable. Modern GPUs, such as Nvidia RTX and AMD Radeon Pro, are packed with special cores designed for AI tasks. Features such as AI denoising, DLSS (Deep Learning Super Sampling), and more are built into many visualisation tools. This means you're probably already using AI whether you realise it or not.

It's not just these tools, however. For concept design, text-to-image AI software like Stable Diffusion can run locally on your workstation (see page WS30). Even in reality modelling apps, like Leica Cyclone 3DR, AI-powered features such as auto-classification are now included, requiring a Nvidia CUDA GPU (see page WS34).

Don't forget Neural Processing Units (NPUs) – new hardware accelerators designed specifically for AI tasks. These are mainly popping up in laptop processors, as they are energy-efficient so can help extend battery life. Right now, NPUs are mostly used for general AI tasks, such as powering AI assistants or blurring backgrounds during Teams calls, but design software developers are starting to experiment too.
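To make "already AI-capable" a little more concrete, the short sketch below shows roughly what running a text-to-image model locally involves, using the open source Hugging Face diffusers library on a CUDA-capable GPU. It is a minimal illustration rather than a recommended workflow; the model ID and prompt are placeholders, and VRAM requirements vary with the model and resolution you choose.

# Minimal local text-to-image sketch using the Hugging Face diffusers library.
# Assumes torch and diffusers are installed and an Nvidia GPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint; half precision roughly halves the VRAM footprint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # run inference on the local workstation GPU

# Generate a single concept image from a text prompt and save it to disk.
image = pipe("concept sketch of a lightweight electric motorcycle").images[0]
image.save("concept.png")

Everything here runs on the workstation itself (the only download is the model weights), which is exactly the local-versus-cloud trade-off discussed below.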

Cloud vs desktop

While AI is making its mark on the desktop, much of its future lies in the cloud. The cloud brings unlimited GPU processing power, which is perfect for handling the massive AI models that are on the horizon. The push for cloud-based development is already in full swing – just ask any software startup in AEC or product development how hard it is to get funded if their software doesn’t run in a browser.

Established players like Dassault Systèmes and Autodesk are also betting big on the cloud. For example, users of CAD software Solidworks can only access new AI features if their data is stored and processed on the Dassault Systèmes 3D Experience Platform. Meanwhile, Autodesk customers will need to upload their data to Autodesk Docs to fully unlock future AI functionality, though some AI inferencing could still be done locally.

While the cloud is essential for some AI workflows, not least because they involve terabytes of centralised data, not every AI calculation needs to be processed off-premises. Software developers can choose where to push it. For example, when Graphisoft first launched AI Visualizer, based on Stable Diffusion, the AI processing was done locally on Nvidia GPUs. Given the software worked alongside Archicad, a desktop BIM tool, this made perfect sense. But Graphisoft then chose to shift processing entirely to the cloud, and users must now have a specific license of Archicad to use this feature.

The double-cost dilemma

Desktop software isn’t going away anytime soon. With tools like Revit and Solidworks installed in the millions – plus all the viz tools that work alongside them — workstations with powerful AI-capable GPUs will remain essential for many workflows for years to come. But here’s the issue: firms could end up paying twice — once for the GPUs in their workstations and again for the GPUs in the cloud. Ideally, software developers should give users some flexibility where possible. Adobe provides a great example of this with Photoshop, letting users choose whether to run certain AI features locally or in the cloud. It’s all about what works best for their setup — online or offline. Sure, an entry-level GPU might be slower, but that doesn’t mean you’re stuck with what’s in your machine. With technologies like HP Z Boost (see page WS32), local workstation resources can even be shared.

But the cloud vs desktop debate is not just about technology. There’s also the issue of intellectual property (IP). Some AEC firms we’ve spoken with won’t touch the cloud for generative AI because of concerns over how their confidential data might be used.

I get why software developers love the cloud — it simplifies everything on a single platform. They don’t have to support a matrix of processors from different vendors. But here’s the problem: that setup leaves perfectly capable AI processors sat idle on the desks of designers, engineers, and architects, when they could be doing the heavy lifting. Sure, only a few AI processes rely on the cloud now, but as capabilities expand, the escalating cost of those GPU hours will inevitably fall on users, either through pay-per-use charges or hidden within new subscription models. At a time when software license costs are already on the rise, adding extra fees to cover AWS or Microsoft Azure expenses would be a bitter pill for customers to swallow.

Cover story: The integrated GPU comes of age

With the launch of the AMD Ryzen AI Max Pro 'Strix Halo' processor, AMD has changed the game for integrated GPUs, delivering graphics performance that should rival that of a mid-range discrete GPU. Greg Corke explores the story behind this brand-new chip and what it might mean for CAD, BIM, viz and more

For years, processors with integrated GPUs (iGPUs) — graphics processing units built into the same silicon as the CPU — have not been considered a serious option for 3D CAD, BIM, and especially visualisation — at least by this publication.

Such processors, predominantly manufactured by Intel, have generally offered just enough graphics performance to enable users to manipulate small 3D models smoothly within the viewport. However, until recently, Intel has not demonstrated anywhere near the same level of commitment to pro graphics driver optimisation and software certification as the established players – Nvidia and AMD.

This gap has limited the appeal of all-in-one processors for demanding professional workflows, leaving the combination of discrete pro GPU (e.g. Nvidia Quadro / RTX and AMD Radeon Pro) and separate CPU (Intel Core) as the preferred choice of most architects, engineers and designers.

A seed for progress

Things started to change in 2023, when AMD introduced the ‘Zen 4’ AMD Ryzen Pro 7000 Series, a family of laptop processors with integrated Radeon GPUs capable of going toe to toe with entry-level discrete GPUs in 3D performance.

What’s more, AMD backed this up with the same pro graphics drivers that it uses for its discrete AMD Radeon Pro GPUs.

The chip family was introduced to the workstation sector by HP and Lenovo in compact, entry-level mobile workstations. In a market long dominated by Intel processors, securing two out of three major workstation OEMs was a major coup for AMD.

In 2024, both OEMs then adopted the slightly improved AMD Ryzen Pro 8000 Series processor and launched new 14-inch mobile workstations – the HP ZBook Firefly G11 A and Lenovo ThinkPad P14s Gen 5 – which we review on pages WS8 and WS9.

Both laptops are an excellent choice for 3D CAD and BIM workflows and, having tested them extensively, it's fair to say we've been blown away by the capabilities of the AMD technology.

The flagship AMD Ryzen 9 Pro 8945HS processor with integrated AMD Radeon 780M GPU boasts graphics performance that genuinely rivals that of an entry-level discrete GPU. For instance, in Solidworks 3D CAD software, it smoothly handles a complex 2,000-component motorcycle assembly in “shaded with edges” mode.

However, the AMD Ryzen Pro 8000 Series processor is not just about 3D performance. What truly makes the chip stand out is the ability of the iGPU to access significantly more memory than a typical entry-level discrete GPU. Thanks to AMD’s shared memory architecture — refined over years of developing integrated processors for Xbox and PlayStation gaming consoles — the GPU has direct and fast access to a large, unified pool of system memory.

Up to 16 GB of the processor’s maximum 64 GB can be reserved for the GPU in the BIOS. If memory is tight and you’d rather not allocate as much to the GPU, smaller profiles from 512 MB to 8 GB can be selected. Remarkably, if the GPU runs out of its ringfenced memory, it seamlessly borrows additional system memory if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains fast, and real-time performance in 3D CAD and BIM tools typically only drops by a few frames per second, maintaining that all-important smooth experience within the viewport.
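As a back-of-the-envelope illustration of how that borrowing works (our own arithmetic, not an AMD tool), the sketch below splits a job's GPU memory demand between the BIOS-reserved allocation and memory borrowed from the free system pool. The function and figures are hypothetical, chosen to mirror the 64 GB configuration described above.

# Illustrative arithmetic only: how a shared-memory iGPU might satisfy a request
# that exceeds its BIOS-reserved allocation by borrowing free system memory.
def split_gpu_demand(demand_gb, reserved_gb, system_gb, other_use_gb):
    """Return (from_reserved, borrowed) in GB, or None if the job cannot fit."""
    from_reserved = min(demand_gb, reserved_gb)
    shortfall = demand_gb - from_reserved
    free_system = system_gb - reserved_gb - other_use_gb
    if shortfall > free_system:
        return None                      # not enough memory anywhere
    return from_reserved, shortfall      # the shortfall is borrowed on demand

# 64 GB laptop, 16 GB reserved for the GPU, roughly 10 GB used by the OS and apps,
# and a render that wants 20 GB of GPU memory.
print(split_gpu_demand(20, 16, 64, 10))  # -> (16, 4): 16 GB dedicated plus 4 GB borrowed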

In contrast, when a discrete GPU runs out of memory, it can have a big impact on 3D performance. Frame rates can fall dramatically, often making it very hard to re-position a 3D model in the viewport. While a discrete GPU can also ‘borrow’ from system memory, it must access it over the PCIe bus, which is much slower.

All of this means the AMD Ryzen Pro 8000 Series processor can handle certain workflows that simply aren’t possible with an entry-level discrete GPU, especially one with only 4 GB of onboard VRAM.

To put this into a real-world workflow context: with our HP ZBook Firefly G11 A configured with 64 GB of system RAM, Solidworks Visualize was able to grab the 20 GB of GPU memory it needed to render a complex scene at 8K resolution. What's even more impressive is that while Solidworks Visualize rendered in the background, we could continue working on the 3D design in Solidworks CAD without disruption.

While the amount of addressable memory makes workflows like these possible, the AMD Radeon 780M GPU does not really have enough graphics horsepower to deliver sufficient frame rates in real-time viz software such as Twinmotion, Enscape, and D5 Render.

For that you need a more powerful GPU, which is exactly what AMD has delivered in its new AMD Ryzen AI Max Pro ‘Strix Halo’ processor, which it announced this month.

AMD Ryzen AI Max Pro

The AMD Ryzen AI Max Pro will be available first in HP Z workstations, but unlike the AMD Ryzen Pro 8000 Series processor it’s not just restricted to laptops. In addition to the HP ZBook Ultra G1a mobile, HP has launched a micro desktop, the HP Z2 Mini G1a (see box out on page WS6).

Although we haven’t had the chance to test these exciting new chips first hand, our experience with the AMD Ryzen Pro 8000 Series processor and the published specifications of the AMD Ryzen AI Max Pro series give us a very good idea of what to expect.

In the top tier model, the AMD Ryzen AI Max+ Pro 395, the integrated Radeon 8060S GPU is significantly more powerful than the Radeon 780M GPU in the Ryzen 9 Pro 8945HS processor.

It features 40 RDNA 3.5 graphics compute units — more than three times the 12 RDNA 3.0 compute units on offer in the 780M. This should make it capable of handling some relatively demanding workflows for real time visualisation.

But raw graphics performance only tells part of the story. The new Ryzen AI Max Pro platform can support up to 128 GB of 8,000 MT/s LPDDR5X memory, and up to 96 GB of this can be allocated exclusively to the GPU. Typically, such vast quantities of GPU memory are only available in extremely powerful and expensive cloud-based GPUs. It's the equivalent of the VRAM in two high-end desktop-class workstation GPUs, such as the Nvidia RTX 6000 Ada Generation.

Reports suggest the Ryzen AI Max Pro will rival the graphics performance of an Nvidia RTX 4070 laptop GPU, the consumer equivalent of the Nvidia RTX 3000 Ada Gen workstation laptop GPU.

However, while the Nvidia GPU comes with 8 GB of fixed VRAM, the Radeon 8060S GPU can scale much higher. And this could give AMD an advantage when working with very large models, particularly in real time viewports, or when multitasking.

Of course, while the GPU can access what is, quite frankly, an astonishing amount of memory, there will still be practical limits to the size of visualisation models it can handle. You could, with patience, render massive scenes in the background, but don't expect seamless navigation of these models in the viewport, particularly at high resolutions. For that level of 3D performance, a high-end dedicated GPU will almost certainly still be necessary.

The competitive barriers

The AMD Ryzen AI Max Pro looks to bring impressive new capabilities, but it doesn't come without its challenges. In general, AMD GPUs lag behind Nvidia's when ray tracing, a rendering technique which is becoming increasingly popular in real time arch viz tools.

Additionally, some AEC-focused independent software vendors (ISVs) depend on Nvidia GPUs to accelerate specific features. In reality modelling software Leica Cyclone 3DR, for example, AI classification is built around the Nvidia CUDA platform (see page WS34).

The good news is AMD is actively collaborating with ISVs to broaden support for AMD GPUs, porting code from Nvidia CUDA to AMD's HIP framework, and some have already announced support. For example, CAD-focused rendering software KeyShot Studio now works with AMD Radeon for GPU rendering, as Henrik Wann Jensen, chief scientist at KeyShot, explains: "We are particularly excited about the substantial frame buffer available on the Ryzen AI Max Pro." Meanwhile, Altair, a specialist in simulation-driven design, has also announced support for AMD Radeon GPUs on Altair Inspire, including the AMD Ryzen AI Max Pro.

AMD isn't just playing catch-up with Nvidia; it's also paving the way for innovations in software development. According to Rob Jamieson, senior industry alliance manager at AMD, traditional GPU computation often requires duplicating data — one copy in system memory and another in GPU memory — that must stay in sync. AMD's shared memory architecture changes the game by enabling a 'zero copy' approach, where the CPU and GPU can read from and write to a single data source. This approach not only has the potential to boost performance, by not having to continually copy data back and forth, but also to reduce the overall memory footprint, he says.

Artificial Intelligence (AI)

These days, no new processor is complete without an AI story, and the AMD Ryzen AI Max Pro is no exception.

First off, the processor features an XDNA2-powered Neural Processing Unit (NPU), capable of dishing out 50 TOPS of AI performance, meeting Microsoft’s requirements for a CoPilot+ PC. This capability is particularly valuable for laptops, where it can accelerate simple AI tasks such as AutoFrame, Background Blur, and virtual backgrounds for video conferencing, more efficiently than a GPU, helping to extend battery life.

While 50 TOPS NPUs are not uncommon, it’s the amount of memory that the NPU and GPU can address that makes the AMD Ryzen AI Max Pro particularly interesting for AI.

‘‘ AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. Could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs? ’’


HP Z2 Mini G1a desktop workstation

HP is billing the HP Z2 Mini G1a with AMD Ryzen AI Max Pro processor as the world's most powerful mini workstation, claiming that it can tackle the same workflows that previously required a much larger desktop workstation. On paper, much of this claim appears to be down to the amount of memory the GPU can address, as HP's Intel-based equivalent, the HP Z2 Mini G9, is limited to low-profile GPUs, up to the 20 GB Nvidia RTX 4000 SFF Ada.

The HP Z2 Mini G1a also supports slightly more system memory than the Intel-based HP Z2 Mini G9 (128 GB vs 96 GB), although some of that memory will need to be allocated to the GPU. System memory in the HP Z2 Mini G1a is also significantly faster (8,000 MT/s vs 5,600 MT/s), which will benefit certain memory-intensive workflows in areas including simulation and reality modelling.

While the HP Z2 Mini G9 can support CPUs with a similar number of cores — up to the Intel Core i9-13900K (8 P-cores and 16 E-cores) — our past tests have shown that multi-core frequencies drop considerably under heavy sustained loads. It will be interesting to see if the energy-efficient AMD Ryzen AI Max Pro processor can maintain higher clock speeds across its 16 cores.

Perhaps the most compelling use case of the HP Z2 Mini G1a will be when multiple units are deployed in a rack, as a centralised remote workstation resource.

With the HP Z2 Mini G9, both the power supply and the HP Anyware Remote System Controller, which provides remote 'lights out' management capabilities, were external. With the new HP Z2 Mini G1a the PSU is now fully integrated in the slightly smaller chassis, which should help increase density and airflow. Five HP Z2 Mini G1a workstations can be placed side by side in a 4U space.

According to AMD, having access to large amounts of memory allows the processor to handle ‘incredibly large, highprecision AI workloads’, referencing the ability to run a 70-billion parameter large language model (LLM) 2.2 times faster than a 24 GB Nvidia GeForce RTX 4090 GPU.
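A rough back-of-the-envelope calculation (ours, not AMD's) shows why the amount of GPU-addressable memory is the deciding factor here: the weights of a 70-billion parameter model do not fit in 24 GB of VRAM even when aggressively quantised, but the 8-bit and 4-bit versions sit comfortably within a 96 GB allocation. The figures below ignore activations and runtime overhead.

# Rough weight-memory estimate for a large language model at different precisions.
def weights_gb(params_billion, bits_per_weight):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9             # decimal gigabytes, fine for a sanity check

for bits in (16, 8, 4):
    print(f"70B model at {bits}-bit: ~{weights_gb(70, bits):.0f} GB of weights")

# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB. None of these fit on a 24 GB card,
# while a 96 GB GPU allocation can hold the 8-bit and 4-bit versions with room to spare.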

While edge cases like these show great promise, software compatibility will be a key factor in determining the success of the chip for AI workflows. One can’t deny that Nvidia currently holds a commanding lead in AI software development.

On a more practical level for architects and designers, the chip’s ability to handle large amounts of memory could offer an interesting proposition for AI-driven tools like Stable Diffusion, a text-to-image generator that can be used for ideation at the early stages of design (see page WS30)


Beyond the GPU

While it’s natural to be drawn to the GPU — being far more powerful than any iGPU that has come before — the AMD Ryzen AI Max Pro doesn’t exactly hold back when it comes to the CPU. Compared to the AMD Ryzen Pro 8000 Series processor, the core count is doubled, boasting up to 16 ‘Zen 5’ cores. This means it should significantly outperform the eight ‘Zen 4’ cores of its predecessor in multi-threaded workflows like rendering.

On top of that, the AMD Ryzen AI Max Pro platform supports much faster memory — 8,000MT/s LPDDR5X compared to DDR5-5600 on the AMD Ryzen Pro 8000 Series — so memory-intensive workflows like simulation and reality modelling should get an additional boost.

Laptop, desktop and datacentre

One of the most interesting aspects of the AMD Ryzen AI Max Pro is that it is being deployed in laptops and micro desktops. It also extends to the datacentre, as the HP Z2 Mini G1a desktop is designed from the ground up to be rackable.

While the HP Z2 Mini G1a and HP ZBook Ultra G1a use the exact same silicon, which features a configurable Thermal Design Power (cTDP) of 45W – 120W, performance could vary significantly between the two devices. This is down to the amount of power that each workstation can draw.

The power supply in the HP Z2 Mini G1a desktop is rated at 300W—more than twice the 140W of the HP ZBook Ultra G1a laptop. While users shouldn’t notice any difference in single threaded or lightly threaded workflows like CAD or BIM, we expect performance in multi-threaded tasks, and possibly graphics-intensive tasks, to be superior on the desktop unit.

However, that still doesn't mean the HP Z2 Mini G1a will get the absolute best out of the processor. It remains to be seen what clock speeds the AMD Ryzen AI Max Pro processor will be able to maintain across its 16 cores, especially in highly multi-threaded workflows like rendering.

Conclusion

The AMD Ryzen AI Max Pro processor has the potential to make a significant impact in the workstation sector. On the desktop, AMD has already disrupted the high-end workstation space with its Threadripper Pro processors, severely impacting sales of Intel Xeon. Now, the company aims to bring this success to mobile and micro desktop workstations, with the promise of significantly improved graphics with buckets of addressable memory.

AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. However, overcoming the long-standing perception that iGPUs are not great for 3D modelling is no small challenge, leaving AMD with significant work to do in educating the market. If AMD succeeds, could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs?

Much will also depend on cost. Neither AMD nor HP has announced pricing yet, but it stands to reason that a single chip solution should be more cost-effective than having two separate components.

Meanwhile, while the new chip promises impressive performance in all the right areas, that’s only one part of the equation. In the workstation sector, AMD’s greater challenge arguably lies in software. To compete effectively, the company needs to collaborate more closely with select ISVs to enhance compatibility and reduce reliance on Nvidia CUDA. Additionally, optimising its graphics drivers for better performance in certain professional 3D applications remains a critical area for improvement.

HP ZBook Ultra G1a mobile workstation

HP is touting the HP ZBook Ultra G1a with AMD Ryzen AI Max Pro processor as the world's most powerful 14-inch mobile workstation. It offers noteworthy upgrades over other 14-inch models, including double the number of CPU cores, double the system memory, and substantially improved graphics.

When compared to the considerably larger and heavier 16-inch HP ZBook Power G11 A — equipped with an AMD Ryzen 9 8945HS processor and Nvidia RTX 3000 Ada laptop GPU — HP claims the HP ZBook Ultra G1a, with an AMD Ryzen AI Max Pro 395 processor and Radeon 8060S GPU, delivers significant performance gains. These include 114% faster CPU rendering in Solidworks and 26% faster graphics performance in Autodesk 3ds Max.

The HP ZBook Ultra G1a isn't just about performance. HP claims it's the thinnest ZBook ever, just 18.5mm thick and weighing as little as 1.50kg. The HP Vaporforce thermal system incorporates a vapour chamber with large dual turbo fans, expanded rear ventilation, and a newly designed hinge that improves airflow. According to HP, this design boosts performance while keeping surface temperatures cooler and fan noise quieter.

HP is expecting up to 14 hours of battery life from the HP XL Long Life 4-cell, 74.5 Wh polymer battery. The device is paired with either a 100 W or 140 W USB Type-C slim adapter for charging. For video conferencing, the laptop features a 5 MP IR camera with Poly Camera Pro software. Advanced features like AutoFrame, Spotlight, Background Blur, and virtual backgrounds are all powered by the 50 TOPS NPU, optimising power efficiency.

Additional highlights include a range of display options, with the top-tier configuration offering a 2,880 x 1,800 OLED panel (400 nits brightness, 100% DCI-P3 colour gamut), HP Onlooker detection that automatically blurs the screen if it detects that someone is peeking over your shoulder, up to 4 TB of NVMe TLC SSD storage, and support for Wi-Fi 7.

The competition

AMD is not the only company developing processors with integrated GPUs. Intel has made big strides in recent years, and the knowledge it has gained in graphics hardware and pro graphics drivers from its discrete Intel Arc Pro GPUs is now starting to trickle through to its Intel Core Ultra laptop processors. Elsewhere, Qualcomm's Snapdragon chips, with Arm-based CPU cores, have earned praise for their enviable blend of performance and power efficiency. However, there is no indication that any of the major OEMs are considering this chip for workstations, and while x86 Windows apps are able to run on Arm-based Windows, ISVs would need to make their apps Arm-native to get the best performance.

Nvidia is also rumoured to be developing an Arm-based PC chip, but it would face similar challenges to Qualcomm on the software front.

Furthermore, while the Ryzen AI Max Pro is expected to deliver impressive 3D performance in CAD, BIM, and mainstream real-time viz workflows, its ray tracing capabilities may not be as remarkable. And for architecture and product design, ray tracing is arguably more important than it is for games.

Ultimately, the success of the AMD Ryzen AI Max Pro will depend on securing support from the other major workstation OEMs. So far, there’s been no official word from Lenovo or Dell, though Lenovo continues to offer the AMD Ryzen Pro 8000-based ThinkPad P14s Gen 5 (AMD), which is perfect for CAD, and Dell has announced plans to launch AMD-based mobile workstations later this year. AMD seems prepared to play the long game, much like it did with Threadripper Pro, laying the groundwork for future generations of processors with even more powerful integrated GPUs. We look forward to putting the AMD Ryzen AI Max Pro through its paces soon.


Review: HP ZBook Firefly 14 G11 A

This pro laptop is a great all-rounder for CAD and BIM, offering an enviable blend of power and portability in a solid, well-built 14-inch chassis, writes Greg Corke

A few years back, HP decided to simplify its ZBook mobile workstation lineup. With so many different models, and inconsistent product names, it was hard to work out what was what.

HP’s response was to streamline its offerings into four primary product lines: the HP ZBook Firefly (entry-level), ZBook Power (mid-range), ZBook Studio (slimline mid-range), and ZBook Fury (high-end). HP has just added a fifth—the ZBook Ultra—powered by the new AMD Ryzen AI Max Pro processor.

The ZBook Firefly is the starter option, intended for 2D and light 3D workflows, with stripped-back specs. Available in both 14-inch and 16-inch variants, customers can choose between Intel or AMD processors. While the Intel Core Ultra-based ZBook Firefly G11 is typically paired with an Nvidia RTX A500 Laptop GPU, the ZBook Firefly G11 A — featured in this review — comes with an AMD Ryzen 8000 Series 'Zen 4' processor with integrated Radeon graphics.

Weighing just 1.41 kg, and with a slim aluminium chassis, the 14-inch ZBook Firefly G11 A is perfect for CAD and BIM on the go. But don't be fooled by its sleek design — this pro laptop is built to perform.

Product spec

■ AMD Ryzen 9 Pro 8945HS processor (4.0 GHz base, 5.2 GHz max boost) (8-cores) with integrated AMD Radeon 780M GPU

■ 64 GB (2 x 32 GB) DDR5-5600 memory

■ 1 TB, PCIe 4.0 M.2 TLC SSD

■ 14-inch WQXGA (2,560 x 1,600), 120 Hz, IPS, antiglare, 500 nits, 100% DCI-P3, HP DreamColor display

Powered by the flagship AMD Ryzen 9 Pro 8945HS processor, our review unit handled CAD and BIM workflows like a champ, even when working with some relatively large 3D models. The integrated AMD Radeon 780M graphics delivered a smooth viewport in Revit and Solidworks, except with our largest assemblies, but showed its limitations in real-time viz. In Twinmotion, with the mid-sized Snowden Tower Sample project, we recorded a mere 8 FPS at 2,560 x 1,600 resolution. While you wouldn’t ideally want to work like this day in day out, it’s passable if you just want to set up some scenes to render, which it does pretty quickly thanks to its scalable GPU memory (see box out below).

■ 316 x 224 x 19.9 mm (w/d/h)

■ From 1.41 kg

■ Microsoft Windows 11 Pro

■ 1 year (1/1/0) limited warranty includes 1 year of parts and labour. No on-site repair.

■ £1,359 (Ex VAT) CODE: 8T0X5EA#ABU

■ www.hp.com/z

On the CPU side, the frequency in single-threaded workflows peaked at 4.84 GHz. In our Revit and Solidworks benchmarks, performance was only between 25% and 53% slower than the current fastest desktop processor, the AMD Ryzen 9 9950X, with the newer 'Zen 5' cores. Things were equally impressive in multi-threaded workflows. When rendering in V-Ray, for example, it delivered 4.1 GHz across its 8 cores, 0.1 GHz above the processor's base frequency. Amazingly, it maintained this for hours, with minimal fan noise. With a compact 65W USB-C power supply, the laptop is relatively low-power.

The HP DreamColor WQXGA (2,560 x 1,600) 16:10 120 Hz IPS display with 500 nits of brightness is a solid option. It delivers super-sharp detail for precise CAD work and good colours for visualisation. There are several alternatives, including a WUXGA (1,920 x 1,200) anti-glare IPS panel with 100% sRGB coverage and a remarkable 1,000 nits, but no OLED options, as you'll find in other HP ZBooks and the Lenovo ThinkPad P14s (AMD).

Under the hood, the laptop came with a 1 TB NVMe SSD and 64 GB of DDR5-5600 memory, the maximum capacity of the machine. This is possibly a tiny bit high for mainstream CAD and BIM workflows, but bear in mind some of it needs to be allocated to graphics. Other features include fast Wi-Fi 6E, and an optional 5MP camera with privacy shutter and HP Auto Frame technology that helps keep you in focus during video calls.

There's much to like about the HP ZBook Firefly G11 A. It's very cost-effective, especially as it's currently on offer at £1,359 with a 1-year warranty, but there's nothing cheap about this excellent mobile workstation. It's extremely well-built, quiet in operation and offers an enviable blend of power and portability. All of this makes it a top pick for users of CAD and BIM software, with a sprinkling of viz on top.

What does the AMD Radeon 780M GPU offer for 3D design?

Integrated graphics no longer means designers must compromise on performance. As detailed in our cover story, "The integrated GPU comes of age" (see page WS4), the AMD Ryzen 8000 Series processor impresses. It gives the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 mobile workstations enough graphics horsepower for entry-level CAD and BIM workflows, while also allowing designers, engineers and architects to dip their toes into visualisation.

Take a complex motorcycle assembly in Solidworks CAD software, for example — over 2,000 components, modelled at an engineering level of detail. With the AMD Ryzen 9 Pro 8945HS processor with AMD Radeon 780M integrated graphics, our CAD viewport was perfectly smooth in shaded with edges display mode, hitting 31 Frames Per Second (FPS) at FHD resolution and 27 FPS at 4K. Enabling RealView, for realistic materials, shadows, and lighting, dialled back the real-time performance a little, with frame rates dropping to 14–16 FPS. Even though that's below the golden 24 FPS, it was still manageable, and repositioning the model felt accurate, with no frustrating overshooting.

The processor's trump card is the ability of the built-in GPU to address lots of memory. Unlike comparable discrete GPUs, which are fixed with 4 GB or 8 GB, the integrated AMD Radeon GPU can be assigned a lot more, taking a portion of system memory. In the BIOS of the HP ZBook Firefly 14 G11 A, one can choose between 512 MB, 8 GB or 16 GB, so long as the laptop has system memory to spare, taken from its maximum of 64 GB. 8 GB is sufficient for most CAD workflows, but the 16 GB profile can benefit design visualisation, as it allows users to render more complex scenes at higher resolutions than typical entry-level discrete GPUs.

This was demonstrated perfectly in arch viz software Twinmotion from Epic Games. With the mid-sized Snowden Tower Sample project, the AMD Radeon 780M integrated graphics in our HP ZBook Firefly G11 A took 437 secs to render out six 4K images, using up to 21 GB of GPU memory in the process (16 GB of dedicated and 5 GB of shared). In contrast, discrete desktop GPUs with only 8 GB of memory took significantly longer. It seems the Nvidia RTX A1000 (799 secs) and AMD Radeon W7600 (688 secs) both pay a big penalty when they run out of their fixed on-board supply and have to borrow more from system memory over the PCIe bus, which is much slower.

Of course, all eyes are on AMD's new Ryzen AI Max Pro processor. It features significantly improved graphics, and a choice of 6, 8, 12 or 16 'Zen 5' CPU cores — up to twice as many as the 8 'Zen 4' cores in the AMD Ryzen 8000 Series. However, AMD's new silicon star in waiting won't be available until Spring 2025, which is when HP plans to ship the ZBook Ultra G1a mobile workstation. Pricing also remains under wraps.

As we wait to see how AMD's new chips sit in the market, the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 continue to shine as excellent options for a variety of CAD and BIM workflows — offering impressive performance at very appealing price points.

Review: Lenovo ThinkPad P14s (AMD)

This 14-inch mobile workstation stands out for its exceptional serviceability featuring several customer-replaceable components, writes Greg Corke

The ThinkPad P14s Gen 5 (AMD) is the thinnest and lightest mobile workstation from Lenovo — 17.71mm thick and starting at 1.31kg. It’s a true 14-incher, smaller than the ThinkPad P14s Gen 5 (Intel), which has a slightly larger 14.5-inch display.

The chassis is quintessential ThinkPad — highly durable, with sturdy hinges and an understated off-black matte finish. The keyboard feels solid, complemented by a multi-touch TrackPad with a pleasingly smooth Mylar surface. True to tradition, it also comes with the ThinkPad-standard TrackPoint with its three-button setup. We've yet to meet anyone who actually uses this legacy pointing device, but removing it would likely spark outrage among die-hard fans. Meanwhile, the fingerprint reader is seamlessly integrated into the power button for added convenience.

The laptop stands out for its impressive serviceability, allowing the entire device to be disassembled and reassembled using basic tools — just a Phillips head screwdriver is needed to remove back panel.

Product spec

■ AMD Ryzen 7 Pro 8840HS processor (3.3 GHz base, 5.1 GHz max boost) (6-cores) with integrated AMD Radeon 760M GPU

■ 32 GB (2 x 16 GB) DDR5-5600 memory

■ 512 GB, PCIe 4.0 M.2 SSD

■ 14-inch WUXGA (1,920 x 1,200) IPS display with 400 nits

■ 316 x 224 x 17.7 mm (w/d/h)

■ From 1.31 kg

■ Microsoft Windows 11 Pro

■ 3 Year Premier Support

■ £1,209 (Ex VAT)

■ www.lenovo.com

The machine performed well in Solidworks (CAD) and Revit (BIM), but unsurprisingly came in second to the HP ZBook Firefly in all our benchmarks. The margins were small, but became more noticeable in multi-threaded workflows, especially rendering. On the plus side, the P14s was slightly quieter under full load.

It offers a range of customer-replaceable components, including the battery (39.3Wh or 52.5Wh options), M.2 SSD, and memory DIMMs, which thankfully aren't soldered onto the motherboard. Beyond that, you can swap out the keyboard, trackpad, speakers, display, webcam, fan/heatsink assembly, and more.

Our review unit’s 14-inch WUXGA (1,920 x 1,200) IPS display is a solid, if not stand out option, offering 400 nits of brightness. One alternative is a colour-calibrated 2.8K (2,880 x 1,800) OLED screen — also 400 nits, but with 100% DCI-P3 and 120Hz refresh.

The keyboard deserves a special mention for its top-loading design, eliminating the need to dismantle the laptop from below. Simply remove two clearly labelled screws from the bottom panel, and the keyboard pops off from the top.

The 5.0 MP webcam with IR and privacy shutter is housed in a slight protrusion at the top of the display. While this design was necessary to accommodate the higher-resolution camera (an upgrade from the Gen 4), it also doubles as a convenient handle when opening the lid.

There's a choice of two AMD Ryzen 8000 Series processors: the Ryzen 5 Pro 8640HS (6 cores) and the Ryzen 7 Pro 8840HS (8 cores). Both have a Thermal Design Power (TDP) of 28W. Lenovo has chosen not to support the more powerful 45W models, likely due to thermal and power considerations; 45W models are available in the HP ZBook Firefly G11 A. Our review unit came with the entry-level Ryzen 5 Pro 8640HS. While capable, it has slightly lower clock speeds, two fewer cores, and a less powerful integrated GPU compared to the flagship 45W AMD Ryzen 9 Pro 8945HS.

Additional highlights include up to 96 GB of DDR5-5600 memory, Wi-Fi 6E, a hinged ‘drop jaw’ Gigabit Ethernet port, 2 x USB-A and 2 x USB-C. It comes with a compact 65 W USB-C power supply.


Overall, the ThinkPad P14s Gen 5 stands out as a reliable performer for CAD and BIM, offering an impressive blend of serviceability and thoughtful design.

In an era where manufacturers often prioritise ‘thinner and lighter’ over repairability, it’s great to see Lenovo bucking this trend, a move that is sure to resonate with right-to-repair advocates.

AMD Ryzen 9000 vs Intel Core Ultra 200S for CAD, BIM, rendering, simulation, and reality modelling

AMD is dominating the high-end workstation market with Threadripper Pro. But how does it fare in the mainstream segment, a traditional stronghold for Intel? Greg Corke pits the AMD Ryzen 9000 Series against the Intel Core Ultra 200S to find out

After years of playing second fiddle, AMD is now giving Intel a serious run for its money. In high-end workstations, AMD Ryzen Threadripper Pro dominates Intel Xeon in most real-world benchmarks. The immensely powerful multi-core processor now plays a starring role in the portfolios of all the major workstation OEMs.

But what about the mainstream workstation market? Here, Intel has managed to maintain its dominance with Intel Core. Despite facing stiff competition from the past few generations of AMD Ryzen processors, none of HP, Dell or Lenovo has backed AMD's volume desktop chip with any real conviction.

That’s not the case with specialist workstation manufacturers, however. For some time now, AMD Ryzen has featured strongly in the portfolios of Boxx, Scan, Armari, Puget Systems and others.

But the silicon sector moves fast. Intel and AMD recently launched new mainstream processors — the AMD Ryzen 9000 Series and Intel Core Ultra 200S Series. Both chip families are widely available from specialist workstation manufacturers, which are much more agile when it comes to introducing new tech. We've yet to see any AMD Ryzen 9000 or Intel Core Ultra 200S Series workstations from the major OEMs. However, that's to be expected, as their preferred enterprise-focused variants — AMD Ryzen Pro and Intel Core vPro — have not launched yet.

AMD Ryzen 9000 Series “Zen 5”

The AMD Ryzen 9000 Series desktop processors, built on AMD’s ‘Zen 5’ architecture, launched in the second half of 2024 with 6 to 16 cores. AMD continues to use a chiplet-based design, where multiple CCDs (Core Complex Dies) are connected together to form a single, larger processor. The 6 and 8-core models are made from a single CCD, while the 12 and 16-core models comprise two CCDs.

The new Ryzen processors continue to support simultaneous multi-threading (SMT), AMD's equivalent to Intel Hyper-Threading, which enables a single physical core to execute multiple threads simultaneously. This can help boost performance in certain multi-threaded workflows, such as ray trace rendering, but it can also slow things down.

DDR5 memory is standard, up to a maximum of 192 GB. However, the effective data rate (speed) of the memory, expressed in mega transfers per second (MT/s), can vary dramatically depending on the amount of memory installed in your workstation. For example, you can currently get up to 96 GB at 5,600 MT/s, but if you configure the workstation with 128 GB, the speed will drop to 3,600 MT/s. Some motherboards can support even faster 8,000 MT/s memory, though this is currently limited to 48 GB.
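Those data rates translate directly into theoretical memory bandwidth. As a rough guide (our own arithmetic, assuming a standard dual-channel configuration with 8 bytes per transfer per channel), the capacities quoted above work out as follows; real-world figures will be lower, but the relative differences hold.

# Theoretical peak bandwidth for dual-channel DDR5: MT/s x 8 bytes x 2 channels.
def ddr5_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

for capacity_gb, speed in ((48, 8000), (96, 5600), (128, 3600)):
    print(f"{capacity_gb} GB at {speed} MT/s -> ~{ddr5_bandwidth_gbs(speed):.1f} GB/s peak")

# 48 GB at 8,000 MT/s -> ~128 GB/s; 96 GB at 5,600 MT/s -> ~89.6 GB/s;
# 128 GB at 3,600 MT/s -> ~57.6 GB/s. In other words, the 128 GB configuration
# gives up roughly a third of the bandwidth of the 96 GB configuration.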

All Ryzen 9000 Series processors come with integrated GPUs, but their performance is limited, making an add-in GPU essential for professional 3D work. They do not include an integrated neural processing unit (NPU) for AI tasks.

The Ryzen 9000 Series features two classes of processors: the standard Ryzen models, denoted by an X suffix and the Ryzen X3D variants which feature AMD 3D V-Cache technology.

There are four standard Ryzen 9000 Series models. The top-end AMD Ryzen 9 9950X has 16-cores, 32 threads, and a max boost frequency of 5.7 GHz.

The other processors have slightly lower clock speeds and fewer cores but are considerably cheaper. The AMD Ryzen 5 9600X, for example, has six cores and boosts to 5.4 GHz, but is less than half the price of the Ryzen 9 9950X. The full line up can be seen in the table right.

The Ryzen X3D lineup features significantly larger L3 caches than standard Ryzen processors. This increased cache size gives the CPU fast access to more data, instead of having to fetch the data from slower system memory (RAM). The flagship 16-core AMD Ryzen 9 9950X3D features 128 MB of cache, but the 3D V-Cache is limited to one of its two CCDs.

All the new ‘Zen 5’ Ryzen 9000 chips are more power efficient than the previous generation ‘Zen 4’ Ryzen 7000 Series. This has allowed AMD to reduce the Thermal Design Power (TDP) on a few of the standard Ryzen models. The top-end 16-core processors — the Ryzen 9 9950X and Ryzen 9 9950X3D — both have a TDP of 170W and a peak power of 230W. All the others are rated at 65W or 120W.

Intel Core Ultra 200S "Arrow Lake"

Intel Core Ultra marks a departure from Intel's traditional generational numbering system (e.g., 14th Gen).

But the Intel Core Ultra 200S (codenamed Arrow Lake) is not just an exercise in branding. It marks a major change in the design of its desktop processors, moving to a tiled architecture (Intel’s term for chiplets).

Like 14th Gen Intel Core, the Intel Core Ultra 200S features two different types of cores: Performance-cores (P-cores) for primary tasks and slower Efficient-cores (E-cores) for background processing.

In a bold move, Intel has dropped Hyper-Threading from the design, a feature that was previously supported on the P-cores in 14th Gen Intel Core.

As with AMD, DDR5 memory is standard, with a maximum capacity of 192 GB. However, the data rate doesn't vary as much depending on the amount installed. For instance, with 64 GB, the speed reaches 5,600 MT/s, while with 128 GB, it only drops slightly to 4,800 MT/s.

The integrated GPU has been improved, but most 3D workflows will still require an add-in GPU. For AI tasks, there’s an integrated NPU, but at 13 TOPS it’s not powerful enough to meet Microsoft’s requirements for Windows Copilot+.

The processor family includes three main models. At the high end, the Intel Core Ultra 9 285K features 8 P-cores and 16 E-cores. The P-cores operate at a base frequency of 3.7 GHz, with a maximum Turbo of 5.7 GHz. It has a base power of 125 W and draws 250 W at peak.

At the entry level, the Intel Core Ultra 5 245K offers 6 P-cores and 8 E-cores, with a base frequency of 4.2 GHz and a max Turbo of 5.2 GHz. It has a base power of 125 W, rising to 159 W under Turbo. The full lineup is detailed on the previous page.

Test setup

For our testing, we focused on the flagship models from each standard processor family: the AMD Ryzen 9 9950X (16 cores, 32 threads) and the Intel Core Ultra 9 285K (8 P-cores, 16 E-cores). We also included the AMD Ryzen 7 9800X3D (8 cores, 16 threads) which, at the time, was the most powerful Ryzen 9000 Series chip with 3D V-Cache. At CES a few weeks ago, AMD announced the 12-core Ryzen 9 9900X3D and the 16-core Ryzen 9 9950X3D, but these 3D V-Cache processors were not available for testing.

The AMD Ryzen 9 9950X and Intel Core Ultra 9 285K were housed in very similar workstations — both from specialist UK manufacturer, Scan. Apart from the CPUs and motherboards, the other specifications were almost identical.

The AMD Ryzen 7 9800X3D workstation came from Armari. All machines featured different GPUs, but our tests focused on CPU processing, so this shouldn’t impact performance. The full specs can be seen below. Testing was done on Windows 11 Pro 26100 with power plan set to high-performance.

AMD Ryzen 9 9950X

Scan 3XS GWP-A1-R32 workstation

See review on page WS16

• Motherboard: Asus Pro Art B650 Creator

• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)

• GPU: Nvidia RTX 4500 Ada Gen

• Storage: 2TB Corsair MP700 Pro SSD

• Cooling: Corsair Nautilus 360 cooler

• PSU: Corsair RM750e PSU

Intel Core Ultra 9 285K

Scan 3XS GWP-A1-C24 workstation

See review on page WS16

• Motherboard: Asus Prime Z890-P

• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)

• GPU: Nvidia RTX 2000 Ada Gen

• Storage: 2TB Corsair MP700 Pro SSD

• Cooling: Corsair Nautilus 360 cooler

• PSU: Corsair RM750e PSU

AMD Ryzen 7 9800X3D

Armari Magnetar MM16R9 workstation

See review on page WS20

• Motherboard: ASUS ROG Strix AMD B650E-I Gaming WiFi Mini-ITX

• Memory: 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO (5,600 MT/s)

• GPU: AMD Radeon Pro W7500

• Storage: 2TB Samsung 990 Pro SSD

• Cooling: Armari SPX-A6815NGR 280mm AIO+NF-P14 redex

• PSU: Thermaltake Toughpower SFX 850W ATX3.0 Gen5

On test

We tested all three workstations with a range of real-world applications used in AEC and product development. Where data existed, and was relevant, we also compared performance figures from older generation processors. This included mainstream models (12th, 13th and 14th Gen Intel Core, AMD Ryzen 7000) and high-end workstation processors (AMD Ryzen 7000 Threadripper and Threadripper Pro, Intel Xeon W-3400, and 4th Gen Intel Xeon Scalable).

Data for AMD Threadripper came from standard and overclocked workstations. In the benchmark charts, 90°C refers to the max temp set in the Armari Magnetar M64T7 'Level 1' PBO (see Workstation Special Report 2024 - tinyurl.com/WSR24), while 900W refers to the power draw of the processor in the Comino Grando workstation (see page WS22).

The comparisons aren't entirely apples-to-apples — older machines were tested with different versions of Windows 11, as well as varying memory, storage, and cooling configurations. However, the results should still provide a solid approximation of relative performance.

CAD and BIM

Dassault Systèmes Solidworks (CAD) and Autodesk Revit (BIM) are bread and butter tools for designers, engineers, and architects. For the most part, these applications are single-threaded, although some processes are able to utilise a few CPU cores. Ray-trace rendering stands out as the exception, taking full advantage of all available cores.

In the Autodesk Revit 2025 RFO v3 benchmark the AMD Ryzen 9 9950X came out top in the model creation and export tests, in which Intel has traditionally held an edge. The AMD Ryzen 7 9800X3D performed respectably, but with its slightly lower maximum frequency, lagged behind a little.

In Solidworks 2022, things were much more even. In the rebuild, convert, and simulate subtests of the SPECapc benchmark, there was little difference between the AMD Ryzen 9 9950X and the Intel Core Ultra 9 285K. However, in the mass properties and boolean subtests, the Ryzen 9 9950X pulled ahead, only to be outshone by the Ryzen 7 9800X3D. Despite the 9800X3D having a lower clock speed, it looks like the additional cache provides a significant performance boost.

But how do the new chips compare to older generation processors? Our data shows that while there are improvements, the performance gains are not huge.

‘‘ AMD's cache-rich Ryzen 9000 X3D variants look particularly appealing for select workflows where having superfast access to a large pool of frequently used data makes them shine ’’

AMD's performance increases ranged from 7% to 22% generation-on-generation, although the Ryzen 9 9950X was 9% slower in the mass properties test. Intel's improvements were more modest, with a maximum gain of just 9%. In fact, in three tests, the Intel Core Ultra 9 285K was up to 19% slower than its predecessor.

Looking back over the last three years, Intel’s progress appears incremental. Compared to the Intel Core i9-12900K, launched in late 2021, the Intel Core Ultra 9 285K is only up to 26% faster.

Ray trace rendering

Ray trace rendering is exceptionally multi-threaded, so can take full advantage of all CPU cores. Unsurprisingly, the processors with the highest core counts — the AMD Ryzen 9 9950X (16 cores) and Intel Core Ultra 9 285K (24 cores) — topped our tests.

The Ryzen 9 9950X outperformed the Intel Core Ultra 9 285K in several benchmarks, delivering faster performance in V-Ray (17%), CoronaRender (15%), and KeyShot (11%). Intel’s decision to drop Hyper-Threading may have contributed to this performance gap, though Intel still claimed a slight lead in Cinebench, with a 5% advantage.

Gen-on-gen improvements were modest. Intel showed gains of 4% to 17%, while AMD delivered between 5% and 11% faster performance.

We also ran stress tests to assess sustained performance. In several hours of rendering in V-Ray, the Ryzen 9 9950X held steady at 4.91 GHz, while the Ryzen 7 9800X3D maintained 5.17 GHz. Meanwhile, the P-cores of the Intel Core Ultra 9 285K reached 4.86 GHz.

Power consumption is another important consideration. The Ryzen 9 9950X drew 200W, whereas the Intel Core Ultra 9 285K peaked at 240W — slightly lower than its predecessor, 14th Gen Intel Core.

Since rendering scales exceptionally well with higher core counts, the best performance is achieved with high-end workstation processors like AMD Ryzen Threadripper Pro.

Simulation (FEA and CFD)

Engineering simulation encompasses Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD), both of which are extremely demanding computationally.

FEA and CFD utilise a variety of solvers, each with unique behaviours, and performance can vary depending on the dataset. Generally, CFD scales well with additional CPU cores, allowing studies to solve significantly faster. Moreover, CFD performance benefits greatly from higher memory bandwidth, making these factors critical for optimal results.

For our testing, we selected three workloads from the SPECworkstation 3.1 benchmark and one from SPECworkstation 4.0. The CFD tests included Rodinia (representing compressible flow), WPCcfd (modelling combustion and turbulence), and OpenFoam with XiFoam solver. For FEA, we used CalculiX, which simulates the internal temperature of a jet engine turbine.

The Intel Core Ultra 9 285K claimed the top spot in all the tests. The AMD Ryzen 9 9950X followed in second place, except in the OpenFoam benchmark, where it was outperformed by the Ryzen 7 9800X3D — likely due to the additional cache.

Of course, for those deeply invested in simulation, high-end workstation processors, such as AMD Ryzen Threadripper Pro and Intel Xeon offer a significant advantage, thanks to their higher core counts and superior memory bandwidth. For a deeper dive, check out last year’s workstation special report: www.tinyurl.com/WSR24.

Reality modelling

Reality modelling is becoming prevalent in the AEC sector. Raw data captured by drones (photographs / video) and terrestrial laser scanners must be turned into point clouds and reality meshes — a process that is very computationally intensive.

We tested a range of workflows using three popular tools: Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.

As many of the workflows in these applications are multi-threaded, we were surprised that the 8-core AMD Ryzen 9800X3D outperformed the 16-core AMD Ryzen 9950X and 24-core Intel Core Ultra 9 285K in several tests. This is likely due to its significantly larger cache, but possibly also down to its single CCD design, which houses all 8 CPU cores.

In contrast, the 16-core AMD Ryzen 9950X, which is made up of two 8-core CCDs, may suffer from latency when cores from different CCDs need to communicate with each other. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare.

The other point worth noting is the impact of memory speed. In some workflows we experienced dramatically faster computation with faster memory. Simultaneous multi-threading (SMT) also had an impact on performance.

We explore reality modelling in much more detail on page WS33, where you will also find all the benchmark results.

The verdict

For the past few years, Intel and AMD have been battling it out in the mainstream processor market. Intel has traditionally dominated single threaded and lightly threaded workflows like CAD, BIM, and reality modelling, while AMD has been the go-to choice for multithreaded rendering.

But the landscape is shifting. With the ‘Zen 5’ AMD Ryzen 9000 Series, AMD is starting to take the lead in areas where Intel once ruled supreme. For instance, in Solidworks CAD, AMD is delivering solid generation-on-generation performance improvements, while Intel appears to be stagnating. In fact, some workflows show the Intel Core Ultra 200S trailing behind older 14th Gen Intel Core processors.

That said, for most workstation users, AMD’s rising stock won’t mean much unless major OEMs like Dell, HP, and Lenovo start giving Ryzen the same level of attention they’ve devoted to AMD Ryzen Threadripper Pro. A lot will depend on AMD releasing Pro variants of the Ryzen 9000 Series to meet the needs of enterprise users.

For everyone else relying on specialist manufacturers, workstations with the latest Intel and AMD chips are already available. This includes AMD's cache-rich Ryzen 9000 X3D variants, which look particularly appealing for select workflows where superfast access to a large pool of frequently used data really counts.

Scan 3XS

GWP-A1-C24 & GWP-A1-R32

Between these two attractive desktops, Scan has most bases covered in AEC and product development, from CAD/BIM and visualisation to simulation, reality modelling and beyond, writes Greg Corke

Specialist workstation manufacturers like Scan often stand out from the major OEMs, as they offer the very latest desktop processors. The Scan 3XS GWP-A1-C24 features the new “Arrow Lake” Intel Core Ultra 200S Series (with the C in the model name standing for Core) while the Scan 3XS GWP-A1-R32 offers the ‘Zen 5’ AMD Ryzen 9000 Series (R for Ryzen). In contrast, Dell, HP, and Lenovo currently rely on older 14th Gen Intel Core processors, while their AMD options are mostly limited to the high-end Ryzen Threadripper Pro 7000 Series.

Both Intel and AMD machines share several Corsair branded components, including 64 GB (2 x 32GB) of Corsair Vengeance DDR5 5600 memory, a 2TB Corsair MP700 Pro SSD, a Corsair Nautilus 360 cooler, and Corsair RM750e PSU.

The 2TB NVMe SSD delivers blazingly fast read and write speeds combined with solid endurance. In CrystalDiskMark it delivered 12,390 MB/sec sequential read and 11,723 MB/sec sequential write. Its endurance makes it well-suited for intensive read / write workflows, such as reality modelling. Corsair backs this up with a five-year warranty or a rated lifespan of 1,400 total terabytes written (TBW), whichever comes first.

GWP-A1-C24

■ Intel Core Ultra 9 285K processor

(3.7 GHz, 5.7 GHz boost) (24 cores - 8 P-cores + 16 E-cores)

■ Nvidia RTX 2000 Ada Generation GPU (16 GB)

■ 64 GB (2 x 32 GB)

Corsair Vengeance DDR5 5,600 memory

■ 2TB Corsair MP700 Pro SSD

■ Asus Prime Z890-P motherboard

■ Corsair Nautilus 360 cooler

■ Corsair RM750e Power Supply Unit

■ Fractal North Charcoal Mesh case (215 x 469 x 447mm)

■ Microsoft Windows 11 Pro 64-bit

■ 3 Year warranty – 1st Year Onsite, 2nd and 3rd Year RTB (Parts and Labour)

■ £2,350 (Ex VAT)

■ scan.co.uk/3xs

Intel Core Ultra 200S Series

Our Intel-based Scan 3XS GWP-A1-C24 workstation was equipped with a top-end Intel Core Ultra 9 285K CPU and an entry-level workstation GPU, the Nvidia RTX 2000 Ada Generation. This hardware pairing is well-suited to CAD, BIM, and entry-level viz workflows, as well as CPU-intensive tasks like point cloud processing, photogrammetry, and simulation.

The all-in-one (AIO) liquid CPU cooler features a 360mm radiator, bolted on to the top of the chassis. Cooled by three low-duty RS120 fans, both machines run cool and remain very quiet, even when rendering for hours.

The Nvidia RTX 2000 Ada Generation is a compact, low-profile, dual-slot GPU featuring four mini DisplayPort connectors. With a conservative power rating of 70W, it gets all its power directly from the Asus Prime Z890-P motherboard's PCIe slot. Despite its modest power requirements, it delivered impressive graphics performance in CAD and BIM, easily handling all our 3D modelling tests in Solidworks and Revit. Its 16 GB of onboard memory also allows it to work with fairly large visualisation datasets.

In real-time visualisation software, don't expect silky smooth navigation with large models at high resolutions. However, 3D performance is still acceptable. In Chaos Enscape, for example, we got 14 frames per second (FPS) at 4K with our demanding school project test scene.

From the exterior, both Scan workstations share the same sleek design, housed in the Fractal North Charcoal Mesh case with dark walnut wood strips on the front. While wood accents in PC cases can sometimes feel contrived, this ATX Mid-Tower strikes an excellent balance between form and function. Its elegant, minimalist aesthetic enhances the overall visual appeal without compromising airflow. Behind the wooden façade, an integrated mesh ensures efficient ventilation, with air drawn in through the front and expelled through the rear and top. Adding to its refined look, the case has understated brass buttons and ports on the top, including two USB 3.0 Type-A, one USB 3.1 Gen2 Type-C, as well as power button, mic, and HD audio ports.

The downside of the chassis is that it’s relatively large, measuring 215 x 469 x 447mm (W x H x D). However, this spacious design makes accessing internal components incredibly easy, a convenience further enhanced by Scan’s excellent trademark cable management.

Outputting ray trace renders in KeyShot, V-Ray and Twinmotion was noticeably slower compared to more powerful Nvidia RTX GPUs. That said, it's still a viable solution if you're willing to wait. In Twinmotion, for example, it cranked out five 4K path traced renders in 1,100 seconds, just under twice as long as it took the Nvidia RTX 4500 Ada Generation in Scan's Ryzen-based workstation.

In CPU workflows, the Intel Core Ultra 9 285K delivered mixed results. While it outperformed the AMD Ryzen 9 9950X in a few specific workflows (as detailed in our in-depth article on page WS10), the performance gains over 14th Gen Intel Core processors, which launched in Q4 2023, were relatively minor. In fact, in some workflows, it even lagged behind Intel's previous generation flagship mainstream CPU, the Intel Core i9-14900K.

One advantage that Scan’s Intel workstation holds over its AMD counterpart is in memory performance. Both machines were configured with 64 GB of DDR5 RAM running at 5,600 MT/s. However, when memory is increased to 128 GB, filling all four DIMM slots, the memory clock speed must be reduced to keep everything stable. On the Intel system, it only drops a little, down to 4,800 MT/s, but on the AMD system, it’s much more significant, falling to 3,600 MT/s. This reduction can have a notable impact on performance in memoryintensive tasks like simulation and reality modelling, giving the Intel system an edge when working with large datasets in select workflows.

AMD Ryzen 9000 Series

Our AMD-based Scan 3XS GWP-A1-R32 workstation is set up more for visualisation, with an Nvidia RTX 4500 Ada Generation GPU (24 GB) paired with the top-end AMD Ryzen 9 9950X CPU.

The full-length, double-height Nvidia GPU is rated at 210W, so it must draw some of its power directly from the 750W power supply unit (PSU). It comes with four DisplayPort connectors.

The RTX 4500 Ada marks a big step up from the RTX 2000 Ada. In the real-time viz software Enscape, we got double the frame rates at 4K resolution (28.70 FPS), and more than double the performance in most of our ray trace rendering tests. With 50% more onboard memory, you also get more headroom for larger viz datasets.

The CPU performance of the system was equally impressive. While the previous generation Ryzen 7000 Series enjoyed a lead over its Intel equivalent in multi-threaded ray tracing, it lagged behind in single threaded workflows. But with the Ryzen 9000 Series that’s no longer the case. AMD has significantly improved single threaded performance gen-on-gen, while Intel’s performance has stagnated a little. It means AMD is now sometimes the preferred option in a wider variety of workflows.

But the Scan 3XS GWP-A1-R32 is not without fault. In select reality modelling workflows, it was significantly slower than its Intel counterpart. We expect this is down to its dual chiplet (CCD) design, something we explore in more detail on page WS10.

Also, as mentioned earlier, those that need more system memory will have to accept significantly slower memory speeds on AMD than with Intel. This can impact performance dramatically. When aligning images in Capturing Reality, for instance, going from 64 GB (5,600 MT/s) to 128 GB (3,600 MT/s) on the AMD workstation saw computation times increase by as much as 64%, and in the OpenFoam CFD simulation, performance dropped by 31%.

Conclusion

Both Scan 3XS workstations are impressive desktops, offering excellent performance housed in aesthetically pleasing chassis. The choice between Intel and AMD depends on the specific demands of your workflows.

In terms of CAD and BIM, performance is similar across both platforms, as shown in our benchmark charts on page WS25. For visualisation, AMD holds a slight edge, but this may not be a deciding factor if your visualisation tasks rely more on GPU computation than CPU computation.

When it comes to reality modelling, Intel may not always have the lead, but it offers more consistent performance across various tasks. Additionally, Intel’s support for faster memory at larger capacities could make a significant difference. With 128 GB, Intel can achieve noticeably faster memory speeds, which translates into potential performance gains in certain workflows.

Ultimately, both machines are fully customisable, allowing you to select the components that best match your specific needs. Whether you prioritise raw processing power, memory speed, or GPU performance, Scan offers flexibility to tailor the workstation to your requirements.


Review: Boxx Apexx A3

This compact desktop with liquid-cooled 'Zen 5' AMD Ryzen 9000 Series processor and Nvidia RTX 5000 Ada Generation GPU is a powerhouse for design viz, writes Greg Corke

In the world of workstations, Boxx is somewhat unique. Through its extensive reseller channel, it has the global reach of a major workstation OEM, but the technical agility of a specialist manufacturer.

Liquid cooling is standard across many of its workstations, and you can always expect to see the latest processors soon after launch. And there’s a tonne to choose from. In addition to workstation staples like Intel Core, Intel Xeon, AMD Ryzen Threadripper Pro, and (to a lesser extent) AMD Ryzen, Boxx goes one step further with AMD Epyc, a dual socket processor typically reserved for servers. The company also stands out for its diverse range of workstation form factors, including desktops, rack-mounted systems, and high-density datacentre solutions.

Boxx played a key role in AMD’s revival in the workstation market, debuting the AMD Ryzen-powered Apexx A3 in 2019.

The latest version of this desktop workstation may look identical on the outside, but inside, the new 'Zen 5' AMD Ryzen 9000 Series chip is a different beast entirely. 2019's 'Zen 2' AMD Ryzen 3000 Series stood out for its multithreaded performance but fell short of Intel in single-threaded tasks critical for CAD and BIM. Now, as we explore in our 'Intel vs. AMD' article on page WS10, AMD has the edge in a much broader range of workflows.


The chassis offers several practical features. The front mesh panel easily clips off, providing access to a customer-replaceable filter. The front I/O panel is angled upward for convenient access to the two USB 3.2 Gen 2 (Type-A) ports and one USB 3.2 Gen 2 (Type-C) port. Around the back, you'll find an array of additional ports, including two USB 4.0 (Type-C), three USB 3.2 Gen 1 (Type-A), and five USB 3.2 Gen 2 (Type-A).

For connectivity, there’s fast 802.11ab Wi-Fi 7 with rearmounted antennas, although most users — particularly those working with data from a central server — are likely to utilise the 5 Gigabit Ethernet LAN for maximum speed and reliability.

Product spec

■ AMD Ryzen 9 9950X processor (4.3 GHz, 5.7 GHz boost) (16-cores, 32 threads)

■ 96 GB (2 x 48 GB) Crucial DDR5 memory (5,600 MT/s)

■ 2TB Crucial T705 NVMe PCIe 5.0 SSD

■ Asrock X870E Taichi motherboard

■ Nvidia RTX 5000 Ada Generation GPU (32 GB)

■ Asetek 624T-M2 240mm All-in-One liquid cooler

■ Boxx Apexx A3 case (174 x 388 x 452mm)

■ Microsoft Windows 11 Pro

■ 3 Year standard warranty

■ USD $8,918 (starting at $3,655)

■ www.boxx.com www.boxx-tech.co.uk

The chassis layout is different to most other workstations of this type, with the motherboard flipped through 180 degrees, leaving the rear I/O ports at the bottom and the GPUs at the top — upside down.

To save space, the power supply sits almost directly in front of the CPU. This wouldn’t be possible in an air-cooled system, because the heat sink would get in the way. But with the Boxx Apexx A3, the CPU is liquid cooled, and the compact All-in-one (AIO) Asetek closed loop cooler draws heat away to a 240mm radiator, located at the front of the machine.

The Boxx Apexx A3 is crafted from aircraft-grade aluminium, delivering a level of strength that surpasses the off-the-shelf cases used by many custom manufacturers. Considering it can host up to two high-end GPUs, it's surprisingly compact, coming in at 174 x 388 x 452mm, significantly smaller than the other AMD Ryzen 9000-based workstation in this report, the Scan 3XS GWP-A1-R32, which we review on page WS16.

Our test machine came with the 16-core AMD Ryzen 9 9950X, the flagship model in the standard Ryzen 9000 Series. Partnered with the massively powerful Nvidia RTX 5000 Ada Generation GPU, this workstation screams design visualisation. And it has some serious clout.

Our test machine’s focus on GPU computation means the AMD Ryzen 9 9950X’s 16 cores may spend a good amount of time under utilised. Opting for a CPU with fewer cores could save you some cash, though it would come with a slight reduction in single-core frequency.

As it stands, the system delivers impressive CPU benchmark scores across CAD, BIM, ray-trace rendering, and reality modelling. However, in some tests, it was narrowly outperformed by the 3XS GWP-A1-R32, and when pushing all 16 cores to their limits in V-Ray, fan noise was a little more noticeable (although certainly not loud).

Boxx configured our test machine with 96 GB of Crucial DDR5 memory, carefully chosen to deliver the maximum capacity with the fastest performance. With two 48 GB modules, it can run at 5,600 MT/s. Anything above that, up to a maximum of 192 GB, would see speeds drop significantly.

Rounding out the specs is a 2TB Crucial T705 SSD, the fastest PCIe 5.0 drive we’ve tested. It delivered exceptional sequential read/write speeds in CrystalDiskMark, clocking in at an impressive 14,506 MB/s read and 12,573 MB/s write — outpacing the Corsair MP700 Pro in the Scan 3XS workstation. However, it’s rated for 1,200 total terabytes written (TBW), giving it slightly lower endurance.

The Asrock X870E Taichi motherboard includes room for a second SSD, while the chassis features two hard disk drive (HDD) cradles at the top. However, with modern SSDs offering outstanding price and performance, these cradles are likely to remain empty for most users.

In Twinmotion, the Nvidia RTX 5000 Ada Generation delivered five 4K path traced renders in a mere 342 seconds, and in Lumion four FHD ray trace renders in 70 seconds. That's more than three times quicker than an Nvidia RTX 2000 Ada. And with 32 GB of onboard memory to play with, the GPU can handle very complex scenes.

The verdict

The Boxx Apexx A3 is a top-tier compact workstation, fully customisable and built to order, allowing users to select the perfect combination of processors to meet their needs. Among specialist system builders, Boxx is probably the closest competitor to the major workstation OEMs like Dell, HP, and Lenovo. However, none of these major players have yet released an AMD Ryzen 9000-based workstation — and given past trends, there’s no guarantee they will. This gives Boxx a particular appeal, especially for companies seeking a globally available product powered by the latest ‘Zen 5’ AMD Ryzen processors.

Allies and Morrison

Architype

Aros Architects

Augustus Ham Consulting

B + R Architects

Cagni Williams

Coffey Architects

Corstorphine & Wright

Cowan Architects

Cullinan Studio

DRDH

Eight Versa

Elevate Everywhere

5plus

Flanagan Lawrence

Focus on Design

Gillespies

GRID Architects

Grimshaw

Hawkins/Brown

HLM Architects

Hopkins Architects

Hutchinson & Partners

John McAslan & Partners

Lyndon Goode Architects

Makower Architects

Marek Wojciechowski Architects

Morris + Company

PLP Architecture

Plowman Craven

Rolfe Judd

shedkm

Studio Egret West

Via

Weston Williamson + Partners

Why are so many organisations adopting our virtual workstations?

High performance

with dedicated NVIDIA GPUs and AMD Threadripper CPUs we provide workstation level performance for the most demanding users

More sustainable

our vdesks are 62% less carbon impactful than a similarly specified physical workstation

More secure

centralised virtual resources are easier to secure than dispersed infrastructure

More efficient

deployment and management is vastly quicker than with a physical estate

More agile

our customers are better able to deal with incoming challenges and opportunities

Cost accessible

we are much less expensive and much more transparent than other VDI alternatives

www.inevidesk.com info@inevidesk.com

Review: Armari Magnetar MM16R9

This compact desktop workstation, built around the gamer-favourite Ryzen X3D processor, is also a near perfect fit for reality modelling, writes Greg Corke

The first AMD Ryzen processor to feature AMD 3D V-Cache technology launched in 2022. Since then, newer versions have become the processors of choice for hardcore gamers. This is largely thanks to the additional cache — a superfast type of memory connected directly to the CPU — which can dramatically boost performance in certain 3D games. As we discovered in our 2023 review of the ‘Zen 4’ AMD Ryzen 9 7950X3D, that applies to some professional workflows too.

With the launch of the ‘Zen 5’ AMD Ryzen 9000 Series, AMD has opted for a staggered release of its X3D variants. The 8-core AMD Ryzen 7 9800X3D was first out the blocks in November 2024. Now the 12-core AMD Ryzen 9 9900X3D and 16-core AMD Ryzen 9 9950X3D have just been announced and should be available soon.

UK manufacturer Armari has been a long-term advocate of AMD Ryzen processors and has now built a brand-new workstation featuring the AMD Ryzen 9800X3D. With a 120W TDP, rising to 162W under heavy loads, it's relatively easy to keep cool. This allows Armari to fit the chip into a compact Coolermaster MasterBox NR200P Mini ITX case, which saves valuable desk space. Even though the components are crammed in a little, the 280mm AIO CPU cooler ensures the system runs quiet. While the fans spin up during all-core tasks like rendering in V-Ray, the noise is perfectly acceptable for an office environment.

But this is not a workstation you'd buy for visualisation, or indeed CAD or BIM. For those workflows, the non-X3D AMD Ryzen 9000 Series processors would be a better fit, and are also available as options for this machine. For instance, the 16-core AMD Ryzen 9 9950X has a significantly higher single-core frequency to accelerate CAD, and double the number of cores to cut render times in half.

The X3D chips shine in tasks that benefit from fast access to large amounts of cache. As we detail in our dedicated article on page WS34, reality modelling is one such workflow. In fact, in many scenarios, Armari's compact desktop workstation not only outperformed the 16-core AMD Ryzen 9 9950X processor but the 96-core AMD Ryzen Threadripper Pro 7995WX as well.

Product spec

■ AMD Ryzen 7 9800X3D processor (4.7 GHz, 5.2 GHz boost) (8-cores, 16 threads)

■ 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO memory (5,600 MT/s)

■ 2TB Samsung 990 Pro M.2 NVMe PCIe4.0 SSD

■ ASUS ROG Strix AMD B650E-I Gaming Wifi Mini-ITX Motherboard

■ AMD Radeon Pro W7500 GPU (8 GB)

■ Armari SPXA6815NGR 280mm AIO+NF-P14 redex CPU Cooler

■ Coolermaster MasterBox NR200P Mini ITX case (376 x 185 x 292mm)

■ Microsoft Windows 11 Pro

■ Armari 3 Year basic warranty

■ £1,999 (Ex VAT)

■ www.armari.com

The test machine came with 96 GB (2 x 48 GB) of Corsair Vengeance DDR5-6000C30 EXPO memory, running at 5,600 MT/s. While the system supports up to 192 GB, anything above 96 GB requires the memory speed to be lowered to 3,600 MT/s. This reduction can lead to noticeable performance drops in some memory-intensive reality modelling workflows.

Armari, true to form, is continually looking for ways to improve performance. Just before we finalised this review, the company sent an updated machine with 48 GB (2 x 24 GB) of faster 8,000 MT/s G.Skill Trident Z5 Royal Neo DDR5 memory, paired with the newer Asus ROG Strix B850-I ITX motherboard.


In our tests, this new setup provided a slight (1-2%) performance boost in some reality modelling tasks. However, since our most demanding test requires 60 GB of system memory and 48 GB is the current maximum capacity for this memory speed, it's hard to fully gauge its potential. For the time being, the higher-speed memory feels like a step toward future improvements, pending the release of larger-capacity kits.


However, the workstation is not quite the perfect match for mainstream reality modelling. While the AMD Radeon Pro W7500 GPU is great for CAD, it's incompatible with select workflows in Leica Cyclone 3DR and RealityCapture from Epic Games: those accelerated by Nvidia CUDA. Here, the Nvidia RTX A1000, an equivalent 8 GB GPU, would be the better option.

Having more cache probably isn't the only reason why the 9800X3D processor excels. Because the chip is made from a single CCD, there's less latency between cores. We delve into this further in our reality modelling article on page WS34. It will be fascinating to see how the 12-core and 16-core X3D chips compare.

If we were to look for faults, it would be that the machine’s top panel connections are USB-A only, which is too slow to transfer terabytes of reality capture data quickly, but Armari tells us that production systems will have a front USB-C Gen 2x2 port.



Overall, Armari has done it again with another outstanding workstation. It’s not just powerful — it’s compact and portable as well — which could be a big draw for construction firms that need to process reality data while still on site.

‘‘
The Armari Magnetar MM16R9 is not just powerful — it’s compact and portable — which could be a big draw for construction firms that need to process reality data on site

Review: Comino Grando workstation RM

This desktop behemoth blurs the boundaries between workstation and server and, with an innovative liquid cooling system, delivers performance like no other, writes Greg Corke

Firing up a Comino Grando feels more like prepping for take-off than powering on a typical desktop workstation. Pressing both front buttons activates the bespoke liquid cooling system, which then runs a series of checks, before booting into Windows or Linux.

The cooling system is an impressive feat of precision engineering. Comino manufactures its own high-performance water blocks out of copper and stainless steel. And these are not just for the CPU. Unlike most liquid cooled workstations, the Comino Grando takes care of the GPUs and motherboard VRMs as well. It’s only the system memory, and storage that are cooled by air in the traditional way.

Not surprisingly, this workstation is all about ultimate performance. This is exemplified by the 96-core AMD Threadripper Pro 7995WX processor, which Comino pushes to the extreme. While most air-cooled Threadripper Pro workstations keep the processor at its stock 350W, Comino cranks it up to an astonishing 900W+, with the CPU settling around 800W during sustained multi-core workloads. That’s a lot of electricity to burn.

The result, however, is truly astonishing all-core frequencies. During rendering in Chaos V-Ray, the 96-core chip initially hit an incredible 4.80 GHz, before landing on a still-impressive 4.50 GHz. Even some workstations with fewer cores struggle to maintain these all-core speeds.

Not surprisingly, the test scores were off the chart. In the V-Ray 5.0 benchmark, it delivered an astonishing score of 145,785 — a massive 42% faster than an air-cooled Lenovo ThinkStation P8, with the same 96-core processor.

The machine also delivered outstanding results in our simulation benchmarks. Outside of dual Intel Xeon Platinum workstations — which Comino also offers — it’s hard to imagine anything else coming close to its performance.

As you might expect, running a machine like this generates some serious heat. Forget portable heaters — rendering genuinely became the best way to warm up my office on a chilly winter morning.

While the CPU delivers a significant performance boost, the liquid cooled GPUs run at standard speeds. Comino replaces the original air coolers with a slim water block, a complex process that's explained well in this video (www.tinyurl.com/Comino-RTX).

Product spec

■ AMD Ryzen Threadripper Pro 7995WX processor (2.5 GHz, 5.1 GHz boost) (96-cores, 192 threads)

■ 256 GB (8 x 32 GB) Kingston RDIMM DDR5 6,400 MHz CL32 REG ECC memory

■ 2TB Gigabyte Aorus M.2 NVMe 2280 (PCIe 4.0) SSD

■ Asus Pro WS WRX90E-SAGE motherboard

■ 2 x Nvidia RTX 6000 Ada Gen GPU (48 GB)

■ Comino custom liquid cooling system

■ Comino Grando workstation chassis (439 x 681 x 177mm)

■ Microsoft Windows 11 Pro

■ 2-year warranty (upgradable to up to 5 years with on-site support)

■ £31,515 (Ex VAT)

4 x 4TB M.2 SSD RAID 0 upgrade

■ £33,515 (Ex VAT)

With 2 x AMD Radeon Pro W7900 instead of 2 x Nvidia RTX 6000 Ada

■ £24,460 (Ex VAT)

■ www.grando.ai

This design allows each GPU to occupy just a single PCIe slot on the motherboard, compared to the two or three slots required by the same high-end GPU in a typical workstation. Normally, modifying a GPU like this would void the manufacturer’s warranty. However, Comino offers a full two years, covering the entire workstation, with the option to extend up to five.

The machine can accommodate up to seven GPUs — though these are limited to mid-range models. For high-end professional GPUs, support is capped at four cards, although Comino offers a similar server with more power and noisier fans that can host more. Options include the Nvidia RTX 6000 Ada Generation (48 GB), Nvidia L40S (48 GB), Nvidia H100 (80 GB), Nvidia A100 (80 GB), and AMD Radeon Pro W7900 (48 GB). Keen observers will notice many of these GPUs are designed for compute workloads, such as engineering simulation and AI. Most notably, a few are passively cooled, designed for datacentre servers, so are not available in traditional workstations.

For consumer GPUs, the system can handle up to two cards, such as the Nvidia GeForce RTX 4090 (24 GB) and AMD Radeon 7900 XTX (24 GB). Comino is also working on a solution for 2 x Nvidia H200 (141 GB) or 2 x Nvidia GeForce RTX 5090 (32 GB).

Our test machine was equipped with a pair of Nvidia RTX 6000 Ada Generation GPUs. These absolutely ripped through our GPU rendering benchmarks, easily setting new records in tests that are multi-GPU aware. Compared to a single Nvidia RTX 6000 Ada GPU, V-Ray was around twice as fast. The gains in other apps were less dramatic, with an 83% uplift in Cinebench and 65% in KeyShot.

Liquid magic

Comino’s liquid cooling system is custom-built, featuring bespoke water blocks and a 450ml coolant reservoir with integrated pumps.

Coolant flows through high-quality flexible rubber tubing, passing from component to component before completing the loop via a large 360mm radiator located at the rear of the machine.


Positioned alongside this radiator are three (yes, three) 1,000W SFX-L PSUs.

The system is cooled by a trio of Noctua 140mm 3,000 RPM fans, which drive airflow from front to back. Cleverly, the motherboard is housed in the front section of the chassis, ensuring the coldest air passes over the RAM and other air-cooled components.

Users are given control over the fans. Using the buttons on the front of the machine, one can select from max performance, normal, silent, or super silent temperature profiles — each responding exactly how you'd expect in terms of acoustics.

All of our testing was conducted in 'normal mode,' where the noise level was consistent and acceptable. The 'max performance' mode, however, was much louder — better suited to a server room — and didn't even show a significant performance boost. On the other hand, 'super silent' mode delivered an impressively quiet experience, with only a 3.5% drop in V-Ray rendering performance.

The front LED text display is where tech enthusiasts can geek out, cycling through metrics like flow rates, fan and pump RPM, and the temperatures of the air, coolant, and components. For a deeper dive, the Comino Monitoring System offers access to this data and more via a web browser.

Maintenance and upgrades

With such an advanced cooling system, the Comino Grando can feel a bit intimidating. Thankfully, end user maintenance is surprisingly straightforward.

Swapping out a GPU, while more intricate than on a standard desktop, isn't as challenging as you might expect. For upgrades, Comino can ship replacement GPUs pre-fitted with custom cooling blocks and rubber tubes. For our testing, Comino supplied a pair of AMD Radeon Pro W7900s. Despite their single-slot design, these GPUs are deceptively heavy, weighing in at 1.9 kg each — significantly more than the 1.2 kg of a stock W7900 fitted with its standard cooler. It's easy to see why a crossbar bracket is essential to keep these hefty GPUs securely in place.

Installing the GPU is straightforward: plug it into the PCIe slot, secure it with screws as usual, and then plumb in the cooling system. The twist-and-click Quick Disconnect Couplings (QDCs) make this process easy, with colour-coded blue and red connectors for cold and warm lines. Thanks to Comino's no-spill design, the tubes come pre-filled with coolant, so there's no need to add more after installation. (If you're curious about the details, Comino provides a step-by-step guide in this video - www.tinyurl.com/Comino-GPU). Naturally, coolant evaporates over time and will need occasional topping up. Comino recommends checking levels every three months, which is easy to do via the reservoir window on the front panel. A bottle of coolant is included in the box for convenience.

As for memory and storage, they're air-cooled, making their maintenance no different from a standard desktop workstation.

Our system was equipped with 256 GB of high-speed Kingston DDR5 6,400 MHz CL32 REG ECC memory, operating at 4,800 MT/s. All eight slots were fully populated with 32 GB modules, maximising the Threadripper Pro processor’s 8-channel memory architecture for peak performance. For workloads requiring massive datasets, the system can support up to an impressive 2 TB of memory.

The included SSD is a standard 2TB Gigabyte AORUS Gen4, occupying one of the four onboard M.2 slots. However, there's plenty of scope for performance upgrades. One standout option is the HighPoint SSD7505 PCIe 4.0 x16 4-channel NVMe RAID controller, which can be configured with four 4TB PNY XLR8 CS3140 M.2 SSDs in RAID 0 for blisteringly fast read/write speeds.

Rack ‘em up

The Comino Grando blurs the boundaries between workstation and server. It’s versatile enough to fit neatly under a desk or mount in a 4U rack space (rack-mount kit included).

What’s more, with the Asus Pro WS WRX90E-SAGE

SE motherboard’s integrated BMC chip with IPMI (Intelligent Platform Management Interface) for out-ofband management, the Comino Grando can be fully configured as a remote workstation.

The verdict

The Comino Grando is, without question, the fastest workstation we've ever tested, leaving air-cooled Threadripper Pro machines from major OEMs in its wake. The only close contender we've seen is the Armari Magnetar M64T7, equipped with a liquid-cooled 64-core AMD Ryzen Threadripper 7980X CPU (see our 2024 Workstation Special Report: www.tinyurl.com/WSR24). We wonder how Armari's 96-core equivalent would compare.

‘‘ With support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop

Perhaps its most compelling feature, however, is its GPU flexibility. The Nvidia RTX 6000 Ada Generation is a staple for high-end workstations, but very few can handle four — a feat typically reserved for dual Xeons. What’s more, with support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop.

However, you’ll need some serious muscle to lift it into the rack — it’s by far the heaviest workstation we’ve ever encountered. It will come as no surprise to learn that the system arrived on a wooden pallet.


While the Comino Grando’s multicore performance is remarkable, what truly sets it apart from others is that it can operate in near-silence. The sheer level of engineering that has gone into this system is extraordinary, with superb build quality and meticulous attention to detail.

Of course, this level of performance doesn’t come cheap, but it can be seen as a smart investment in sectors like aerospace and automotive, where even the smallest optimisations really count.

Surprisingly, the Comino Grando isn’t significantly more expensive than an air-cooled equivalent. For instance, on dell.co.uk, a Dell Precision 7875 with similar specs currently costs just £1,700 less. However, two GPUs is the maximum and it would almost certainly come second in highly multi-threaded workloads.

● 6 The browser-based Comino Monitoring System offers real-time access to swathes of operating data

● 7 Comino CPU water block, made out of copper and stainless steel

● 8 The rear of the workstation actually looks like the front

Workstations for arch viz

What’s the best GPU or CPU for arch viz? Greg Corke tests a variety of processors in six of the most popular tools – D5 Render, Twinmotion, Lumion, Chaos Enscape, Chaos V-Ray, and Chaos Corona

When it comes to arch viz, everyone dreams of a silky-smooth viewport and the ability to render final quality images and videos in seconds. However, such performance often comes with a hefty price tag. Many professionals are left wondering: is the added cost truly justified?

To help answer this question, we put some of the latest workstation hardware through its paces using a variety of popular arch viz tools. Before diving into the detailed benchmark results on the following pages, here are some key considerations to keep in mind.

GPU processing

Real-time viz software like Enscape, Lumion, D5 Render, and Twinmotion relies on the GPU to do the heavy lifting. These tools offer instant, high-quality visuals directly in the viewport, while also allowing top-tier images and videos to be rendered in mere seconds or minutes.

The latest releases support hardware ray tracing, a feature built into modern GPUs from Nvidia, AMD and Intel. While ray tracing demands significantly more computational power than traditional rasterisation, it delivers unparalleled realism in lighting and reflections.

GPU performance in these tools is typically evaluated in two ways: Frames Per Second (FPS) and render time. FPS measures viewport interactivity — higher numbers mean smoother navigation and a better user experience — while render time, expressed in seconds, determines how quickly final outputs are generated. Both metrics are crucial, and we’ve used them to benchmark various software in this article.

For your own projects, aim for a minimum of 24–30 FPS for a smooth and interactive viewport experience. Performance gains above this threshold tend to have diminishing returns, although we expect hardcore gamers might disagree. Display resolution is another critical factor. If your GPU struggles to maintain performance, reducing resolution from 4K to FHD can deliver a significant boost.
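If it helps to put numbers on that, the sketch below converts frame rates into frame times and compares pixel counts at 4K and FHD (purely illustrative arithmetic, not a benchmark):

# Frame time at a given frame rate, plus the pixel-count ratio between 4K and
# FHD, which is why dropping display resolution gives such a large boost.
for fps in (24, 30, 60):
    print(f"{fps} FPS = {1000 / fps:.1f} ms per frame")

pixels_4k = 3840 * 2160
pixels_fhd = 1920 * 1080
print(f"4K pushes {pixels_4k / pixels_fhd:.0f}x the pixels of FHD")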

It’s worth noting that while some arch viz software supports multiple GPUs, this only affects render times rather than viewport performance. Tools like V-Ray, for instance, scale exceptionally well

DLSS - using AI to boost performance in real-time

Nvidia DLSS (Deep Learning Super Sampling) is a suite of AI-driven technologies designed to significantly enhance 3D performance (frame rates) in real-time visualisation tools.

Applications including Chaos Enscape, Chaos Vantage and D5 Render have integrated DLSS to deliver smoother experiences, and to make it possible to navigate larger scenes on the same GPU hardware.

DLSS comprises three distinct technologies, all powered by the Tensor Cores in Nvidia RTX GPUs:

Super Resolution: This boosts performance by using AI to render higher-resolution frames from lower-resolution inputs. For instance, it enables 4K-quality output while the GPU processes frames at FHD resolution, saving core GPU resources without compromising visual fidelity.

DLSS Ray Reconstruction: This enhances image quality by using AI to generate additional pixels for intensive ray-traced scenes.

Frame Generation: This increases performance by using AI to interpolate and generate extra frames. While DLSS 3.0 could generate one additional frame, DLSS 4.0, exclusive to Nvidia's upcoming Blackwell-based GPUs, can generate up to three frames between traditionally rendered ones. When these three technologies work together, an astonishing 15 out of every 16 pixels can be AI-generated. DLSS 4.0 will soon be supported in D5 Render, promising transformative performance gains. Nvidia has demonstrated that it can elevate frame rates from 22 FPS (without DLSS 4.0) to an incredible 87 FPS.
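The 15-out-of-16 figure is easy to sanity check. Assuming Super Resolution's performance mode renders at a quarter of the output resolution and Frame Generation supplies three AI frames for every traditionally rendered one (our reading of Nvidia's claim, not an official breakdown), the arithmetic works out as follows:

# Fraction of displayed pixels that are traditionally rendered, assuming:
#  - Super Resolution performance mode: frames rendered at 1/4 of output resolution
#  - Frame Generation: 1 rendered frame in every 4 displayed frames
rendered_per_frame = 1 / 4
rendered_frames = 1 / 4
traditionally_rendered = rendered_per_frame * rendered_frames
print(f"Traditionally rendered pixels: {traditionally_rendered:.4f}")  # 0.0625 = 1/16
print(f"AI-generated pixels: {1 - traditionally_rendered:.4f}")        # 0.9375 = 15/16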

D5 Render

Chaos Corona is a CPU-only renderer designed for arch viz. It scales well with more CPU cores. But the 96-core Threadripper Pro 7995WX, despite having six times the cores of the 16-core AMD Ryzen 9 9950X and achieving an overclocked all-core frequency of 4.87 GHz, delivers only three times the performance.

Chaos V-Ray is a versatile photorealistic renderer, renowned for its realism. It includes both a CPU and GPU renderer. The CPU renderer supports the most features and can handle the largest datasets, as it relies on system memory. Performance scales efficiently with additional cores.

V-Ray GPU works with Nvidia GPUs. It is often faster than the CPU renderer, and can make very effective use of multiple GPUs, with performance scaling extremely well. However, the finite onboard memory can restrict the size of scenes. To address this, V-Ray GPU includes several memory-saving features, such as offloading textures to system memory. It also offers a hybrid mode where both the CPU and GPU work together, optimising performance across both processors.


GPU memory

The amount of memory a GPU has is often more critical than its processing power. In some software, running out of GPU memory can cause crashes or significantly slow down performance. This happens because the GPU is forced to borrow system memory from the workstation via the PCIe bus, which is much slower than accessing its onboard memory.

The impact of insufficient GPU memory depends on your workflow. For final renders, it might simply mean waiting longer for images or videos to finish processing. However, in a real-time viewport, running out of memory can make navigation nearly impossible. In extreme cases, we’ve seen frame rates plummet to 1-2 FPS, rendering the scene completely unworkable.

Fortunately, GPU memory and processing power usually scale together. Professional workstation GPUs, such as Nvidia RTX or AMD Radeon Pro, generally offer significantly more memory than their consumer-grade counterparts like Nvidia GeForce or AMD Radeon. This is especially noticeable at the lower end of the market. For example, the Nvidia RTX 2000 Ada, a 70W GPU, is equipped with 16 GB of onboard memory.

For real-time visualisation workflows, we recommend a minimum of 16 GB, though 12 GB can suffice for laptops. Anything less could require compromises, such as simplifying scenes and textures, reducing display resolution, or lowering the quality of exported renders.
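If you want to see where your own GPU sits before loading a heavy scene, the Nvidia driver can be queried directly. Below is a minimal sketch using the nvidia-ml-py (pynvml) Python bindings; it assumes an Nvidia GPU and is included purely as an illustration:

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # values reported in bytes
print(f"GPU:   {pynvml.nvmlDeviceGetName(handle)}")
print(f"Total: {mem.total / 1024**3:.1f} GB")
print(f"Used:  {mem.used / 1024**3:.1f} GB")
print(f"Free:  {mem.free / 1024**3:.1f} GB")
pynvml.nvmlShutdown()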

CPU processing

CPU rendering was once the standard for most arch viz workflows, but today it often plays second fiddle to GPU rendering. That said, it remains critically important for certain software. Chaos Corona, a specialist tool for arch viz, relies entirely on the CPU for rendering. Meanwhile, Chaos V-Ray gives users the flexibility to choose between CPU and GPU. Some still favour the CPU renderer for its greater control and the ability to harness significantly more memory when paired with the right workstation hardware. For example, while the top-tier Nvidia RTX 6000 Ada Generation GPU comes with an impressive 48 GB of on-board memory, a Threadripper Pro workstation can support up to 1 TB or more of system memory.

CPU renderers scale exceptionally well with core count — the more cores your processor has, the faster your renders. However, as core counts increase, frequencies drop, so doubling the cores won’t necessarily cut render times in half. Take the 96-core Threadripper Pro 7995WX, for example. It’s a powerhouse that’s the ultimate dream for arch viz specialists. But does it justify its price tag—nearly 20 times that of the 16-core AMD Ryzen 9950X—for rendering performance that’s only 3 to 4 times faster? As arch viz becomes more prevalent across AEC firms, that’s a tough call for many.
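One way to frame that decision is performance per pound. Using only the ratios quoted above (roughly 20 times the price for 3 to 4 times the rendering speed), a quick calculation shows how much value the flagship chip gives up:

# Value comparison using the approximate ratios quoted in the text:
# the 96-core chip costs ~20x more and renders ~3-4x faster.
price_ratio = 20.0
for perf_ratio in (3.0, 4.0):
    value = perf_ratio / price_ratio
    print(f"{perf_ratio:.0f}x faster at 20x the price -> "
          f"{value:.2f}x the performance per pound ({1 / value:.1f}x worse value)")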

[Benchmark charts: Chaos V-Ray 6.0 CPU render, V-Ray 6.0 GPU RTX render, and Chaos Corona 10 benchmark scene]


D5 Render is a real-time arch viz tool, based on Unreal Engine. Its ray tracing technology is built on DXR, requiring a GPU with dedicated ray-tracing cores from Nvidia, Intel, or AMD.

The software uses Nvidia DLSS, allowing Nvidia GPUs to boost real time performance. Multiple GPUs are not supported.

The benchmark uses 4 GB of GPU memory, so all GPUs are compared on raw performance alone. Real time scores are capped at 60 FPS.

Enscape is a very popular tool for real-time arch viz. It supports hardware ray tracing, and also Nvidia DLSS, but not the latest version.

For testing we used an older version of Enscape (3.3). This had some incompatibility issues with AMD GPUs, so we limited our testing to Nvidia. Enscape 4.2, the latest release, supports AMD. We focused on real time performance, rather than time to render. The gap between the RTX 5000 Ada and RTX 6000 Ada was not that big. Our dataset uses 11 GB of GPU memory, which caused the software to crash when using the Nvidia RTX A1000 (8GB).

Lumion is a real-time arch viz tool known for its exterior scenes in context with nature.

The software will benefit from a GPU with hardware ray tracing, but those with older GPUs can still render with rasterisation.

Our test scene uses 11 GB of GPU memory, which meant the 8 GB GPUs struggled. The Nvidia RTX A1000 slowed down, while the AMD Radeon Pro W7500 & W7600 caused crashes. The high-end AMD GPUs did OK against Nvidia, but slowed down in ray tracing.


[Benchmark charts: D5 Render (Snowdon Tower Revit sample project), Enscape 3.3 (school sample project), Lumion Pro 2024]

GPUs for Stable Diffusion

Architects and designers are increasingly using text-to-image AI models like Stable Diffusion. Processing is often pushed to the cloud, but the GPU in your workstation may already be perfectly capable, writes Greg Corke

Stable Diffusion is a powerful text-to-image AI model that generates stunning photorealistic images based on textual descriptions. Its versatility, control and precision have made it a popular tool in industries such as architecture and product design.

One of its key benefits is its ability to enhance the conceptual design phase. Architects and product designers can quickly generate hundreds of images, allowing them to explore different design ideas and styles in a fraction of the time it would take to do manually.

Stable Diffusion relies on two main processes: inferencing and training. Most architects and designers will primarily engage with inferencing, the process of generating images from text prompts. This can be computationally demanding, requiring significant GPU power. Training is even more resource intensive. It involves creating a custom diffusion model, which can be tailored to match a specific architectural style, client preference, product type, or brand. Training is often handled by a single expert within a firm.

There are several architecture-specific tools built on top of Stable Diffusion or other AI models, which run in a browser or handle the computation in the cloud. Examples include AI Visualizer (for Archicad, SketchUp, and Vectorworks), Veras, LookX AI, and CrXaI AI Image Generator. While these tools simplify access to the technology, and there are many different ways to run vanilla Stable Diffusion in the cloud, many architects still prefer to keep things local.

Running Stable Diffusion on a workstation offers more options for customisation, guarantees control over sensitive IP, and can turn out cheaper in the long run. Furthermore, if your team already uses real-time viz software, the chances are they already have a GPU powerful enough to handle Stable Diffusion’s computational demands. While computational power is essential for Stable Diffusion, GPU memory plays an equally important role. Memory usage in Stable Diffusion is impacted by several factors, including:

• Resolution: higher res images (e.g. 1,024 x 1,024 pixels) demand more memory compared to lower res (e.g. 512 x 512).

• Batch size: Generating more images in parallel can decrease time per image, but uses more memory.

• Version: Newer versions of Stable Diffusion (e.g. SDXL) use more memory.

• Control: Using tools to enhance the model’s functionality, such as LoRAs for fine tuning or ControlNet for additional inputs, can add to the memory footprint.

For inferencing to be most efficient, the entire model must fit into GPU memory. When GPU memory becomes full, operations may still run, but at significantly reduced speeds, as the GPU must then borrow from the workstation's system memory over the PCIe bus.
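For those who want to experiment locally, running Stable Diffusion from Python takes only a few lines. The sketch below uses the open-source Hugging Face diffusers library; the model ID, prompt, resolution and batch size are illustrative, and the peak-memory readout shows how those settings drive the GPU memory footprint discussed above:

import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Load an SD 1.5 class checkpoint in half precision and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Higher resolution or a larger batch (num_images_per_prompt) increases memory use
images = pipe(
    "concept sketch of a timber-framed community library at golden hour",
    height=512, width=512, num_images_per_prompt=4,
).images

for i, img in enumerate(images):
    img.save(f"concept_{i}.png")

print(f"Peak GPU memory used: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GB")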

This is where professional GPUs can benefit some workflows, as they typically have more memory than consumer GPUs. For instance, the Nvidia RTX A4000 professional GPU is roughly the equivalent of the Nvidia GeForce RTX 3070, but it comes with 16 GB of GPU memory compared to 8 GB on the RTX 3070.

Inferencing performance

To evaluate GPU performance for Stable Diffusion inferencing, we used the UL Procyon AI Image Generation Benchmark. The benchmark supports multiple inference engines, including Intel OpenVino, Nvidia TensorRT, and ONNX runtime with DirectML. For this article, we focused on Nvidia professional GPUs and the Nvidia TensorRT engine.

This benchmark includes two tests utilising different versions of the Stable Diffusion model — Stable Diffusion 1.5, which generates images at 512 x 512 resolution, and Stable Diffusion XL (SDXL), which generates images at 1,024 x 1,024. The SD 1.5 test uses 4.6 GB of GPU memory, while the SDXL test uses 9.8 GB. In both tests, the UL Procyon benchmark generates a set of 16 images, divided into batches. SD 1.5 uses a batch size of 4, while SDXL uses a batch size of 1. A higher benchmark score indicates better GPU performance. To provide more insight into real-world performance, the benchmark also reports the average image generation speed, measured in seconds per image. All results can be seen in the charts below.

Key takeaways

It’s no surprise that performance goes up as you move up the range of GPUs, although there are diminishing returns at the higher-end. In the SD 1.5 test, even the RTX A1000 delivers an image every 11.7 secs, which some will find acceptable.

Stable Diffusion architectural images courtesy of James Gray. Image above and right generated with ModelMakerXL, a custom trained LoRA by Ismail Seleit. Recently, Gray has been exploring Flux, a next-generation image and video generator. He recommends a 24 GB GPU. Follow Gray @ www.linkedin.com/in/james-gray-bim

The RTX 4000 Ada Generation GPU looks to be a solid choice for Stable Diffusion, especially as it comes with 20 GB of GPU memory. The Nvidia RTX 6000 Ada Generation (48 GB) is around 2.3 times faster, but considering it costs almost six times more (£6,300 vs £1,066) it will be hard to justify on those performance metrics alone.

The real benefits of the higher end cards are most likely to be found in workflows where you can exploit the extra memory. This includes handling larger batch sizes, running more complex models, and, of course, speeding up training.

Perhaps the most revealing test result comes from SDXL, as it shows what can happen when you run out of GPU memory. The RTX A1000 still delivers results, but its performance slows drastically. Although it's just 2 GB short of the 10 GB needed for the test, it takes a staggering 13 minutes to generate a single image — 70 times slower than the RTX 6000 Ada.

[Chart: Procyon AI Image Generation Benchmark results]

Of course, AI image generation technology is moving at an incredible pace. Tools including Flux, Runway and Sora can even be used to generate video, which demands even more from the GPU.

When considering what GPU to buy now, it’s essential to plan for the future.

Stable Diffusion image courtesy of eddie mauro www.instagram.com/eddiemauro.design

Z by HP Boost: GPUs on demand

With HP’s new solution, workstation GPUs become shareable across the network, helping firms get the most out of their IT resources for AI training and inferencing, writes Greg Corke

Boosting your workstation’s performance by tapping into shared resources is nothing new.

Distributed rendering, through applications like V-Ray and KeyShot, allows users to harness idle networked computers for faster processing.

Z by HP Boost is a new take on this idea, with a specific focus on AI. The technology is primarily designed to deliver GPU power to those who need it, on demand, by giving remote access to idle GPUs on the network. In short, it can turn a standard PC or laptop into a powerful GPU-accelerated workstation, extending the reach of AI to a much wider audience and dramatically reducing processing time.

HP is primarily pitching Z by HP Boost at data scientists and AI developers for training or fine-tuning large language models (LLMs). However, Z by HP Boost is also well suited to inferencing, the application of the trained model to generate new results.

“We want companies, like architects, to both create their AI, fine tune their models, create custom models — those are big projects — but also create with AI, with the diffusion programs,” says Jim Nottingham, SVP & division president personal systems advanced compute and solutions, HP.

AI image generation

Z by HP Boost can be used for many different AI workflows. It currently supports PyTorch and TensorFlow, two of the most widely used open-source deep learning frameworks.

In AEC and product development, one of the most interesting use cases is Stable Diffusion, an AI image generator that can be used for early-stage design ideation. The AI model can be used to rapidly generate images – photorealistic or stylised – from a simple prompt. It can also serve as a shortcut for traditional rendering, generating visuals based on an existing composition, such as a sketch or a screen grab of a CAD or BIM model.

To get the most out of Stable Diffusion, design and architecture firms often fine-tune or create custom models tailored to specific styles. Training models is highly computationally demanding and is typically handled by a specialist within the firm. This person may already have access to a powerful workstation, equipped with multiple high-end GPUs. However, if that's not the case, or they need more GPU power to accelerate a process that can take days, Z by HP Boost could be used to do the heavy lifting.

Inferencing in Stable Diffusion, where a pre-trained AI model is used to generate new images, is applicable to a much wider audience. While less computationally demanding than training, inferencing still needs serious GPU power, especially in terms of GPU memory, which often goes beyond what's available in the GPUs typically used for CAD and BIM modelling in tools like Solidworks and Autodesk Revit.

Having access to GPUs on-demand is particularly valuable, given that Stable Diffusion is used mainly during the early design phases, meaning high-powered GPUs might be massively underutilised for most of the year.

Even if a local entry-level GPU does work with Stable Diffusion, generating an image can take several minutes (as demonstrated on page WS30). But with a high-end GPU like the Nvidia RTX 6000 Ada Generation this can be done in seconds. During the early design phase — especially when collaborating with clients and project teams — this speed advantage can be hugely beneficial, allowing for rapid iteration. Z by HP Boost makes it easier for more users to tap into this power without needing to equip everyone with a supercharged workstation.

How Z by HP Boost works

Firms can designate any number of GPUs on their network to be shared. This could be four high-performance Nvidia RTX 6000 Ada Generation or Nvidia A800 GPUs in a dedicated high-end workstation like the HP Z8 Fury G5, or a single Nvidia RTX 2000 Ada Generation GPU in a compact system like the HP Z2 Mini G9. The only requirement is that the GPUs are housed in an HP Z Workstation.

Firms may choose to set aside one or more dedicated GPU workstations as a shared resource. Alternatively, to make the most out of the sometimes-vast numbers of GPUs scattered throughout an organisation, they can add GPUs from the workstations of end users. Those GPUs don’t have to be completely idle; they can also be shared when the owner is only doing light tasks. As Nvidia GPUs and drivers are good at multitasking it’s feasible, in theory, to model in CAD or BIM while someone else sets the same GPU to work in Stable Diffusion.

The Z by HP Boost software is installed on both the client and host machines. There are no restrictions on the client device — the PC or laptop just needs to run either Windows or Linux.

It’s very easy to configure a GPU for sharing. On the host device, simply select a GPU and assign it to the appropriate pool. Once that’s done, anyone with the necessary permissions has access. All they must do is choose the GPU from a list and select the application they want to run.

Once they’ve grabbed a GPU, it’s essentially theirs until they release it. However, the owner of the host machine always retains the right to reclaim the GPU if they want.

To ensure resources are used efficiently, GPUs are automatically returned to the pool after a period of inactivity. The default timeout is four hours, but this can be changed. A warning will appear on the client device before the GPU is reallocated.

If the host workstation has multiple GPUs inside, each can be assigned to a different user. Currently, it’s one remote user per GPU, but there are plans for GPU slicing, which will enable multiple users to share the power of a single GPU simultaneously.

IT managers can configure the sharing however they want and, as Nottingham explains, this process can be aided by monitoring how resources are used. “We would like to work with customers to profile what’s their typical usage and design their sharing pool based on that usage.

“And maybe they can change it over time – they set up this one for night-time, they set up this one for daytime, or this one for Wednesdays – there’s going to be a lot of flexibility that we deliver.”

Nottingham believes Z by HP Boost is most interesting when multiple workstations are connected – many to many. “You just create a fabric, so you have more [GPUs] available, all the time.” This, he says, gives you a big performance boost without having to double your fleet.

Z by HP Boost doesn’t have to be used locally. As many AI workflows are not sensitive to latency, it also works well remotely. However, the ideal solution for remote working, as Nottingham explains, is with remote graphics software HP Anyware. In theory, an architect or engineer could remote into an HP Z2 Mini in the office for bread-and-butter CAD or BIM work, then use Z by HP Boost to access an idle GPU on the same network to run Stable Diffusion.

Our thoughts

Z by HP Boost offers an interesting proposition for design and engineering firms looking to roll out AI tools like Stable Diffusion to a broader audience.

By providing on-demand access to high-performance workstation GPUs, it allows firms to efficiently maximise their resources, utilising hardware that might otherwise sit idle under a desk, especially at night.

The alternative is equipping everyone with high-end GPUs or running everything in the cloud. Both options are expensive and cloud can also bring unpredictable costs.

Keeping things local also helps firms protect intellectual property, keeping proprietary designs, and any AI models trained on them, behind the firewall.

Additionally, Z by HP Boost enables teams to pool resources for AI development, offering a flexible solution for demanding projects.

Although Z by HP Boost is currently focused on AI, we see no reason why it couldn’t be used for other GPU-intensive tasks, such as reality modelling, simulation, or rendering. The absence of ‘AI’ in the product’s name may even suggest that this broader use is on the roadmap.

However, this would require buy-in from each software developer and could become complicated for workflows typically handled by dedicated clusters with fast interconnects.

It will be very interesting to see how this technology develops.

HP presenting Z by HP Boost at the HP Imagine event last year, showing a remote Nvidia RTX 6000 Ada Generation GPU accelerating Stable Diffusion

Workstations for reality modelling

What’s the best CPU, memory and GPU to process complex reality modelling data? Greg Corke tests some of the latest workstation technology in Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture from Epic Games

Reality modelling is one of the most computationally demanding workflows in Architecture, Engineering and Construction (AEC). It involves the creation of digital models of physical assets by processing vast quantities of captured real-world data using technologies including laser scanning, photogrammetry and simultaneous localisation and mapping (SLAM).

Reality modelling has numerous applications, including providing context for new buildings or infrastructure, forming the basis for retrofit projects, or comparing “as-built” with “as-designed” for construction verification.

While there’s a growing trend to process captured data in the cloud, desktop processing remains the preferred method. Cloud can be costly, and uploading vast amounts of data — sometimes terabytes — is a significant challenge, especially when working from remote construction sites with poor connectivity.

Processing reality capture data can take hours, making it essential to select the right workstation hardware. In this article, we explore the best processor, memory and GPU options for reality modelling, testing a variety of workflows in three of the most popular tools — Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.

Most AEC firms have tight hardware budgets and it’s easy to spend money in the wrong places, sometimes for very little gain. In some cases, investing in more expensive equipment can even slow you down!

Technology on test

Armari Magnetar workstation with AMD Ryzen 7 9800X3D CPU (8 cores), 96 GB DDR5 5,600 MT/s memory and AMD Radeon Pro W7500 GPU (see page WS20).

Scan 3XS workstation with AMD Ryzen 9 9950X CPU (16 cores), 64 GB DDR5 5,600 MT/s memory or 128 GB DDR5 3,600 MT/s memory and Nvidia RTX 4500 Ada Generation GPU (see page WS16).

HP Z6 G5A workstation with AMD Threadripper Pro 7975WX CPU (32 cores), 128 GB DDR5 5,200 MT/s memory and Nvidia RTX A6000 GPU (see www.aecmag.com/workstations/review-hp-z6-g5-a).

Scan 3XS workstation with Intel Core Ultra 9 285K CPU (8 P-cores and 16 E-cores), 64 GB DDR5 5,600 MT/s memory and Nvidia RTX 2000 Ada Generation GPU (see page WS17).

Comino Grando workstation with overclocked AMD Threadripper Pro 7995WX CPU (96 cores), 256 GB DDR5 4,800 MT/s memory and Nvidia RTX 6000 Ada Generation GPU (see page WS22).

We also tested a range of GPUs, including the Nvidia RTX A1000 (8 GB), RTX A4000 (16 GB), RTX 2000 Ada (16 GB), RTX 4000 Ada (20 GB), RTX 4500 Ada (24 GB) and RTX 6000 Ada (48 GB).

Leica Cyclone 3DR

Leica Cyclone 3DR is a multi-purpose reality modelling tool, used for inspection, modelling and meshing. Processing is done predominantly on the CPU and several tasks can take advantage of multiple CPU cores. Some tasks, including the use of machine learning for point cloud classification, are also optimised for GPU.

For testing we focused on four workflows: scan-to-mesh, analysis, AI classification and conversion.

Scan-to-mesh: Compared to point clouds, textured mesh models are much easier to understand and easier to share, not least because the files are much smaller.

In our ‘scan-to-mesh’ test, we record the time it takes to convert a dataset of a building — captured with a Leica BLK 360 scanner — into a photorealistic mesh model. The dataset comprises a point cloud with 129 million points and accompanying images.

The process is multi-threaded but, as with many reality capture workflows, more CPU cores does not necessarily mean faster results. Other critical factors that affect processing time include the amount of CPU cache (a high-speed on-chip memory for frequently accessed data), memory speed, and AMD Simultaneous Multithreading (SMT), a technology similar to Intel Hyper-Threading that enables a single physical core to execute multiple threads simultaneously. During testing, system memory usage peaked at 25 GB, which meant all test machines had plenty of capacity.

The most unexpected outcome was the 8-core AMD Ryzen 7 9800X3D outperforming all its competitors. It not only beat the 16-core AMD Ryzen 9 9950X and Intel Core Ultra 9 285K (8 performance cores and 16 efficient cores), but the multicore behemoths as well. With the 96-core AMD Threadripper Pro 7995WX it appears to be a classic case of “too many cooks [cores] spoil the broth”!

The AMD Ryzen 7 9800X3D is a specialised consumer CPU, widely considered to be the fastest processor for 3D gaming thanks to its advanced 3D V-Cache technology. It boasts 96 MB of L3 cache, significantly more than comparative processors. This allows the CPU to access frequently-used data quicker, rather than having to pull it from slower system memory (RAM).

But we expect that having lots of fast cache is not the only reason why the AMD Ryzen 7 9800X3D comes out top in our scan-to-mesh test – after all, Threadripper Pro is also well loaded, with the top-end 7995WX having 384 MB of L3 cache which is spread across its 96 cores. To achieve a high number of cores, modern processors are made up of multiple chiplets or CCDs. In the world of AMD, each CCD typically has 8 cores, so a 16-core processor has two CCDs, a 32-core processor has four CCDs, and so on.

Communication between cores in different CCDs is inherently slower than cores within the same CCD, and since the AMD Ryzen 7 9800X3D is made up of a single CCD that has access to all that L3 cache, we expect this gives it an additional advantage. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare. Both processors feature 128 MB of L3 cache and comprise two CCDs.

Simultaneous Multithreading (SMT) also has an impact on performance. With the AMD Ryzen 9 9950X, for example, disabling SMT in the BIOS cut processing time by as much as 15%. However, it had the opposite effect with the AMD Ryzen 7 9800X3D, increasing processing time by 32%.

Memory speed also has an impact on performance. The AMD Ryzen 9 9950X processor was around 7% slower when configured with 128 GB RAM running at 3,400 MT/sec than it was with 64 GB RAM running at the significantly faster 5,600 MT/sec.

Analysis: In our analysis test we compare a point cloud to a BIM model, recording the time it takes to calculate a colour map that shows the deviations between the two datasets. During testing, system memory usage peaked at 19 GB.

The process is multi-threaded, but certain stages only use a few cores. As with scan-to-mesh, more CPU cores does not necessarily mean faster results, and CPU cache, SMT and memory speed also play an important role. Again, the AMD Ryzen 7 9800X3D bagged first spot, completing the test 16% faster than its closest rival, the Intel Core Ultra 9 285K.

The big shock came from the 16-core AMD Ryzen 9 9950X, which took more than twice as long as the 8-core AMD Ryzen 7 9800X3D to complete the test. The bottleneck here is SMT, as disabling it in the BIOS, so each of the 16 cores only performs one task at a time, slashed the test time from 91 secs to 56 secs.

Getting good performance out of the Threadripper Pro processors required even more tuning. Disabling SMT on its own had a minimal impact, and it was only when the Cyclone 3DR executable was pinned to a single CCD (8 cores, 16 threads) that times came down. But this level of optimisation is probably not practical, not least because all workflows and datasets are different.
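
For readers who want to experiment with this kind of pinning themselves, Windows offers the ‘Set affinity’ option in Task Manager and the start /affinity command. A scripted equivalent, sketched here in Python using the third-party psutil package, might look like the following; the executable path is a placeholder, not the real Cyclone 3DR install location.

import subprocess
import psutil

# Launch the application (placeholder path, for illustration only)
proc = subprocess.Popen([r"C:\path\to\Cyclone3DR.exe"])

# Pin the process to the first 16 logical processors, i.e. one
# 8-core / 16-thread CCD on a Threadripper Pro system
psutil.Process(proc.pid).cpu_affinity(list(range(16)))

Whether this helps will depend on the workflow and dataset, which is why it is hard to recommend as a general optimisation.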

AI classification: Leica Cyclone 3DR features an AI-based auto-classification algorithm designed to ‘intelligently classify’ point cloud data. The machine learning model has been trained on large amounts of terrestrial scan data and comes with several predefined models for classification.

Reality modelling data comes from multiple sources: the Leica BLK ARC autonomous laser scanning module riding steady on the Boston Dynamics Spot robot

The process is built around Nvidia CUDA and therefore requires an Nvidia GPU. However, the CPU is still used heavily throughout the process. We tested a variety of Nvidia RTX professional GPUs using an AMD Ryzen 9 9950X-based workstation with 64 GB of DDR5 memory.

The test records the time it takes to classify a point cloud of a building with 129 million points using the Indoor Construction Site 1.3 machine learning model. During testing, system memory usage peaked at 37 GB and GPU memory usage at a moderate 3 GB.
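
For anyone wanting to capture comparable peak-memory figures on their own hardware, a simple polling script is one way to do it. The sketch below assumes the third-party psutil and pynvml (nvidia-ml-py) packages and an Nvidia GPU; it is purely illustrative and not part of any Leica tooling.

import time
import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first Nvidia GPU in the system

peak_ram, peak_vram = 0, 0
try:
    while True:  # sample once a second until interrupted (Ctrl+C)
        peak_ram = max(peak_ram, psutil.virtual_memory().used)
        peak_vram = max(peak_vram, pynvml.nvmlDeviceGetMemoryInfo(gpu).used)
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak system memory: {peak_ram / 1e9:.1f} GB")
    print(f"Peak GPU memory:    {peak_vram / 1e9:.1f} GB")
    pynvml.nvmlShutdown()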

The big takeaway from our tests is that the CPU does the lion’s share of the processing. The Nvidia RTX GPU is essential, but only contributes modestly to the overall time. Indeed, there was very little difference between most of the Nvidia RTX GPUs and even the entry-level Nvidia RTX A1000 was only 22% slower than the significantly more powerful Nvidia RTX 4500 Ada.

Conversion: This simple test converts a Leica LGSx file into native Cyclone 3DR. The dataset comprises a point cloud of a highway alignment with 594 million points. During testing, system memory usage peaked at 11 GB.

As this process is largely single-threaded it’s all about single-core CPU performance. Here, the Intel Core Ultra 9 285K takes first place, closely followed by the AMD Ryzen 9 9950X in second. With a slightly slower peak frequency the AMD Ryzen 7 9800X3D comes in third. In this case, the larger L3 cache appears to offer no benefit.

The Threadripper Pro 7975WX and Threadripper Pro 7995WX lag behind — not only because they have a lower frequency, but also because they are based on AMD’s older ‘Zen 4’ architecture, and so have lower Instructions Per Clock (IPC).

Leica Cyclone Register 360

Leica Cyclone Register 360 is specifically designed for point cloud registration, the process of aligning and merging multiple point clouds into a single, unified coordinate system.

For testing, we used a 99 GB dataset of the Italian Renaissance-style ‘Breakers’ mansion in Newport, Rhode Island. It includes a total of 39 setups from a Leica RTC360 scanner, around 500 million points and 5K panos. We recorded the time it takes to import and register the data.

The process is multi-threaded, but to ensure stability the software allocates a specific number of threads depending on how much system memory is available. In 64 GB systems, the software allocates five threads while for 96 GB+ systems it’s six.

The Intel Core Ultra 9 285K processor led by some margin, followed by the 16-core AMD Ryzen 9 9950X and 96-core Threadripper Pro 7995WX. Interestingly, this was the one test where the 8-core AMD Ryzen 7 9800X3D was not one of the best performers. However, as the GPU does a small amount of processing, and Leica Cyclone Register 360 has a preference for Nvidia GPUs, this could be attributed to the workstation having the entry-level AMD Radeon Pro W7500 GPU.

Notably, memory speed appears to play a crucial role in performance. The AMD Ryzen 9 9950X, configured with 128 GB of 3,400 MT/sec memory, was able to utilise six threads for the process, but was 20% slower than when configured with 64 GB of faster 5,600 MT/sec memory, which only allocated five threads.

RealityCapture from Epic Games

RealityCapture, developed by Capturing Reality — a subsidiary of Epic Games — is an advanced photogrammetry software designed to create 3D models from photographs and laser scans. Most tasks are accelerated by the CPU, but there are certain workflows that also rely on GPU computation.

Image alignment in RealityCapture refers to the process of analysing and arranging a set of photographs or scans in a 3D space, based on their spatial relationships. This step is foundational in photogrammetry workflows, as it determines the relative positions and orientations of the cameras or devices that captured the input data.

We tested with two datasets scanned by R-E-A-L.iT, Leo Films and Drone Services Canada Inc, both available from the RealityCapture website.

The Habitat 67 Hillside Unreal Engine sample project features 3,199 images totalling 40 GB, 1,242 terrestrial laser scans totalling 90 GB, and uses up 60 GB of system memory during testing.

The Habitat 67 Sample, a subset of the larger dataset, features 458 images totalling 3.5 GB, 72 terrestrial laser scans totalling 3.35 GB, and uses up 13 GB of system memory.

The 32-core Threadripper Pro 7975WX took top spot in the large dataset test, with the AMD Ryzen 9 9950X, AMD Ryzen 7 9800X3D and 96-core AMD Threadripper Pro 7995WX not that far behind. Again, SMT needed to be disabled in the higher core count CPUs to get the best results.

The Habitat 67 Hillside Unreal Engine sample project in RealityCapture from Epic Games

Memory speed appears to have a huge impact on performance. The AMD Ryzen 9 9950X processor was around 40% slower when configured with 128 GB of RAM running at 3,400 MT/sec than it was with 64 GB running at the significantly faster 5,600 MT/sec.

Import laser scan: This process imports a collection of E57 format laser scan data and converts it into a RealityCapture point cloud with the .lsp file extension. Our test used up 13 GB of system memory.

Since this process relies heavily on single-threaded performance, single-core speed is what matters most. The Intel Core Ultra 9 285K comes out on top, followed closely by the AMD Ryzen 9 9950X. With a slightly lower peak frequency, the AMD Ryzen 7 9800X3D takes third place. The Threadripper Pro 7975WX and 7995WX fall behind, not just due to lower clock speeds but also because they’re built on AMD’s older Zen 4 architecture, which has a lower Instructions Per Clock (IPC).

Reconstruction is a very compute-intensive process that involves the creation of a watertight mesh. It uses a combination of CPU and Nvidia GPU, although there’s also a ‘preview mode’ which is CPU only.

For our testing, we used the Habitat 67 Sample dataset at ‘Normal’ level of detail. It used 46 GB of system memory and 2 GB of GPU memory.

With a variety of workstations with different processors and GPUs, it’s hard to pin down exactly which processor is best for this workflow — although the 96-core Threadripper Pro 7995WX workstation with Nvidia RTX 6000 Ada GPU came out top. To provide more clarity on GPUs, we tested a variety of add-in boards in the same AMD Ryzen 9 9950X workstation. There was relatively good performance scaling across the mainstream Nvidia RTX range.

Thoughts on processors / memory

The combination of AMD’s ‘Zen 5’ architecture, fast DDR5 memory, a single chiplet design, and lots of 3D V-Cache looks to make the AMD Ryzen 7 9800X3D processor a very interesting option for a range of reality modelling workflows — especially for those on a budget. The AMD Ryzen 7 9800X3D becomes even more interesting when you consider that it’s widely regarded to be for gamers. The chip is not offered by any of the major workstation OEMs — only specialist system builders like Armari.

Capturing Reality 1.5

However, before you rush out and part with your hard-earned cash, it is important to understand a few things.

1) The AMD Ryzen 7 9800X3D processor currently has a practical maximum memory capacity of 96 GB, if you want fast 5,600 MT/sec memory. This is an important consideration if you work with large datasets. If you run out of memory, the processor will have to swap data out to the SSD, which will likely slow things down considerably.

The AMD Ryzen 7 9800X3D can support up to 192 GB of system memory, but it will need to run at a significantly slower speed (3,600 MT/sec). And as our tests have shown, slower memory can have a big impact on performance.

2) AMD recently announced two additional ‘Zen 5’ 3D V-Cache processors. It will be interesting to see how they compare. The 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D both have slightly more L3 cache (128 MB) than the 8-core Ryzen 7 9800X3D (96 MB). However, they are made up of two separate chiplets (CCDs), so communication between the cores in different CCDs could slow things down.

3) Most of the reality models we used for testing are not that big, with the exception of the Habitat 67 dataset, which we used to test certain aspects of RealityCapture. Larger datasets require more memory. For example, reconstructing the full Habitat 67 RealityCapture dataset on the 96-core Threadripper Pro 7995WX workstation used 228 GB of system memory at peak, out of the 256 GB in the machine, and took more than half a day to process. Workstations with less system memory will likely have to push some of the data into temporary swap space on the SSD. Admittedly, as modern PCIe NVMe SSDs offer very fast read-write performance, this is not necessarily the colossal bottleneck it used to be when you had to swap out data to mechanical Hard Disk Drives (HDDs).

4) Multi-tasking is often important for reality modelling, as the processing of data often involves several different stages from several different sources. At any given point you may need to perform multiple operations at the same time, which can put a massive strain on the workstation. As the AMD Ryzen 7 9800X3D processor has only 8 cores and is effectively limited to 96 GB of fast system memory, if you throw more than one task at the machine at a time things will likely slow down considerably. Meanwhile, Threadripper Pro is much more scalable, as there are processors with 12 to 96 cores and the platform supports up to 2 TB of DDR5-5200 ECC memory.

For a crude multi-tasking test, we performed two operations in parallel — alignment in RealityCapture and meshing in Leica Cyclone 3DR. The Threadripper Pro 7995WX workstation completed both tests in 200 secs, while the AMD Ryzen 7 9800X3D came in second in 238 secs. We expect this lead would grow with larger datasets or more concurrent processing tasks.

In summary, your choice of processor will depend greatly on the size of datasets you work with, and the complexity of your workflows. For lighter tasks, the AMD Ryzen 7 9800X3D looks to be an excellent budget choice, but for more complex projects, especially those that require multi-tasking, Threadripper Pro should deliver a much more flexible and performant platform. Of course, you still need to choose between the different models, which vary in price considerably and, as we have found in some of our tests, fewer cores is sometimes better.

Thoughts on GPUs

Two of our tests — Reconstruction in RealityCapture and AI classification in Leica Cyclone 3DR — rely on Nvidia GPUs. However, because these processes share some of the workload with the CPU, the performance gains from more powerful GPUs are less pronounced compared to entirely GPU-driven tasks like ray trace rendering.

There’s a significant price gap between the Nvidia RTX A1000 (£320) and the Nvidia RTX 6000 Ada Generation (£6,200). For reconstruction in RealityCapture, investing in the higher-end model is probably easier to justify, as our tests showed computation times could be cut in two. However, for AI classification in Leica Cyclone 3DR, the performance gains are much smaller, and there seem to be diminishing returns beyond the Nvidia RTX 2000 Ada Generation.

While larger datasets may deliver more substantial benefits, GPU memory — a key advantage of the higher-end cards — appears to be less crucial.

Point cloud in Leica Cyclone 3DR

ENGINEERED, NOT JUST ASSEMBLED
IT LOOKS LIKE ART AND WORKS LIKE GRANDO
GRANDO – A PRODUCT LINE BY COMINO

Comino Grando

Liquid-Cooled Silent Workstations & High-Performance Multi-GPU Servers

Designed for AI – training, fine tuning, inference, deep learning and more

Boosted in Performance by up to 50% –outperform standard air-cooled machines

Reliable in Operation within premises up to 40°C – stays cool and quiet under demanding conditions

Unique Configurations – Scale up to 8 high-end GPUs (NVIDIA RTX 6000 ADA, H200, RTX 5090)

Optimized with leading AI frameworks and inference tools – Stable Diffusion, Llama, Midjourney, Hugging Face, PyTorch, TensorFlow, Character.AI, QuillBot, DALL-E and more

Engineering as Art

Meticulously selected and engineered components maximize longevity and performance

Controller – the System’s Core Independent, autonomous monitoring ensures constant oversight and stability

Full-Cover Comino CPU Water Block

Cools both CPU and power circuitry for peak performance

Single-Slot Comino GPU Water Blocks

Uniquely designed for top efficiency and dense compute

API Integration

Compatible with modern monitoring tools like Grafana and Zabbix

Comprehensive Sensors

Track temperatures, airflow, coolant level, flow and more for precise analysis

Compact, Modular & Easily Serviced Chassis

Quick access for minimal downtime

* GRANDO systems are compatible with EPYC, Threadripper, Xeon and Xeon W CPUs, NVIDIA RTX A6000, A40, RTX 6000 ADA, L40S, A100, H100, H200, RTX 3090, RTX 4090, RTX 5090, AMD Radeon PRO W7900, Radeon 7900XTX GPUs. ** Server equipped with redundant power supply system for 24/7 stable operation.

Reshuffle spells end for Dell Precision workstation brand

Dell has simplified its product portfolio, with the introduction of three new PC categories – Dell for ‘play, school and work’, Dell Pro for ‘professional-grade productivity’ and Dell Pro Max ‘for maximum performance’.

The rebranding spells an end to the company’s long-standing Precision workstation brand, which will be replaced by Dell Pro Max. It also signals a move away from the term “workstation”. On Dell’s website, “workstation” appears only in fine print, as the company now favours “high-performance, professional-grade PC” when describing Dell Pro Max.

To those outside of Dell, however, Dell Pro Max PCs are unmistakably workstations, with ISV certification and traditional workstation-class components, including AMD Threadripper Pro processors, Nvidia RTX graphics, high-speed storage, and advanced memory.

Dell has also simplified the product tiers within each of the new PC categories. Starting with the Base level, users can upgrade to the Plus tier for more scalable performance or the Premium tier, which Dell describes as delivering the ultimate in mobility and design.

“We want customers to spend their valuable time thinking about workloads they want to run on a PC, the use cases they’re trying to solve a problem for, not what sub brand, not understanding and figuring out our nomenclature, which at times, has been a bit confusing,” said Jeff Clarke, vice chairman and COO, Dell.

To coincide with the rebrand, Dell has introduced two new base-level mobile workstations – the Dell Pro Max 14 and 16 – built around Intel Core Ultra 9 (Series 2) processors and Nvidia RTX GPUs. The full portfolio, with the Plus and Premium tiers, including AMD options, will follow.

■ www.dell.com

Lenovo powers new workstation service

IMSCAD Services has launched WaaS, a ‘Workstation as a Service’ offering, in partnership with Lenovo and Equinix data centres.

The global service comprises private cloud solutions and rentable workstations, on a per user, per month basis. Contracts run from one to 36 months.

According to IMSCAD, the service is up to 40% cheaper than high-end instances from the public cloud, and the workstations perform faster. Users get a 1:1 connection to a dedicated workstation featuring a CPU up to 6.0 GHz and a GPU with up to 24 GB of VRAM.

“Public cloud pricing is far too high when you want to run graphical applications and desktops,” said CEO Adam Jull. “Our new service is backed by incredible Lenovo hardware and the best remoting software from Citrix, Omnissa (formerly VMware Horizon) and TGX to name a few.”

■ www.imscadservices.com

Nvidia unveils ‘Blackwell’ RTX GPUs

Nvidia has unveiled the consumer-focused RTX 50-Series line-up of Blackwell GPUs.

The flagship GeForce RTX 5090 comes with 32 GB of GDDR7 memory, which would suggest that professional Blackwell Nvidia RTX boards, expected to follow soon, could go beyond the current maximum of 48 GB offered by the Nvidia RTX 6000 Ada Generation.

■ www.nvidia.com

HP to launch 18-inch mobile workstation

HP is gearing up for the Spring 2025 launch of its first-ever 18-inch mobile workstation, which has been engineered to provide up to 200W TDP to deliver more power for next-generation discrete graphics.

The laptop will feature ‘massive memory and storage’, will be nearly the same size as a 17” mobile workstation and will be cooled by 3x turbo fans and HP Vaporforce Thermals.

■ www.hp.com/z

Nvidia reveals AI workstation

Nvidia has announced Project Digits, a tiny system designed to allow AI developers and researchers to prototype large AI models on the desktop.

The ‘personal AI supercomputer’ is powered by the GB10 Grace Blackwell, a shrunk-down version of the Arm-based Grace CPU and Blackwell GPU system-on-a-chip (SoC).

■ www.nvidia.com

