AEC Magazine workstation special report Winter 2025


Model behaviour

What’s the best CPU, memory and GPU to process complex reality modelling data?


The integrated GPU comes of age

From desktop to datacentre, could the AMD Ryzen AI Max Pro ‘Strix Halo’ processor change the face of workstations?

JAMES GRAY

Intel vs AMD

Intel Core Ultra vs AMD Ryzen 9000 Series in CAD, BIM, reality modelling, viz and simulation

The AI enigma

Do you need an AI workstation?

+ how to choose a GPU for Stable Diffusion

The AI enigma

AI has quickly been woven into our daily workflows, leaving its mark on nearly every industry. For design, engineering, and architecture firms, the direction in which some software developers are heading raises important questions about future workstation investments, writes Greg Corke

You can’t go anywhere these days without getting a big AI smack in the face. From social media feeds to workplace tools, AI is infiltrating nearly every part of our lives, and it’s only going to increase. But what does this mean for design, engineering, and architecture firms? Specifically, how should they plan their workstation investments to prepare for an AI-driven future?

AI is already here

The first thing to point out is that if you’re into visualisation — using tools like Enscape, Twinmotion, KeyShot, V-Ray, D5 Render or Solidworks Visualize — there’s a good chance your workstation is already AI-capable. Modern GPUs, such as Nvidia RTX and AMD Radeon Pro, are packed with special cores designed for AI tasks.

‘‘ Desktop software isn’t going away anytime soon, so firms could end up paying twice – once for the GPUs in their workstations and again for the GPUs in the cloud ’’

Features such as AI denoising, DLSS (Deep Learning Super Sampling), and more are built into many visualisation tools. This means you’re probably already using AI whether you realise it or not.

It’s not just these tools, however. For concept design, text-to-image AI software like Stable Diffusion can run locally on your workstation (see page WS30). Even in reality modelling apps, like Leica Cyclone 3DR, AI-powered features such as autoclassification are now included, requiring an Nvidia CUDA GPU (see page WS34).

Don’t forget Neural Processing Units (NPUs) – new hardware accelerators designed specifically for AI tasks. These are mainly popping up in laptop processors, as they are energy-efficient so can help extend battery life. Right now, NPUs are mostly used for general AI tasks, such as powering AI assistants or blurring backgrounds during Teams calls, but design software developers are starting to experiment too.

Cloud vs desktop

While AI is making its mark on the desktop, much of its future lies in the cloud. The cloud brings unlimited GPU processing power, which is perfect for handling the massive AI models that are on the horizon. The push for cloud-based development is already in full swing – just ask any software startup in AEC or product development how hard it is to get funded if their software doesn’t run in a browser.

Established players like Dassault Systèmes and Autodesk are also betting big on the cloud. For example, users of CAD software Solidworks can only access new AI features if their data is stored and processed on the Dassault Systèmes 3D Experience Platform. Meanwhile, Autodesk customers will need to upload their data to Autodesk Docs to fully unlock future AI functionality, though some AI inferencing could still be done locally.

While the cloud is essential for some AI workflows, not least because they involve terabytes of centralised data, not every AI calculation needs to be processed off premise. Software developers can choose where to push it. For example, when Graphisoft first launched AI Visualizer, based on Stable Diffusion, the AI processing was done locally on Nvidia GPUs. Given the software worked alongside Archicad, a desktop BIM tool, this made perfect sense. But Graphisoft then chose to shift processing entirely to the cloud, and users must now have a specific license of Archicad to use this feature.

The double-cost dilemma

Desktop software isn’t going away anytime soon. With tools like Revit and Solidworks installed in the millions – plus all the viz tools that work alongside them — workstations with powerful AI-capable GPUs will remain essential for many workflows for years to come. But here’s the issue: firms could end up paying twice — once for the GPUs in their workstations and again for the GPUs in the cloud. Ideally, software developers should give users some flexibility where possible. Adobe provides a great example of this with Photoshop, letting users choose whether to run certain AI features locally or in the cloud. It’s all about what works best for their setup — online or offline. Sure, an entry-level GPU might be slower, but that doesn’t mean you’re stuck with what’s in your machine. With technologies like Z by HP Boost (see page WS32), local workstation resources can even be shared.

But the cloud vs desktop debate is not just about technology. There’s also the issue of intellectual property (IP). Some AEC firms we’ve spoken with won’t touch the cloud for generative AI because of concerns over how their confidential data might be used.

I get why software developers love the cloud — it simplifies everything on a single platform. They don’t have to support a matrix of processors from different vendors. But here’s the problem: that setup leaves perfectly capable AI processors sat idle on the desks of designers, engineers, and architects, when they could be doing the heavy lifting. Sure, only a few AI processes rely on the cloud now, but as capabilities expand, the escalating cost of those GPU hours will inevitably fall on users, either through pay-per-use charges or hidden within new subscription models. At a time when software license costs are already on the rise, adding extra fees to cover AWS or Microsoft Azure expenses would be a bitter pill for customers to swallow.

Cover story The integrated GPU comes of age

With the launch of the AMD Ryzen AI Max Pro ‘Strix Halo’ processor, AMD has changed the game for integrated GPUs, delivering graphics performance that should rival that of a mid-range discrete GPU. Greg Corke explores the story behind this brand-new chip and what it might mean for CAD, BIM, viz and more

For years, processors with integrated GPUs (iGPUs) — graphics processing units built into the same silicon as the CPU — have not been considered a serious option for 3D CAD, BIM, and especially visualisation — at least by this publication.

Such processors, predominantly manufactured by Intel, have generally offered just enough graphics performance to enable users to manipulate small 3D models smoothly within the viewport. However, until recently, Intel had not demonstrated anywhere near the same level of commitment to pro graphics driver optimisation and software certification as the established players – Nvidia and AMD.

This gap has limited the appeal of all-in-one processors for demanding professional workflows, leaving the combination of a discrete pro GPU (e.g. Nvidia Quadro / RTX and AMD Radeon Pro) and separate CPU (Intel Core) as the preferred choice of most architects, engineers and designers.

A seed for progress

Things started to change in 2023, when AMD introduced the ‘Zen 4’ AMD Ryzen Pro 7000 Series, a family of laptop processors with integrated Radeon GPUs capable of going toe to toe with entry-level discrete GPUs in 3D performance.

What’s more, AMD backed this up with the same pro graphics drivers that it uses for its discrete AMD Radeon Pro GPUs.

The chip family was introduced to the workstation sector by HP and Lenovo in compact, entry-level mobile workstations. In a market long dominated by Intel processors, securing two out of three major workstation OEMs was a major coup for AMD.

In 2024, both OEMs then adopted the slightly improved AMD Ryzen Pro 8000 Series processor and launched new 14-inch mobile workstations – the HP ZBook Firefly G11 A and Lenovo ThinkPad P14s Gen 5 – which we review on pages WS8 and WS9.

Both laptops are an excellent choice for 3D CAD and BIM workflows and, having tested them extensively, it’s fair to say we’ve been blown away by the capabilities of the AMD technology.

The flagship AMD Ryzen 9 Pro 8945HS processor with integrated AMD Radeon 780M GPU boasts graphics performance that genuinely rivals that of an entry-level discrete GPU. For instance, in Solidworks 3D CAD software, it smoothly handles a complex 2,000-component motorcycle assembly in “shaded with edges” mode.

However, the AMD Ryzen Pro 8000 Series processor is not just about 3D performance. What truly makes the chip stand out is the ability of the iGPU to access significantly more memory than a typical entry-level discrete GPU. Thanks to AMD’s shared memory architecture — refined over years of developing integrated processors for Xbox and PlayStation gaming consoles — the GPU has direct and fast access to a large, unified pool of system memory.

Up to 16 GB of the processor’s maximum 64 GB can be reserved for the GPU in the BIOS. If memory is tight and you’d rather not allocate as much to the GPU, smaller profiles from 512 MB to 8 GB can be selected. Remarkably, if the GPU runs out of its ringfenced memory, it seamlessly borrows additional system memory if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains fast, and real-time performance in 3D CAD and BIM tools typically only drops by a few frames per second, maintaining that all-important smooth experience within the viewport.
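The reserve-then-borrow behaviour described above can be captured in a toy model. This is purely illustrative — a simplified sketch of the idea, not AMD’s actual memory allocator, and the numbers are hypothetical:

```python
def igpu_allocate(request_gb, reserved_gb, free_system_gb):
    """Toy model of the iGPU memory behaviour described above.

    The GPU first draws on its BIOS-reserved pool; if a request exceeds
    it, the shortfall is borrowed from free system memory, when available.
    Illustration only, not AMD's real allocator.
    """
    if request_gb <= reserved_gb:
        return {"granted": True, "borrowed_gb": 0}
    shortfall = request_gb - reserved_gb
    if shortfall <= free_system_gb:
        return {"granted": True, "borrowed_gb": shortfall}
    return {"granted": False, "borrowed_gb": 0}

# 16 GB reserved in the BIOS; a 20 GB request borrows 4 GB from system RAM
print(igpu_allocate(20, 16, 40))  # {'granted': True, 'borrowed_gb': 4}
```

Because the borrowed portion lives in the same physical memory as the reserved pool, the model’s “borrowed” path carries only a small performance penalty — which is the crucial difference from a discrete GPU spilling over PCIe.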

In contrast, when a discrete GPU runs out of memory, it can have a big impact on 3D performance. Frame rates can fall dramatically, often making it very hard to re-position a 3D model in the viewport. While a discrete GPU can also ‘borrow’ from system memory, it must access it over the PCIe bus, which is much slower.
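The scale of that difference can be sketched with back-of-envelope numbers. The figures below are illustrative assumptions (a 256-bit LPDDR5X interface, a PCIe 4.0 x16 link), not measured values for any specific product:

```python
# Rough bandwidth comparison: local unified memory vs borrowing over PCIe.
# Assumed: 8,000 MT/s LPDDR5X on a 256-bit bus, and a PCIe 4.0 x16 link.

lpddr5x_mt_s = 8000            # mega-transfers per second
bus_width_bytes = 256 // 8     # assumed 256-bit memory interface
unified_gb_s = lpddr5x_mt_s * 1e6 * bus_width_bytes / 1e9
print(f"Unified memory: ~{unified_gb_s:.0f} GB/s")   # ~256 GB/s

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding, 16 lanes
pcie4_lane_gb_s = 16e9 * (128 / 130) / 8 / 1e9
pcie_gb_s = pcie4_lane_gb_s * 16
print(f"PCIe 4.0 x16: ~{pcie_gb_s:.0f} GB/s")        # ~32 GB/s
```

Under these assumptions, borrowed memory reached over PCIe offers roughly an eighth of the bandwidth of the unified pool — which is why frame rates collapse when a discrete GPU spills over.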

All of this means the AMD Ryzen Pro 8000 Series processor can handle certain workflows that simply aren’t possible with an entry-level discrete GPU, especially one with only 4 GB of onboard VRAM.

To put this into a real-world workflow context: with our HP ZBook Firefly G11 A configured with 64 GB of system RAM, Solidworks Visualize was able to grab the 20 GB of GPU memory it needed to render a complex scene at 8K resolution. What’s even more impressive is that while Solidworks Visualize rendered in the background, we could continue working on the 3D design in Solidworks CAD without disruption.

While the amount of addressable memory makes workflows like these possible, the AMD Radeon 780M GPU does not really have enough graphics horsepower to deliver sufficient frame rates in real-time viz software such as Twinmotion, Enscape, and D5 Render.

For that you need a more powerful GPU, which is exactly what AMD has delivered in its new AMD Ryzen AI Max Pro ‘Strix Halo’ processor, which it announced this month.

AMD Ryzen AI Max Pro

The AMD Ryzen AI Max Pro will be available first in HP Z workstations, but unlike the AMD Ryzen Pro 8000 Series processor it’s not just restricted to laptops. In addition to the HP ZBook Ultra G1a mobile, HP has launched a micro desktop, the HP Z2 Mini G1a (see box out on page WS6). Although we haven’t had the chance to test these exciting new chips first hand, our experience with the AMD Ryzen Pro 8000 Series processor and the published specifications of the AMD Ryzen AI Max Pro series give us a very good idea of what to expect.

In the top tier model, the AMD Ryzen AI Max+ Pro 395, the integrated Radeon 8060S GPU is significantly more powerful than the Radeon 780M GPU in the Ryzen 9 Pro 8945HS processor.

It features 40 RDNA 3.5 graphics compute units — more than three times the 12 RDNA 3.0 compute units on offer in the 780M. This should make it capable of handling some relatively demanding workflows for real time visualisation.

But raw graphics performance only tells part of the story. The new Ryzen AI Max Pro platform can support up to 128 GB of 8,000 MT/s LPDDR5X memory, and up to 96 GB of this can be allocated exclusively to the GPU. Typically, such vast quantities of GPU memory are only available in extremely powerful and expensive cloud-based GPUs. It’s equivalent to the VRAM of two high-end desktop-class workstation GPUs, such as the Nvidia RTX 6000 Ada Generation.

Reports suggest the Ryzen AI Max Pro will rival the graphics performance of an Nvidia RTX 4070 laptop GPU, the consumer equivalent of the Nvidia RTX 3000 Ada Gen workstation laptop GPU.

However, while the Nvidia GPU comes with 8 GB of fixed VRAM, the Radeon 8060S GPU can scale much higher. And this could give AMD an advantage when working with very large models, particularly in real time viewports, or when multitasking.

Of course, while the GPU can access what is, quite frankly, an astonishing amount of memory, there will still be practical limits to the size of visualisation models it can handle. While you could, with patience, render massive scenes in the background, don’t expect seamless navigation of these models in the viewport, particularly at high resolutions. For that level of 3D performance, a high-end dedicated GPU will almost certainly still be necessary.

The competitive barriers

The AMD Ryzen AI Max Pro looks to bring impressive new capabilities, but it doesn’t come without its challenges. In general, AMD GPUs lag behind Nvidia’s when ray tracing, a rendering technique which is becoming increasingly popular in real time arch viz tools.

Additionally, some AEC-focused independent software vendors (ISVs) depend on Nvidia GPUs to accelerate specific features. In reality modelling software Leica Cyclone 3DR, for example, AI classification is built around the Nvidia CUDA platform (see page WS34).

The good news is AMD is actively collaborating with ISVs to broaden support for AMD GPUs, porting code from Nvidia CUDA to AMD’s HIP framework, and some have already announced support. For example, CAD-focused rendering software KeyShot Studio now works with AMD Radeon for GPU rendering, as Henrik Wann Jensen, chief scientist at KeyShot, explains: “We are particularly excited about the substantial frame buffer available on the Ryzen AI Max Pro.” Meanwhile, Altair, a specialist in simulation-driven design, has also announced support for AMD Radeon GPUs in Altair Inspire, including the AMD Ryzen AI Max Pro.

‘‘ AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. Could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs? ’’

Artificial Intelligence (AI)

These days, no new processor is complete without an AI story, and the AMD Ryzen AI Max Pro is no exception.

First off, the processor features an XDNA2-powered Neural Processing Unit (NPU), capable of dishing out 50 TOPS of AI performance, meeting Microsoft’s requirements for a Copilot+ PC. This capability is particularly valuable for laptops, where it can accelerate simple AI tasks such as AutoFrame, Background Blur, and virtual backgrounds for video conferencing more efficiently than a GPU, helping to extend battery life.

While 50 TOPS NPUs are not uncommon, it’s the amount of memory that the NPU and GPU can address that makes the AMD Ryzen AI Max Pro particularly interesting for AI.

AMD isn’t just playing catchup with Nvidia; it’s also paving the way for innovations in software development. According to Rob Jamieson, senior industry alliance manager at AMD, traditional GPU computation often requires duplicating data — one copy in system memory and another in GPU memory — that must stay in sync. AMD’s shared memory architecture changes the game by enabling a ‘zero copy’ approach, where the CPU and GPU can read from and write to a single data source. This approach not only has the potential to boost performance by not having to continually copy data back and forth, but also to reduce the overall memory footprint, he says.
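The ‘zero copy’ idea Jamieson describes can be illustrated in a few lines of code. This is a purely conceptual sketch — a plain byte buffer standing in for unified memory — not AMD’s actual programming model or API:

```python
# Conceptual sketch of 'copy' vs 'zero copy' data sharing.
# A bytearray stands in for unified memory visible to both processors.

data = bytearray(8)       # one buffer, "visible" to both CPU and GPU

# Traditional approach: the "GPU" works on a duplicate that must be
# kept in sync with the original, doubling the memory footprint
gpu_copy = bytes(data)

# Zero-copy approach: both sides share a view of the same memory
shared = memoryview(data)

data[0] = 42              # the "CPU" writes...
print(shared[0])          # ...the "GPU" view sees it immediately: 42
print(gpu_copy[0])        # the duplicate is stale: still 0
```

The duplicate going stale is exactly the synchronisation problem the shared-memory approach avoids: with a single data source there is nothing to copy and nothing to fall out of sync.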

HP Z2 Mini G1a desktop workstation

HP is billing the HP Z2 Mini G1a with AMD Ryzen AI Max Pro processor as the world’s most powerful mini workstation, claiming that it can tackle the same workflows that previously required a much larger desktop workstation. On paper, much of this claim appears to be down to the amount of memory the GPU can address, as HP’s Intel-based equivalent, the HP Z2 Mini G9, is limited to low profile GPUs, up to the 20 GB Nvidia RTX 4000 SFF Ada.

The HP Z2 Mini G1a also supports slightly more system memory than the Intel-based HP Z2 Mini G9 (128 GB vs 96 GB), although some of that memory will need to be allocated to the GPU. System memory in the HP Z2 Mini G1a is also significantly faster (8,000 MT/s vs 5,600 MT/s), which will benefit certain memory intensive workflows in areas including simulation and reality modelling.

While the HP Z2 Mini G9 can support CPUs with a similar number of cores — up to the Intel Core i9-13900K (8 P-cores and 16 E-cores) — our past tests have shown that multi-core frequencies drop considerably under heavy sustained loads. It will be interesting to see if the energy-efficient AMD Ryzen AI Max Pro processor can maintain higher clock speeds across its 16 cores.

Perhaps the most compelling use case of the HP Z2 Mini G1a will be when multiple units are deployed in a rack, as a centralised remote workstation resource.

With the HP Z2 Mini G9, both the power supply and the HP Anyware Remote System Controller, which provides remote ‘lights out’ management capabilities, were external. With the new HP Z2 Mini G1a, the PSU is now fully integrated in the slightly smaller chassis, which should help increase density and airflow. Five HP Z2 Mini G1a workstations can be placed side by side in a 4U space.

According to AMD, having access to large amounts of memory allows the processor to handle ‘incredibly large, high-precision AI workloads’, referencing the ability to run a 70-billion parameter large language model (LLM) 2.2 times faster than a 24 GB Nvidia GeForce RTX 4090 GPU.

While edge cases like these show great promise, software compatibility will be a key factor in determining the success of the chip for AI workflows. One can’t deny that Nvidia currently holds a commanding lead in AI software development.

On a more practical level for architects and designers, the chip’s ability to handle large amounts of memory could offer an interesting proposition for AI-driven tools like Stable Diffusion, a text-to-image generator that can be used for ideation at the early stages of design (see page WS30).
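AMD’s claim of running a 70-billion parameter LLM on this chip is, at heart, memory arithmetic. The sketch below is a simplified illustration — it counts only the model weights, ignoring activations, KV cache and runtime overhead:

```python
# Approximate memory needed just to hold an LLM's weights at a given
# numeric precision. Simplified: excludes activations and runtime overhead.

def model_size_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    size = model_size_gb(70, bits)
    fits = size <= 96   # GB the Ryzen AI Max Pro can dedicate to the GPU
    print(f"70B @ {bits}-bit: {size:.0f} GB (fits in 96 GB: {fits})")
```

At 16-bit precision, 70 billion weights alone need around 140 GB, but a quantised 8-bit or 4-bit version (roughly 70 GB or 35 GB) fits comfortably within a 96 GB GPU allocation — whereas a 24 GB card must constantly spill to slower system memory.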


Beyond the GPU

While it’s natural to be drawn to the GPU — being far more powerful than any iGPU that has come before — the AMD Ryzen AI Max Pro doesn’t exactly hold back when it comes to the CPU. Compared to the AMD Ryzen Pro 8000 Series processor, the core count is doubled, boasting up to 16 ‘Zen 5’ cores. This means it should significantly outperform the eight ‘Zen 4’ cores of its predecessor in multi-threaded workflows like rendering.

On top of that, the AMD Ryzen AI Max Pro platform supports much faster memory — 8,000MT/s LPDDR5X compared to DDR5-5600 on the AMD Ryzen Pro 8000 Series — so memory-intensive workflows like simulation and reality modelling should get an additional boost.

Laptop, desktop and datacentre

One of the most interesting aspects of the AMD Ryzen AI Max Pro is that it is being deployed in laptops and micro desktops. It also extends to the datacentre, as the HP Z2 Mini G1a desktop is designed from the ground up to be rackable.

While the HP Z2 Mini G1a and HP ZBook Ultra G1a use the exact same silicon, which features a configurable Thermal Design Power (cTDP) of 45W – 120W, performance could vary significantly between the two devices. This is down to the amount of power that each workstation can draw.

The power supply in the HP Z2 Mini G1a desktop is rated at 300W—more than twice the 140W of the HP ZBook Ultra G1a laptop. While users shouldn’t notice any difference in single threaded or lightly threaded workflows like CAD or BIM, we expect performance in multi-threaded tasks, and possibly graphics-intensive tasks, to be superior on the desktop unit.

However, that still doesn’t mean the HP Z2 Mini G1a will get the absolute best out of the processor. It remains to be seen what clock speeds the AMD Ryzen AI Max Pro processor will be able to maintain across its 16 cores, especially in highly multi-threaded workflows like rendering.

Conclusion

The AMD Ryzen AI Max Pro processor has the potential to make a significant impact in the workstation sector. On the desktop, AMD has already disrupted the high-end workstation space with its Threadripper Pro processors, severely impacting sales of Intel Xeon. Now, the company aims to bring this success to mobile and micro desktop workstations, with the promise of significantly improved graphics with buckets of addressable memory.

AMD is pushing the message that users no longer need to rely on a separate CPU and GPU. However, overcoming the long-standing perception that iGPUs are not great for 3D modelling is no small challenge, leaving AMD with significant work to do in educating the market. If AMD succeeds, could this mark the beginning of a decline in entry-level to mid-range professional discrete GPUs?

Much will also depend on cost. Neither AMD nor HP has announced pricing yet, but it stands to reason that a single chip solution should be more cost-effective than having two separate components.

Meanwhile, while the new chip promises impressive performance in all the right areas, that’s only one part of the equation. In the workstation sector, AMD’s greater challenge arguably lies in software. To compete effectively, the company needs to collaborate more closely with select ISVs to enhance compatibility and reduce reliance on Nvidia CUDA. Additionally, optimising its graphics drivers for better performance in certain professional 3D applications remains a critical area for improvement.

HP ZBook Ultra G1a mobile workstation

HP is touting the HP ZBook Ultra G1a with AMD Ryzen AI Max Pro processor as the world’s most powerful 14-inch mobile workstation. It offers noteworthy upgrades over other 14-inch models, including double the number of CPU cores, double the system memory, and substantially improved graphics.

When compared to the considerably larger and heavier 16-inch HP ZBook Power G11 A — equipped with an AMD Ryzen 9 8945HS processor and Nvidia RTX 3000 Ada laptop GPU — HP claims the HP ZBook Ultra G1a with an AMD Ryzen AI Max Pro 395 processor and Radeon 8060S GPU delivers significant performance gains. These include 114% faster CPU rendering in Solidworks and 26% faster graphics performance in Autodesk 3ds Max.

The HP ZBook Ultra G1a isn’t just about performance. HP claims it’s the thinnest ZBook ever, just 18.5 mm thick and weighing as little as 1.50 kg. The HP Vaporforce thermal system incorporates a vapour chamber with large dual turbo fans, expanded rear ventilation, and a newly designed hinge that improves airflow. According to HP, this design boosts performance while keeping surface temperatures cooler and fan noise quieter.

HP is expecting up to 14 hours of battery life from the HP XL Long Life 4-cell, 74.5 Wh polymer battery. The device is paired with either a 100 W or 140 W USB Type-C slim adapter for charging. For video conferencing, the laptop features a 5 MP IR camera with Poly Camera Pro software. Advanced features like AutoFrame, Spotlight, Background Blur, and virtual backgrounds are all powered by the 50 TOPS NPU, optimising power efficiency.

Additional highlights include a range of display options, with the top-tier configuration offering a 2,880 x 1,800 OLED panel (400 nits brightness, 100% DCI-P3 colour gamut), HP Onlooker detection that automatically blurs the screen if it detects that someone is peeking over your shoulder, up to 4 TB of NVMe TLC SSD storage, and support for Wi-Fi 7.

The competition

AMD is not the only company developing processors with integrated GPUs. Intel has made big strides in recent years, and the knowledge it has gained in graphics hardware and pro graphics drivers from its discrete Intel Arc Pro GPUs is now starting to trickle through to its Intel Core Ultra laptop processors. Elsewhere, Qualcomm’s Snapdragon chips, with Arm-based CPU cores, have earned praise for their enviable blend of performance and power efficiency. However, there is no indication that any of the major OEMs are considering this chip for workstations, and while x86 Windows apps are able to run on Arm-based Windows, ISVs would need to make their apps Arm-native to get the best performance.

Nvidia is also rumoured to be developing an Arm-based PC chip, but would face similar challenges to Qualcomm on the software front.

Furthermore, while the Ryzen AI Max Pro is expected to deliver impressive 3D performance in CAD, BIM, and mainstream real-time viz workflows, its ray tracing capabilities may not be as remarkable. And for architecture and product design, ray tracing is arguably more important than it is for games.

Ultimately, the success of the AMD Ryzen AI Max Pro will depend on securing support from the other major workstation OEMs. So far, there’s been no official word from Lenovo or Dell, though Lenovo continues to offer the AMD Ryzen Pro 8000-based ThinkPad P14s Gen 5 (AMD), which is perfect for CAD, and Dell has announced plans to launch AMD-based mobile workstations later this year. AMD seems prepared to play the long game, much like it did with Threadripper Pro, laying the groundwork for future generations of processors with even more powerful integrated GPUs. We look forward to putting the AMD Ryzen AI Max Pro through its paces soon.

Review: HP ZBook Firefly 14 G11 A

This pro laptop is a great all-rounder for CAD and BIM, offering an enviable blend of power and portability in a solid, well-built 14-inch chassis, writes Greg Corke

A few years back, HP decided to simplify its ZBook mobile workstation lineup. With so many different models, and inconsistent product names, it was hard to work out what was what.

HP’s response was to streamline its offerings into four primary product lines: the HP ZBook Firefly (entry-level), ZBook Power (mid-range), ZBook Studio (slimline mid-range), and ZBook Fury (high-end). HP has just added a fifth—the ZBook Ultra—powered by the new AMD Ryzen AI Max Pro processor.

The ZBook Firefly is the starter option, intended for 2D and light 3D workflows, with stripped back specs. Available in both 14-inch and 16-inch variants, customers can choose between Intel or AMD processors. While the Intel Core Ultra-based ZBook Firefly G11 is typically paired with an Nvidia RTX A500 Laptop GPU, the ZBook Firefly G11 A — featured in this review — comes with an AMD Ryzen 8000 Series ‘Zen 4’ processor with integrated Radeon graphics.

Weighing just 1.41 kg, and with a slim aluminium chassis, the 14-inch ZBook Firefly G11 A is perfect for CAD and BIM on the go. But don’t be fooled by its sleek design — this pro laptop is built to perform.

Product spec

■ AMD Ryzen 9 Pro 8945HS processor (4.0 GHz base, 5.2 GHz max boost) (8-cores) with integrated AMD Radeon 780M GPU

■ 64 GB (2 x 32 GB) DDR5-5600 memory

■ 1 TB, PCIe 4.0 M.2 TLC SSD

■ 14-inch WQXGA (2,560 x 1,600), 120 Hz, IPS, anti-glare, 500 nits, 100% DCI-P3, HP DreamColor display

■ 316 x 224 x 19.9 mm (w/d/h)

■ From 1.41 kg

■ Microsoft Windows 11 Pro

■ 1 year (1/1/0) limited warranty includes 1 year of parts and labour. No on-site repair.

■ £1,359 (Ex VAT) CODE: 8T0X5EA#ABU

■ www.hp.com/z

Powered by the flagship AMD Ryzen 9 Pro 8945HS processor, our review unit handled CAD and BIM workflows like a champ, even when working with some relatively large 3D models. The integrated AMD Radeon 780M graphics delivered a smooth viewport in Revit and Solidworks, except with our largest assemblies, but showed its limitations in real-time viz. In Twinmotion, with the mid-sized Snowden Tower Sample project, we recorded a mere 8 FPS at 2,560 x 1,600 resolution. While you wouldn’t ideally want to work like this day in, day out, it’s passable if you just want to set up some scenes to render, which it does pretty quickly thanks to its scalable GPU memory (see box out below).

On the CPU side, the frequency in single threaded workflows peaked at 4.84 GHz. In our Revit and Solidworks benchmarks, performance was only between 25% and 53% slower than the current fastest desktop processor, the AMD Ryzen 9 9950X, with the newer ‘Zen 5’ cores. Things were equally impressive in multi-threaded workflows. When rendering in V-Ray, for example, it delivered 4.1 GHz across its 8 cores, 0.1 GHz above the processor’s base frequency. Amazingly, it maintained this for hours, with minimal fan noise. With a compact 65W USB-C power supply, the laptop is relatively low-power.

The HP DreamColor WQXGA (2,560 x 1,600) 16:10 120Hz IPS display with 500 nits of brightness is a solid option. It delivers super-sharp detail for precise CAD work and good colours for visualisation. There are several alternatives, including a WUXGA (1,920 x 1,200) anti-glare IPS panel with 100% sRGB coverage and a remarkable 1,000 nits, but no OLED options, as you’ll find in other HP ZBooks and the Lenovo ThinkPad P14s (AMD).

Under the hood, the laptop came with a 1 TB NVMe SSD and 64 GB of DDR5-5600 memory, the maximum capacity of the machine. This is possibly a tiny bit high for mainstream CAD and BIM workflows, but bear in mind some of it needs to be allocated to graphics. Other features include fast Wi-Fi 6E, and an optional 5MP camera with privacy shutter and HP Auto Frame technology that helps keep you in focus during video calls.

There’s much to like about the HP ZBook Firefly G11 A. It’s very cost-effective, especially as it’s currently on offer at £1,359 with a 1-year warranty, but there’s nothing cheap about this excellent mobile workstation. It’s extremely well-built, quiet in operation and offers an enviable blend of power and portability. All of this makes it a top pick for users of CAD and BIM software, with a sprinkling of viz on top.

What does the AMD Radeon 780M GPU offer for 3D design?

Integrated graphics no longer means designers must compromise on performance. As detailed in our cover story, “The integrated GPU comes of age” (see page WS4), the AMD Ryzen 8000 Series processor impresses. It gives the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 mobile workstations enough graphics horsepower for entry-level CAD and BIM workflows, while also allowing designers, engineers and architects to dip their toes into visualisation. Take a complex motorcycle assembly in Solidworks CAD software, for example — over 2,000 components, modelled at an engineering level of detail. With the AMD Ryzen 9 Pro 8945HS processor with AMD Radeon 780M integrated graphics, our CAD viewport was perfectly smooth in shaded with edges display mode, hitting 31 frames per second (FPS) at FHD resolution and 27 FPS at 4K. Enabling RealView, for realistic materials, shadows, and lighting, dialled back the real-time performance a little, with frame rates dropping to 14–16 FPS. Even though that’s below the golden 24 FPS, it was still manageable, and repositioning the model felt accurate, with no frustrating overshooting.

The processor’s trump card is the ability of the built in GPU to address lots of memory. Unlike comparative discrete GPUs, which are fixed with 4 GB or 8 GB, the integrated AMD Radeon GPU can be assigned a lot more, taking a portion of system memory. In the BIOS of the HP ZBook Firefly 14 G11 A, one can choose between 512 MB, 8 GB or 16 GB, so long as the laptop has system memory to spare, taken

Review: Lenovo ThinkPad P14s (AMD)

This 14-inch mobile workstation stands out for its exceptional serviceability featuring several customer-replaceable components, writes Greg Corke

The ThinkPad P14s Gen 5 (AMD) is the thinnest and lightest mobile workstation from Lenovo — 17.71mm thick and starting at 1.31kg. It’s a true 14-incher, smaller than the ThinkPad P14s Gen 5 (Intel), which has a slightly larger 14.5-inch display.

The chassis is quintessential ThinkPad — highly durable, with sturdy hinges and an understated off-black matte finish. The keyboard feels solid, complemented by a multi-touch TrackPad with a pleasingly smooth Mylar surface. True to tradition, it also comes with the ThinkPad-standard TrackPoint with its three-button

from its maximum of 64 GB. 8 GB is sufficient for most CAD workflows, but the 16 GB profile can benefit design visualisation as it allows users to render more complex scenes at higher resolutions than typical entry-level discrete GPUs. This was demonstrated perfectly in arch viz software Twinmotion from Epic Games. With the mid-sized Snowden Tower Sample project, the AMD Radeon 780M integrated graphics in our HP ZBook Firefly G11 A took 437 secs to render out six 4K images, using up to 21 GB of GPU memory in the

setup. We’ve yet to meet anyone who actually uses this legacy pointing device, but removing it would likely spark outrage among die-hard fans. Meanwhile, the fingerprint reader is seamlessly integrated into the power button for added convenience.

The laptop stands out for its impressive serviceability, allowing the entire device to be disassembled and reassembled using basic tools — just a Phillips head screwdriver is needed to remove the back panel.

Product spec

■ AMD Ryzen 5 Pro 8640HS processor (3.5 GHz base, 5.0 GHz max boost) (6 cores) with integrated AMD Radeon 760M GPU

■ 32 GB (2 x 16 GB) DDR5-5600 memory

■ 512 GB, PCIe 4.0 M.2 SSD

■ 14-inch WUXGA (1,920 x 1,200) IPS display with 400 nits

■ 316 x 224 x 17.7 mm (w/d/h)

■ From 1.31 kg

■ Microsoft Windows 11 Pro

less powerful integrated GPU compared to the flagship 45W AMD Ryzen 9 Pro 8945HS.

The machine performed well in Solidworks (CAD) and Revit (BIM), but unsurprisingly came in second to the HP ZBook Firefly in all our benchmarks. The margins were small, but became more noticeable in multi-threaded workflows, especially rendering. On the plus side, the P14s was slightly quieter under full load.

■ 3 Year Premier Support

■ £1,209 (Ex VAT)

■ www.lenovo.com

Our review unit’s 14-inch WUXGA (1,920 x 1,200) IPS display is a solid, if not standout, option, offering 400 nits of brightness. One alternative is a colour-calibrated 2.8K (2,880 x 1,800) OLED screen — also 400 nits, but with 100% DCI-P3 and 120Hz refresh.

It offers a range of customer-replaceable components, including the battery (39.3Wh or 52.5Wh options), M.2 SSD, and memory DIMMs, which thankfully aren’t soldered onto the motherboard. Beyond that, you can swap out the keyboard, trackpad, speakers, display, webcam, fan/heatsink assembly, and more.

The keyboard deserves a special mention for its top-loading design, eliminating the need to dismantle the laptop from below. Simply remove two clearly labelled screws from the bottom panel, and the keyboard pops off from the top.

The 5.0 MP webcam with IR and privacy shutter is housed in a slight protrusion at the top of the display. While this design was necessary to accommodate the higher-resolution camera (an upgrade from the Gen 4), it also doubles as a convenient handle when opening the lid.

There’s a choice of two AMD Ryzen 8000 Series processors: the Ryzen 5 Pro 8640HS (6 cores) and the Ryzen 7 Pro 8840HS (8 cores). Both have a Thermal Design Power (TDP) of 28W. Lenovo has chosen not to support the more powerful 45W models, likely due to thermal and power considerations. 45W models are available in the HP ZBook Firefly G11 A. Our review unit came with the entry-level Ryzen 5 Pro 8640HS. While capable, it has slightly lower clock speeds, two fewer cores, and a

Additional highlights include up to 96 GB of DDR5-5600 memory, Wi-Fi 6E, a hinged ‘drop jaw’ Gigabit Ethernet port, 2 x USB-A and 2 x USB-C. It comes with a compact 65 W USB-C power supply.

Overall, the ThinkPad P14s Gen 5 stands out as a reliable performer for CAD and BIM, offering an impressive blend of serviceability and thoughtful design.

process (16 GB of dedicated and 5 GB of shared). In contrast, discrete desktop GPUs with only 8 GB of memory took significantly longer. It seems the Nvidia RTX A1000 (799 secs) and AMD Radeon W7600 (688 secs) both pay a big penalty when they run out of their fixed on-board supply and have to borrow more from system memory over the PCIe bus, which is much slower. Of course, all eyes are on AMD’s new Ryzen AI Max Pro processor. It features significantly improved graphics, and a choice of 6, 8, 12 or 16 ‘Zen 5’ CPU cores — up to twice

as many as the 8 ‘Zen 4’ cores in the AMD Ryzen 8000 Series. However, AMD’s new silicon star in waiting won’t be available until Spring 2025, which is when HP plans to ship the ZBook Ultra G1a mobile workstation. Pricing also remains under wraps.

As we wait to see how AMD’s new chips sit in the market, the HP ZBook Firefly 14 G11 A and Lenovo ThinkPad P14s Gen 5 continue to shine as excellent options for a variety of CAD and BIM workflows — offering impressive performance at very appealing price points.

In an era where manufacturers often prioritise ‘thinner and lighter’ over repairability, it’s great to see Lenovo bucking this trend, a move that is sure to resonate with right-to-repair advocates.

AMD Ryzen 9000 vs Intel Core Ultra 200S

for CAD, BIM, rendering, simulation, and reality modelling

AMD is dominating the high-end workstation market with Threadripper Pro. But how does it fare in the mainstream segment, a traditional stronghold for Intel? Greg Corke pits the AMD Ryzen 9000 Series against the Intel Core Ultra 200S to find out

After years of playing second fiddle, AMD is now giving Intel a serious run for its money. In high-end workstations, AMD Ryzen Threadripper Pro dominates Intel Xeon in most real-world benchmarks. The immensely powerful multi-core processor now plays a starring role in the portfolios of all the major workstation OEMs.

But what about the mainstream workstation market? Here, Intel has managed to maintain its dominance with Intel Core. Despite facing stiff competition from the past few generations of AMD Ryzen processors, none of HP, Dell or Lenovo has backed AMD’s volume desktop chip with any real conviction.

That’s not the case with specialist workstation manufacturers, however. For some time now, AMD Ryzen has featured strongly in the portfolios of Boxx, Scan, Armari, Puget Systems and others.

But the silicon sector moves fast. Intel and AMD recently launched new mainstream processors — the AMD Ryzen 9000 Series and Intel Core Ultra 200S Series. Both chip families are widely available from specialist workstation manufacturers, which are much more agile when it comes to introducing new tech. We’ve yet to see any AMD Ryzen 9000 or Intel Core Ultra 200S Series

workstations from the major OEMs. However, that’s to be expected as their preferred enterprise-focused variants — AMD Ryzen Pro and Intel Core vPro — have not launched yet.

AMD Ryzen 9000 Series “Zen 5”

The AMD Ryzen 9000 Series desktop processors, built on AMD’s ‘Zen 5’ architecture, launched in the second half of 2024 with 6 to 16 cores. AMD continues to use a chiplet-based design, where one or more CCDs (Core Complex Dies) are combined to form a single, larger processor. The 6 and 8-core models are made from a single CCD, while the 12 and 16-core models comprise two CCDs.

The new Ryzen processors continue to support simultaneous multi-threading (SMT), AMD’s equivalent to Intel Hyper-Threading, which enables a single physical core to execute multiple threads simultaneously. This can help boost performance in certain multi-threaded workflows, such as ray trace rendering, but it can also slow things down. DDR5 memory is standard, up to a maximum of 192 GB. However, the effective data rate (speed) of the memory, expressed in megatransfers per second (MT/s), can vary dramatically depending on the amount of memory installed in your workstation. For example, you can

currently get up to 96 GB at 5,600 MT/s, but if you configure the workstation with 128 GB, the speed will drop to 3,600 MT/s. Some motherboards can support even faster 8,000 MT/s memory, though this is currently limited to 48 GB.
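The practical cost of those lower data rates is easiest to see as theoretical peak bandwidth. The sketch below is a rough back-of-envelope calculation, assuming a standard dual-channel configuration moving 8 bytes per channel per transfer; the figures it prints are theoretical ceilings, not results from our benchmarks.

```python
# Back-of-envelope peak memory bandwidth from a DDR5 data rate.
# Assumes dual-channel operation with a 64-bit (8-byte) bus per channel.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Transfers/s x bytes per transfer x channels, expressed in GB/s."""
    return mt_per_s * bus_bytes * channels / 1000.0

# The capacity/speed trade-offs described above:
for capacity_gb, speed in [(96, 5600), (128, 3600), (48, 8000)]:
    print(f"{capacity_gb} GB @ {speed} MT/s -> {peak_bandwidth_gbs(speed):.1f} GB/s peak")
```

Dropping from 5,600 MT/s to 3,600 MT/s cuts the theoretical ceiling from roughly 89.6 GB/s to 57.6 GB/s, which helps explain why memory-hungry workflows suffer so markedly at 128 GB.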

All Ryzen 9000 Series processors come with integrated GPUs, but their performance is limited, making an add-in GPU essential for professional 3D work. They do not include an integrated neural processing unit (NPU) for AI tasks.

The Ryzen 9000 Series features two classes of processors: the standard Ryzen models, denoted by an X suffix, and the Ryzen X3D variants, which feature AMD 3D V-Cache technology.

There are four standard Ryzen 9000 Series models. The top-end AMD Ryzen 9 9950X has 16 cores, 32 threads, and a max boost frequency of 5.7 GHz.

The other processors have slightly lower clock speeds and fewer cores but are considerably cheaper. The AMD Ryzen 5 9600X, for example, has six cores and boosts to 5.4 GHz, but is less than half the price of the Ryzen 9 9950X. The full line up can be seen in the table right.

The Ryzen X3D lineup features significantly larger L3 caches than standard Ryzen processors. This increased cache size gives the CPU fast access to more data, instead of having to

fetch the data from slower system memory (RAM). The flagship 16-core AMD Ryzen 9 9950X3D features 128 MB of cache, but the 3D V-Cache is limited to one of its two CCDs.

All the new ‘Zen 5’ Ryzen 9000 chips are more power efficient than the previous generation ‘Zen 4’ Ryzen 7000 Series. This has allowed AMD to reduce the Thermal Design Power (TDP) on a few of the standard Ryzen models. The top-end 16-core processors — the Ryzen 9 9950X and Ryzen 9 9950X3D — both have a TDP of 170W and a peak power of 230W. All the others are rated at 65W or 120W.

Intel Core Ultra 200S “Arrow Lake”

Intel Core Ultra marks a departure from Intel’s traditional generational numbering system (e.g., 14th Gen).

But the Intel Core Ultra 200S (codenamed Arrow Lake) is not just an exercise in branding. It marks a major change in the design of its desktop processors, moving to a tiled architecture (Intel’s term for chiplets).

Like 14th Gen Intel Core, the Intel Core Ultra 200S features two different types of cores: Performance-cores (P-cores) for primary tasks and slower Efficient-cores (E-cores) for background processing.

In a bold move, Intel has dropped Hyper-Threading from the design, a feature that was previously supported on the P-cores in 14th Gen Intel Core.

As with AMD, DDR5 memory is standard, with a maximum capacity of 192 GB. However, the data rate doesn’t vary as much depending on the amount installed. For instance, with 64 GB, the speed reaches 5,600 MT/s, while with 128 GB, it only drops slightly to 4,800 MT/s.

The integrated GPU has been improved, but most 3D workflows will still require an add-in GPU. For AI tasks, there’s an integrated NPU, but at 13 TOPS it’s not powerful enough to meet Microsoft’s requirements for Windows Copilot+.

The processor family includes three main models. At the high end, the Intel Core Ultra 9 285K features 8 P-cores and 16 E-cores. The P-cores operate at a base frequency of 3.7 GHz, with a maximum Turbo of 5.7 GHz. It has a base power of 125 W and draws 250 W at peak.

At the entry level, the Intel Core Ultra 5 245K offers 6 P-cores and 8 E-cores, with a base frequency of 4.2 GHz and a max Turbo of 5.2 GHz. It has a base power of 125 W, rising to 159 W under Turbo. The full lineup is detailed on the previous page.

Test setup

For our testing, we focused on the flagship models from each standard processor

family: the AMD Ryzen 9 9950X (16 cores, 32 threads) and the Intel Core Ultra 9 285K (8 P-cores, 16 E-cores). We also included the AMD Ryzen 7 9800X3D (8 cores, 16 threads) which, at the time, was the most powerful Ryzen 9000 Series chip with 3D V-Cache. At CES a few weeks ago, AMD announced the 12-core Ryzen 9 9900X3D and the 16-core Ryzen 9 9950X3D but these 3D V-Cache processors were not available for testing.

The AMD Ryzen 9 9950X and Intel Core Ultra 9 285K were housed in very similar workstations — both from specialist UK manufacturer, Scan. Apart from the CPUs and motherboards, the other specifications were almost identical.

The AMD Ryzen 7 9800X3D workstation came from Armari. All machines featured different GPUs, but our tests focused on CPU processing, so this shouldn’t impact performance. The full specs can be seen below. Testing was done on Windows 11 Pro 26100 with power plan set to high-performance.

AMD Ryzen 9 9950X

Scan 3XS GWP-A1-R32 workstation

See review on page WS16

• Motherboard: Asus Pro Art B650 Creator

• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)

• GPU: Nvidia RTX 4500 Ada Gen

• Storage: 2TB Corsair MP700 Pro SSD

• Cooling: Corsair Nautilus 360 cooler

• PSU: Corsair RM750e PSU

Intel Core Ultra 9 285K

Scan 3XS GWP-A1-C24 workstation

See review on page WS16

• Motherboard: Asus Prime Z890-P

• Memory: 64 GB (2 x 32 GB) Corsair DDR5 (5,600 MT/s)

• GPU: Nvidia RTX 2000 Ada Gen

• Storage: 2TB Corsair MP700 Pro SSD

• Cooling: Corsair Nautilus 360 cooler

• PSU: Corsair RM750e PSU

AMD Ryzen 7 9800X3D

Armari Magnetar MM16R9 workstation

See review on page WS20

• Motherboard: ASUS ROG Strix AMD B650E-I Gaming WiFi Mini-ITX

• Memory: 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO (5,600 MT/s)

• GPU: AMD Radeon Pro W7500

• Storage: 2TB Samsung 990 Pro SSD

• Cooling: Armari SPX-A6815NGR 280mm AIO+NF-P14 redex

• PSU: Thermaltake Toughpower SFX 850W ATX3.0 Gen5

On test

We tested all three workstations with a range of real-world applications used in AEC and product development. Where data existed, and was relevant, we also compared performance figures from older generation processors. This included mainstream models (12th, 13th and 14th Gen Intel Core, AMD Ryzen 7000) and high-end workstation processors (AMD Ryzen 7000 Threadripper and Threadripper Pro, Intel Xeon W-3400, and 4th Gen Intel Xeon Scalable).

Data for AMD Threadripper came from standard and overclocked workstations. In the benchmark charts, 90°C refers to the max temp set in the Armari Magnetar M64T7 ‘Level 1’ PBO (see Workstation Special Report 2024 - tinyurl.com/WSR24), while 900W refers to power draw of the processor in the Comino Grando workstation (see page WS22)

The comparisons aren’t entirely apples-to-apples — older machines were tested with different versions of Windows 11, as well as varying memory, storage, and cooling configurations. However, the results should still provide a solid approximation of relative performance.

CAD and BIM

Dassault Systèmes Solidworks (CAD) and Autodesk Revit (BIM) are bread and butter tools for designers, engineers, and architects. For the most part, these applications are single-threaded, although some processes are able to utilise a few CPU cores. Ray-trace rendering stands out as the exception, taking full advantage of all available cores.

In the Autodesk Revit 2025 RFO v3 benchmark the AMD Ryzen 9 9950X came out top in the model creation and export tests, in which Intel has traditionally held an edge. The AMD Ryzen 7 9800X3D performed respectably, but with its slightly lower maximum frequency, lagged behind a little.

In Solidworks 2022, things were much more even. In the rebuild, convert, and simulate subtests of the SPECapc benchmark, there was little difference between the AMD Ryzen 9 9950X and the Intel Core Ultra 9 285K. However, in the mass properties and boolean subtests, the Ryzen 9 9950X pulled ahead, only to be outshined by the Ryzen 7 9800X3D. Despite the 9800X3D having a lower clock speed, it looks like the additional cache provides a significant performance boost.

But how do the new chips compare to older generation processors? Our data shows that while there are improvements, the performance gains are not huge.

AMD’s performance increases ranged

‘‘
AMD’s cache-rich Ryzen 9000 X3D variants look particularly appealing for select workflows where having superfast access to a large pool of frequently used data makes them shine ’’

from 7% to 22% generation-on-generation, although the Ryzen 9 9950X was 9% slower in the mass properties test. Intel’s improvements were more modest, with a maximum gain of just 9%. In fact, in three tests, the Intel Core Ultra 9 285K was up to 19% slower than its predecessor.

Looking back over the last three years, Intel’s progress appears incremental. Compared to the Intel Core i9-12900K, launched in late 2021, the Intel Core Ultra 9 285K is only up to 26% faster.
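For clarity, percentage gains like these follow the usual convention of converting benchmark times into throughput. A quick illustrative sketch, using made-up timings rather than our benchmark results:

```python
def pct_faster(old_secs: float, new_secs: float) -> float:
    """'% faster' in throughput terms: how much more work per second
    the new chip does relative to the old one."""
    return (old_secs / new_secs - 1.0) * 100.0

# Hypothetical timings: a test dropping from 120 s to 100 s is 20% faster,
# while one rising from 100 s to 110 s comes out negative, i.e. a regression.
print(round(pct_faster(120, 100)))   # 20
print(round(pct_faster(100, 110)))   # -9
```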

Ray trace rendering

Ray trace rendering is exceptionally multi-threaded, so can take full advantage of all CPU cores. Unsurprisingly, the processors with the highest core counts — the AMD Ryzen 9 9950X (16 cores) and Intel Core Ultra 9 285K (24 cores) — topped our tests.

The Ryzen 9 9950X outperformed the Intel Core Ultra 9 285K in several benchmarks, delivering faster performance in V-Ray (17%), CoronaRender (15%), and KeyShot (11%). Intel’s decision to drop Hyper-Threading may have contributed to this performance gap, though Intel still claimed a slight lead in Cinebench, with a 5% advantage.

Gen-on-gen improvements were modest. Intel showed gains of 4% to 17%, while AMD delivered between 5% and 11% faster performance.

We also ran stress tests to assess sustained performance. In several hours of rendering in V-Ray, the Ryzen 9 9950X held steady at 4.91 GHz, while the Ryzen 7 9800X3D maintained 5.17 GHz. Meanwhile, the P-cores of the Intel Core Ultra 9 285K reached 4.86 GHz.

Power consumption is another important consideration. The Ryzen 9 9950X drew 200W, whereas the Intel Core Ultra 9 285K peaked at 240W — slightly lower than its predecessor, 14th Gen Intel Core.

Since rendering scales exceptionally well with higher core counts, the best performance is achieved with high-end workstation processors like AMD Ryzen Threadripper Pro.

Simulation (FEA and CFD)

Engineering simulation encompasses Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD),

both of which are extremely demanding computationally.

FEA and CFD utilise a variety of solvers, each with unique behaviours, and performance can vary depending on the dataset. Generally, CFD scales well with additional CPU cores, allowing studies to solve significantly faster. Moreover, CFD performance benefits greatly from higher memory bandwidth, making these factors critical for optimal results.
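Why extra cores help but rarely in direct proportion can be sketched with Amdahl's law. A minimal illustration follows; the 95% parallel fraction is a hypothetical figure, and real solvers are often capped earlier by memory bandwidth, as noted above.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of a workload parallelises (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a solver that is 95% parallel falls well short of linear scaling:
for n in (8, 16, 24):
    print(f"{n} cores -> {amdahl_speedup(0.95, n):.2f}x")
```

On these assumptions, 16 cores deliver only around a 9x speedup, not 16x, which is one reason core count alone doesn't settle the Intel vs AMD question in simulation.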

For our testing, we selected three workloads from the SPECworkstation 3.1 benchmark and one from SPECworkstation 4.0. The CFD tests included Rodinia (representing compressible flow), WPCcfd (modelling combustion and turbulence), and OpenFoam with XiFoam solver. For FEA, we used CalculiX, which simulates the internal temperature of a jet engine turbine.

The Intel Core Ultra 9 285K claimed the top spot in all the tests. The AMD Ryzen 9 9950X followed in second place, except in the OpenFoam benchmark, where it was outperformed by the Ryzen 7 9800X3D — likely due to the additional cache.

Of course, for those deeply invested in simulation, high-end workstation processors, such as AMD Ryzen Threadripper Pro and Intel Xeon offer a significant advantage, thanks to their higher core counts and superior memory bandwidth. For a deeper dive, check out last year’s workstation special report: www.tinyurl.com/WSR24.

Reality modelling

Reality modelling is becoming prevalent in the AEC sector. Raw data captured by drones (photographs / video) and terrestrial laser scanners must be turned into point clouds and reality meshes — a process that is very computationally intensive.

We tested a range of workflows using three popular tools: Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.

As many of the workflows in these applications are multi-threaded, we were surprised that the 8-core AMD Ryzen 9800X3D outperformed the 16-core AMD Ryzen 9950X and 24-core Intel Core Ultra 9 285K in several tests. This is likely due to its significantly larger cache, but possibly also down to its single CCD design, which houses all 8 CPU cores.

In contrast, the 16-core AMD Ryzen 9950X, which is made up of two 8-core CCDs, may suffer from latency when cores from different CCDs need to communicate with each other. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare.

The other point worth noting is the impact of memory speed. In some workflows we experienced dramatically faster computation with faster memory. Simultaneous multi-threading (SMT) also had an impact on performance.

We explore reality modelling in much more detail on page WS33, where you will also find all the benchmark results.

The verdict

For the past few years, Intel and AMD have been battling it out in the mainstream processor market. Intel has traditionally dominated single threaded and lightly threaded workflows like CAD, BIM, and reality modelling, while AMD has been the go-to choice for multi-threaded rendering.

But the landscape is shifting. With the ‘Zen 5’ AMD Ryzen 9000 Series, AMD is starting to take the lead in areas where Intel once ruled supreme. For instance, in Solidworks CAD, AMD is delivering solid generation-on-generation performance improvements, while Intel appears to be stagnating. In fact, some workflows show the Intel Core Ultra 200S trailing behind older 14th Gen Intel Core processors.

That said, for most workstation users, AMD’s rising stock won’t mean much unless major OEMs like Dell, HP, and Lenovo start giving Ryzen the same level of attention they’ve devoted to AMD Ryzen Threadripper Pro. A lot will depend on AMD releasing Pro variants of the Ryzen 9000 Series to meet the needs of enterprise users.

For everyone else relying on specialist manufacturers, workstations with the latest Intel and AMD chips are already available. This includes AMD’s cache-rich Ryzen 9000 X3D variants, which look particularly appealing for select workflows where having superfast access to a large pool of frequently used data makes them shine.

Scan 3XS

GWP-A1-C24 & GWP-A1-R32

Between these two attractive desktops, Scan has most bases covered in AEC and product development, from CAD/BIM and visualisation to simulation, reality modelling and beyond, writes Greg Corke

Specialist workstation manufacturers like Scan often stand out from the major OEMs, as they offer the very latest desktop processors. The Scan 3XS GWP-A1-C24 features the new “Arrow Lake” Intel Core Ultra 200S Series (with the C in the model name standing for Core) while the Scan 3XS GWP-A1-R32 offers the ‘Zen 5’ AMD Ryzen 9000 Series (R for Ryzen). In contrast, Dell, HP, and Lenovo currently rely on older 14th Gen Intel Core processors, while their AMD options are mostly limited to the high-end Ryzen Threadripper Pro 7000 Series.

Both Intel and AMD machines share several Corsair branded components, including 64 GB (2 x 32GB) of Corsair Vengeance DDR5 5600 memory, a 2TB Corsair MP700 Pro SSD, a Corsair Nautilus 360 cooler, and Corsair RM750e PSU.

The 2TB NVMe SSD delivers blazingly fast read and write speeds combined with solid endurance. In CrystalDiskMark it delivered 12,390 MB/sec sequential read and 11,723 MB/sec sequential write. Its endurance makes it well-suited for intensive read / write workflows, such as reality modelling. Corsair backs this up with a five-year warranty or a rated lifespan of 1,400 total terabytes written (TBW), whichever comes first.
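To put that 1,400 TBW rating in context, endurance can be translated into years at a given daily write load. A quick sketch; the 500 GB/day figure is a hypothetical heavy reality-modelling workload, not something we measured.

```python
# How long a drive's TBW (terabytes written) rating lasts at a steady write load.
def endurance_years(tbw: float, gb_written_per_day: float) -> float:
    return (tbw * 1000.0) / gb_written_per_day / 365.0

# A sustained 500 GB of writes every day would still take well beyond
# the five-year warranty period to exhaust 1,400 TBW:
print(f"{endurance_years(1400, 500):.1f} years")
```

In practice the five-year warranty expires first, so write endurance is unlikely to be the limiting factor for most users.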

GWP-A1-C24

■ Intel Core Ultra 9 285K processor

(3.7 GHz, 5.7 GHz boost) (24 cores - 8 P-cores + 16 E-cores)

■ Nvidia RTX 2000

Ada Generation GPU (16 GB)

■ 64 GB (2 x 32 GB)

Corsair Vengeance DDR5 5,600 memory

■ 2TB Corsair MP700 Pro SSD

■ Asus Prime Z890-P motherboard

■ Corsair Nautilus 360 cooler

■ Corsair RM750e Power Supply Unit

■ Fractal North Charcoal Mesh case (215 x 469 x 447mm)

■ Microsoft Windows 11 Pro 64-bit

■ 3 Years warranty –1st Year Onsite, 2nd and 3rd Year RTB (Parts and Labour)

This hardware pairing is well-suited to CAD, BIM, and entry-level viz workflows, as well as CPU-intensive tasks like point cloud processing, photogrammetry, and simulation.

The downside of the chassis is that it’s relatively large, measuring 215 x 469 x 447mm (W x H x D). However, this spacious design makes accessing internal components incredibly easy, a convenience further enhanced by Scan’s excellent trademark cable management.

The all-in-one (AIO) liquid CPU cooler features a 360mm radiator, bolted on to the top of the chassis. Cooled by three low-duty RS120 fans, both machines run cool and remain very quiet, even when rendering for hours.

■ £2,350 (Ex VAT)

■ scan.co.uk/3xs

The Nvidia RTX 2000 Ada Generation is a compact, low-profile, dual-slot GPU featuring four mini DisplayPort connectors. With a conservative power rating of 70W, it gets all its power directly from the Asus Prime Z890-P motherboard’s PCIe slot. Despite its modest power requirements, it delivered impressive graphics performance in CAD and BIM, easily handling all our 3D modelling tests in Solidworks and Revit. 16 GB of onboard memory allows it to work with fairly large visualisation datasets as well.

Intel Core Ultra 200S Series

Our Intel-based Scan 3XS GWP-A1-C24 workstation was equipped with a top-end Intel Core Ultra 9 285K CPU and an entry-level workstation GPU, the Nvidia RTX 2000 Ada Generation.

In real-time visualisation software, don’t expect silky smooth navigation with large models at high resolutions. However, 3D performance is still acceptable. In Chaos Enscape, for example, we got 14 frames per second (FPS) at 4K with our demanding school project test scene.

From the exterior, both Scan workstations share the same sleek design, housed in the Fractal North Charcoal Mesh case with dark walnut wood strips on the front. While wood accents in PC cases can sometimes feel contrived, this ATX Mid-Tower strikes an excellent balance between form and function. Its elegant, minimalist aesthetic enhances the overall visual appeal without compromising airflow. Behind the wooden façade, an integrated mesh ensures efficient ventilation, with air drawn in through the front and expelled through the rear and top. Adding to its refined look, the case has understated brass buttons and ports on the top, including two USB 3.0 Type-A, one USB 3.1 Gen2 Type-C, as well as power button, mic, and HD audio ports.

Outputting ray trace renders in KeyShot, V-Ray and Twinmotion was noticeably slower compared to more powerful Nvidia RTX GPUs. That said, it’s still a viable solution if you’re willing to wait. In Twinmotion, for example, it cranked out five 4K path traced renders in 1,100 seconds, just under twice as long as it took the Nvidia RTX 4500 Ada Generation in Scan’s Ryzen-based workstation.

In CPU workflows, the Intel Core Ultra 9 285K CPU delivered mixed results. While it outperformed the AMD Ryzen 9 9950X in a few specific workflows (as detailed in our in-depth article on page WS10), the performance gains over 14th Gen Intel Core processors, which launched in Q4 2023, were relatively minor. In fact, in some workflows, it even lagged behind Intel’s previous generation flagship mainstream CPU, the Intel Core i9-14900K.

One advantage that Scan’s Intel workstation holds over its AMD counterpart is in memory performance. Both machines were configured with 64 GB of DDR5 RAM running at 5,600 MT/s. However, when memory is increased to 128 GB, filling all four DIMM slots, the memory clock speed must be reduced to keep everything stable. On the Intel system, it only drops a little, down to 4,800 MT/s, but on the AMD system, it’s much more significant, falling to 3,600 MT/s. This reduction can have a notable impact on performance in memory-intensive tasks like simulation and reality modelling, giving the Intel system an edge when working with large datasets in select workflows.

AMD Ryzen 9000 Series

Our AMD-based Scan 3XS GWP-A1-R32 workstation is set up more for visualisation, with an Nvidia RTX 4500 Ada Generation GPU (24 GB) paired with the top-end AMD Ryzen 9 9950X CPU.

The full-length, double-height Nvidia GPU is rated at 210W, so must draw some of its power directly from the 750W power supply unit (PSU). It comes with four DisplayPort connectors.

The RTX 4500 Ada marks a big step up from the RTX 2000 Ada. In real-time viz software Enscape we got double the frame rates at 4K resolution (28.70 FPS), and more than double the performance in most of our ray trace rendering tests. With 50% more on-board memory, you also get more headroom for larger viz datasets.

The CPU performance of the system was equally impressive. While the previous generation Ryzen 7000 Series enjoyed a lead over its Intel equivalent in multi-threaded ray tracing, it lagged behind in single threaded workflows. But with the Ryzen 9000 Series that’s no longer the case. AMD has significantly improved single threaded performance gen-on-gen, while Intel’s performance has stagnated a little. It means AMD is now sometimes the preferred option in a wider variety of workflows.

But the Scan 3XS GWP-A1-R32 is not without fault. In select reality modelling workflows, it was significantly slower than its Intel counterpart. We expect this is down to its dual chiplet (CCD) design, something we explore in more detail on page WS10.

Also, as mentioned earlier, those who need more system memory will have to accept significantly slower memory speeds on AMD than with Intel. This can impact performance dramatically. When aligning images in Capturing Reality, for instance, going from 64 GB (5,600 MT/s) to 128 GB (3,600 MT/s) on the AMD workstation saw computation times increase by as much as 64%. And

in simulation software, OpenFoam CFD, performance dropped by 31%.

Conclusion

Both Scan 3XS workstations are impressive desktops, offering excellent performance housed in aesthetically pleasing chassis. The choice between Intel and AMD depends on the specific demands of your workflows.

In terms of CAD and BIM, performance is similar across both platforms, as shown in our benchmark charts on page WS25. For visualisation, AMD holds a slight edge, but this may not be a deciding factor if your visualisation tasks rely more on GPU computation rather than CPU computation.

When it comes to reality modelling, Intel may not always have the lead, but it offers more consistent performance across various tasks. Additionally, Intel’s support for faster memory at larger capacities could make a significant difference. With 128 GB, Intel can achieve noticeably faster memory speeds, which translates into potential performance gains in certain workflows.

Ultimately, both machines are fully customisable, allowing you to select the components that best match your specific needs. Whether you prioritise raw processing power, memory speed, or GPU performance, Scan offers flexibility to tailor the workstation to your requirements.

GWP-A1-R32

Review: Boxx Apexx A3

This compact desktop with liquid-cooled ‘Zen 5’ AMD Ryzen 9000 Series processor and Nvidia RTX 5000 Ada Generation GPU is a powerhouse for design viz, writes Greg Corke

In the world of workstations, Boxx is somewhat unique. Through its extensive reseller channel, it has the global reach of a major workstation OEM, but the technical agility of a specialist manufacturer.

Liquid cooling is standard across many of its workstations, and you can always expect to see the latest processors soon after launch. And there’s a tonne to choose from. In addition to workstation staples like Intel Core, Intel Xeon, AMD Ryzen Threadripper Pro, and (to a lesser extent) AMD Ryzen, Boxx goes one step further with AMD Epyc, a processor typically reserved for dual-socket servers. The company also stands out for its diverse range of workstation form factors, including desktops, rack-mounted systems, and high-density datacentre solutions.

Boxx played a key role in AMD’s revival in the workstation market, debuting the AMD Ryzen-powered Apexx A3 in 2019.

The latest version of this desktop workstation may look identical on the outside, but inside, the new ‘Zen 5’ AMD Ryzen 9000 Series chip is a different beast entirely. 2019’s ‘Zen 2’ AMD Ryzen 3000 Series stood out for its multi-threaded performance but fell short of Intel in single-threaded tasks critical for CAD and BIM. Now, as we explore in our ‘Intel vs. AMD’ article on page WS10, AMD has the edge in a much broader range of workflows.



The chassis offers several practical features. The front mesh panel easily clips off, providing access to a customer-replaceable filter. The front I/O panel is angled upward for convenient access to the two USB 3.2 Gen 2 (Type-A) ports and one USB 3.2 Gen 2 (Type-C) port. Around the back, you’ll find an array of additional ports, including two USB 4.0 (Type-C), three USB 3.2 Gen 1 (Type-A), and five USB 3.2 Gen 2 (Type-A).

For connectivity, there’s fast 802.11be Wi-Fi 7 with rear-mounted antennas, although most users — particularly those working with data from a central server — are likely to utilise the 5 Gigabit Ethernet LAN for maximum speed and reliability.

Product spec

■ AMD Ryzen 9 9950X processor (4.3 GHz, 5.7 GHz boost) (16-cores, 32 threads)

■ 96 GB (2 x 48 GB) Crucial DDR5 memory (5,600 MT/s)

■ 2TB Crucial T705 NVMe PCIe 5.0 SSD

■ Asrock X870E Taichi motherboard

■ Nvidia RTX 5000 Ada Generation GPU (32 GB)

■ Asetek 624T-M2 240mm All-in-One liquid cooler

■ Boxx Apexx A3 case (174 x 388 x 452mm)

■ Microsoft Windows 11 Pro

■ 3 Year standard warranty

■ USD $8,918 (starting at $3,655)

■ www.boxx.com www.boxx-tech.co.uk

The chassis layout is different to most other workstations of this type, with the motherboard flipped through 180 degrees, leaving the rear I/O ports at the bottom and the GPUs at the top — upside down.

To save space, the power supply sits almost directly in front of the CPU. This wouldn’t be possible in an air-cooled system, because the heat sink would get in the way. But with the Boxx Apexx A3, the CPU is liquid cooled, and the compact All-in-one (AIO) Asetek closed loop cooler draws heat away to a 240mm radiator, located at the front of the machine.

The Boxx Apexx A3 is crafted from aircraft-grade aluminium, delivering a level of strength that surpasses the off-the-shelf cases used by many custom manufacturers. Considering it can host up to two high-end GPUs, it’s surprisingly compact, coming in at 174 x 388 x 452mm, significantly smaller than the other AMD Ryzen 9000-based workstation in this report, the Scan 3XS GWP-A1-R32, which we review on page WS16.

Our test machine came with the 16-core AMD Ryzen 9 9950X, the flagship model in the standard Ryzen 9000 Series. Partnered with the massively powerful Nvidia RTX 5000 Ada Generation GPU, this workstation screams design visualisation. And it has some serious clout.

Our test machine’s focus on GPU computation means the AMD Ryzen 9 9950X’s 16 cores may spend a good amount of time underutilised. Opting for a CPU with fewer cores could save you some cash, though it would come with a slight reduction in single-core frequency.

As it stands, the system delivers impressive CPU benchmark scores across CAD, BIM, ray-trace rendering, and reality modelling. However, in some tests it was narrowly outperformed by the 3XS GWP-A1-R32, and when pushing all 16 cores to their limits in V-Ray, fan noise was a little more noticeable (although certainly not loud).

Boxx configured our test machine with 96 GB of Crucial DDR5 memory, carefully chosen to deliver the maximum capacity with the fastest performance. With two 48 GB modules, it can run at 5,600 MT/s. Anything above that, up to a maximum of 192 GB, would see speeds drop significantly.

Rounding out the specs is a 2TB Crucial T705 SSD, the fastest PCIe 5.0 drive we’ve tested. It delivered exceptional sequential read/write speeds in CrystalDiskMark, clocking in at an impressive 14,506 MB/s read and 12,573 MB/s write — outpacing the Corsair MP700 Pro in the Scan 3XS workstation. However, it’s rated for 1,200 total terabytes written (TBW), giving it slightly lower endurance.
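To put that 1,200 TBW rating in perspective, here is a rough endurance estimate (the 500 GB/day figure is an assumption for illustration, representing very heavy use):

```python
# Rough SSD lifespan estimate from the rated endurance (TBW).
rated_tbw = 1200          # Crucial T705 2TB: rated total terabytes written
daily_writes_tb = 0.5     # assumed: 500 GB written per day (very heavy use)
lifespan_years = rated_tbw / daily_writes_tb / 365
print(f"~{lifespan_years:.1f} years before the rating is exhausted")
```

Even under this punishing workload the rating outlasts a typical workstation refresh cycle, so the lower endurance figure is unlikely to matter in practice.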

The Asrock X870E Taichi motherboard includes room for a second SSD, while the chassis features two hard disk drive (HDD) cradles at the top. However, with modern SSDs offering outstanding price and performance, these cradles are likely to remain empty for most users.

In Twinmotion it delivered five 4K path traced renders in a mere 342 seconds and in Lumion four FHD ray trace renders in 70 seconds. That’s more than three times quicker than an Nvidia RTX 2000 Ada. And with 32 GB of onboard memory to play with, the GPU can handle very complex scenes.

The verdict

The Boxx Apexx A3 is a top-tier compact workstation, fully customisable and built to order, allowing users to select the perfect combination of processors to meet their needs. Among specialist system builders, Boxx is probably the closest competitor to the major workstation OEMs like Dell, HP, and Lenovo. However, none of these major players have yet released an AMD Ryzen 9000-based workstation — and given past trends, there’s no guarantee they will. This gives Boxx a particular appeal, especially for companies seeking a globally available product powered by the latest ‘Zen 5’ AMD Ryzen processors.

ENGINEERED, NOT JUST ASSEMBLED
IT LOOKS LIKE ART AND WORKS LIKE GRANDO
GRANDO – A PRODUCT LINE BY COMINO

Comino Grando

Liquid-Cooled Silent Workstations & High-Performance Multi-GPU Servers

Designed for AI – training, fine tuning, inference, deep learning and more

Boosted in Performance by up to 50% – outperform standard air-cooled machines

Reliable in Operation within premises up to 40°C – stays cool and quiet under demanding conditions

Unique Configurations – Scale up to 8 high-end GPUs (NVIDIA RTX 6000 ADA, H200, RTX 5090)

Optimized with leading AI frameworks and inference tools – Stable Diffusion, Llama, Midjourney, Hugging Face, PyTorch, TensorFlow, Character.AI, QuillBot, DALL·E and more

Engineering as Art

Meticulously selected and engineered components maximize longevity and performance

Controller – the System’s Core Independent, autonomous monitoring ensures constant oversight and stability

Full-Cover Comino CPU Water Block

Cools both CPU and power circuitry for peak performance

Single-Slot Comino GPU Water Blocks

Uniquely designed for top efficiency and dense compute

API Integration

Compatible with modern monitoring tools like Grafana and Zabbix

Comprehensive Sensors

Track temperatures, airflow, coolant level, flow and more for precise analysis

Compact, Modular & Easily Serviced Chassis

Quick access for minimal downtime

* GRANDO systems are compatible with EPYC, Threadripper, Xeon and Xeon W CPUs, NVIDIA RTX A6000, A40, RTX 6000 ADA, L40S, A100, H100, H200, RTX 3090, RTX 4090, RTX 5090, AMD Radeon PRO W7900, Radeon 7900XTX GPUs. ** Server equipped with redundant power supply system for 24/7 stable operation.

Review: Armari

Magnetar MM16R9

This compact desktop workstation, built around the gamer-favourite Ryzen X3D processor, is also a near perfect fit for reality modelling, writes Greg Corke

The first AMD Ryzen processor to feature AMD 3D V-Cache technology launched in 2022. Since then, newer versions have become the processors of choice for hardcore gamers. This is largely thanks to the additional cache — a superfast type of memory connected directly to the CPU — which can dramatically boost performance in certain 3D games. As we discovered in our 2023 review of the ‘Zen 4’ AMD Ryzen 9 7950X3D, that applies to some professional workflows too.

With the launch of the ‘Zen 5’ AMD Ryzen 9000 Series, AMD has opted for a staggered release of its X3D variants. The 8-core AMD Ryzen 7 9800X3D was first out of the blocks in November 2024. Now the 12-core AMD Ryzen 9 9900X3D and 16-core AMD Ryzen 9 9950X3D have just been announced and should be available soon.


But this is not a workstation you’d buy for visualisation, or indeed CAD or BIM. For those workflows, the non-X3D AMD Ryzen 9000 Series processors would be a better fit, and are also available as options for this machine. For instance, the 16-core AMD Ryzen 9 9950X has a significantly higher single-core frequency to accelerate CAD, and double the number of cores to cut render times in half.

The X3D chips shine in tasks that benefit from fast access to large amounts of cache. As we detail in our dedicated article on page WS34, reality modelling is one such workflow. In fact, in many scenarios, Armari’s compact desktop workstation not only outperformed the 16-core AMD Ryzen 9 9950X processor but the 96-core AMD Ryzen Threadripper Pro 7995WX as well.

Product spec

■ AMD Ryzen 7 9800X3D processor (4.7 GHz, 5.2 GHz boost) (8-cores, 16 threads)

■ 96 GB (2 x 48 GB) Corsair Vengeance DDR5-6000C30 EXPO memory (5,600 MT/s)

■ 2TB Samsung 990 Pro M.2 NVMe PCIe 4.0 SSD

■ ASUS ROG Strix AMD B650E-I Gaming Wifi Mini-ITX Motherboard

■ AMD Radeon Pro W7500 GPU (8 GB)

■ Armari SPXA6815NGR 280mm AIO + NF-P14 redux CPU Cooler

■ Coolermaster MasterBox NR200P Mini ITX case (376 x 185 x 292mm)

■ Microsoft Windows 11 Pro

■ Armari 3 Year basic warranty

■ £1,999 (Ex VAT)

■ www.armari.com


Armari, true to form, is continually looking for ways to improve performance. Just before we finalised this review, the company sent an updated machine with 48 GB (2 x 24 GB) of faster 8,000 MT/s G.Skill Trident Z5 Royal Neo DDR5 memory, paired with the newer Asus ROG Strix B850-I ITX motherboard.

UK manufacturer Armari has been a long-term advocate of AMD Ryzen processors and has now built a brand-new workstation featuring the AMD Ryzen 9800X3D. With a 120W TDP, rising to 162W under heavy loads, it’s relatively easy to keep cool. This allows Armari to fit the chip into a compact Coolermaster MasterBox NR200P Mini ITX case, which saves valuable desk space. Even though the components are crammed in a little, the 280mm AIO CPU cooler ensures the system runs quiet. While the fans spin up during all-core tasks like rendering in V-Ray, the noise is perfectly acceptable for an office environment.

However, the workstation is not quite the perfect match for mainstream reality modelling. While the AMD Radeon Pro W7500 GPU is great for CAD, it’s incompatible with select workflows in Leica Cyclone 3DR and RealityCapture from Epic Games — those accelerated by Nvidia CUDA. Here, the Nvidia RTX A1000, an equivalent 8 GB GPU, would be the better option.


In our tests, this new setup provided a slight (1-2%) performance boost in some reality modelling tasks. However, since our most demanding test requires 60 GB of system memory and 48 GB is the current maximum capacity for this memory speed, it’s hard to fully gauge its potential. For the time being, the higher-speed memory feels like a step toward future improvements, pending the release of larger-capacity kits.

Having more cache probably isn’t the only reason why the 9800X3D processor excels. Because the chip is made from a single CCD, there’s less latency between cores. We delve into this further in our reality modelling article on page WS34. It will be fascinating to see how the 12-core and 16-core X3D chips compare.

The test machine came with 96 GB (2 x 48 GB) of Corsair Vengeance DDR5-6000C30 Expo memory, running at 5,600 MT/s. While the system supports up to 192 GB, anything above 96 GB requires the memory speed to be lowered to 3,600 MT/s, a reduction that can lead to noticeable performance drops in some memory-intensive reality modelling workflows.

If we were to look for faults, it would be that the machine’s top panel connections are USB-A only, which is too slow to transfer terabytes of reality capture data quickly, but Armari tells us that production systems will have a front USB-C Gen 2x2 port.

Overall, Armari has done it again with another outstanding workstation. It’s not just powerful — it’s compact and portable as well — which could be a big draw for construction firms that need to process reality data while still on site.

‘‘
The Armari Magnetar MM16R9 is not just powerful — it’s compact and portable — which could be a big draw for construction firms that need to process reality data on site

Review: Comino Grando workstation RM

This desktop behemoth blurs the boundaries between workstation and server and, with an innovative liquid cooling system, delivers performance like no other, writes Greg Corke

Firing up a Comino Grando feels more like prepping for take-off than powering on a typical desktop workstation. Pressing both front buttons activates the bespoke liquid cooling system, which then runs a series of checks, before booting into Windows or Linux.

The cooling system is an impressive feat of precision engineering. Comino manufactures its own high-performance water blocks out of copper and stainless steel. And these are not just for the CPU. Unlike most liquid cooled workstations, the Comino Grando takes care of the GPUs and motherboard VRMs as well. It’s only the system memory and storage that are cooled by air in the traditional way.

Not surprisingly, this workstation is all about ultimate performance. This is exemplified by the 96-core AMD Threadripper Pro 7995WX processor, which Comino pushes to the extreme. While most air-cooled Threadripper Pro workstations keep the processor at its stock 350W, Comino cranks it up to an astonishing 900W+, with the CPU settling around 800W during sustained multi-core workloads. That’s a lot of electricity to burn.

The result, however, is truly astonishing all-core frequencies. During rendering in Chaos V-Ray, the 96-core chip initially hit an incredible 4.80 GHz, before landing on a still-impressive 4.50 GHz. Even some workstations with fewer cores struggle to maintain these all-core speeds.

Not surprisingly, the test scores were off the chart. In the V-Ray 5.0 benchmark, it delivered an astonishing score of 145,785 — a massive 42% faster than an air-cooled Lenovo ThinkStation P8, with the same 96-core processor.

The machine also delivered outstanding results in our simulation benchmarks. Outside of dual Intel Xeon Platinum workstations — which Comino also offers — it’s hard to imagine anything else coming close to its performance.

As you might expect, running a machine like this generates some serious heat. Forget portable heaters — rendering genuinely became the best way to warm up my office on a chilly winter morning.

While the CPU delivers a significant performance boost, the liquid cooled GPUs run at standard speeds. Comino replaces the original air coolers with a slim water block, a complex process that’s explained well in this video (www.tinyurl.com/Comino-RTX)

Product spec

■ AMD Ryzen Threadripper Pro 7995WX processor (2.5 GHz, 5.1 GHz boost) (96-cores, 192 threads)

■ 256 GB (8 x 32 GB) Kingston RDIMM DDR5 6,400 MHz CL32 REG ECC memory

■ 2TB Gigabyte Aorus M.2 NVMe 2280 (PCIe 4.0) SSD

■ Asus Pro WS WRX90E-SAGE motherboard

■ 2 x Nvidia RTX 6000 Ada Gen GPU (48 GB)

■ Comino custom liquid cooling system

■ Comino Grando workstation chassis (439 x 681 x 177mm)

■ Microsoft Windows 11 Pro

■ 2-year warranty (upgradable to up to 5 years with on-site support)

■ £31,515 (Ex VAT)

■ £33,515 (Ex VAT) with 4 x 4TB M.2 SSD RAID 0 upgrade

■ £24,460 (Ex VAT) with 2 x AMD Radeon Pro W7900 GPUs

■ www.grando.ai

This design allows each GPU to occupy just a single PCIe slot on the motherboard, compared to the two or three slots required by the same high-end GPU in a typical workstation. Normally, modifying a GPU like this would void the manufacturer’s warranty. However, Comino offers a full two years, covering the entire workstation, with the option to extend up to five.

The machine can accommodate up to seven GPUs — though these are limited to mid-range models. For high-end professional GPUs, support is capped at four cards, although Comino offers a similar server with more power and noisier fans that can host more. Options include the Nvidia RTX 6000 Ada Generation (48 GB), Nvidia L40S (48 GB), Nvidia H100 (80 GB), Nvidia A100 (80 GB), and AMD Radeon Pro W7900 (48 GB). Keen observers will notice many of these GPUs are designed for compute workloads, such as engineering simulation and AI. Most notably, a few are passively cooled, designed for datacentre servers, so are not available in traditional workstations.

For consumer GPUs, the system can handle up to two cards, such as the Nvidia GeForce RTX 4090 (24 GB) and AMD Radeon 7900 XTX (24 GB). Comino is also working on a solution for 2 x Nvidia H200 (141 GB) or 2 x Nvidia GeForce RTX 5090 (32 GB).

Our test machine was equipped with a pair of Nvidia RTX 6000 Ada Generation GPUs. These absolutely ripped through our GPU rendering benchmarks, easily setting new records in tests that are multi-GPU aware. Compared to a single Nvidia RTX 6000 Ada GPU, V-Ray was around twice as fast. The gains in other apps were less dramatic, with an 83% uplift in Cinebench and 65% in KeyShot.
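Those uplifts can be reframed as parallel efficiency — the measured speedup divided by the ideal 2x for two identical GPUs. A quick sketch using the figures above:

```python
# Dual-GPU parallel efficiency: measured speedup / ideal speedup (2x).
def dual_gpu_efficiency(speedup: float) -> float:
    return speedup / 2.0

# Speedups over a single Nvidia RTX 6000 Ada, as measured in our tests.
for app, speedup in [("V-Ray", 2.00), ("Cinebench", 1.83), ("KeyShot", 1.65)]:
    print(f"{app}: {dual_gpu_efficiency(speedup):.1%} efficiency")
```

V-Ray's near-perfect scaling is what makes it the showcase benchmark for multi-GPU rigs; the other renderers leave a share of the second card idle.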

Liquid magic

Comino’s liquid cooling system is custom-built, featuring bespoke water blocks and a 450ml coolant reservoir with integrated pumps.

Coolant flows through high-quality flexible rubber tubing, passing from component to component before completing the loop via a large 360mm radiator located at the rear of the machine.


Positioned alongside this radiator are three (yes, three) 1,000W SFX-L PSUs.

The system is cooled by a trio of Noctua 140mm 3,000 RPM fans, which drive airflow from front to back. Cleverly, the motherboard is housed in the front section of the chassis, ensuring the coldest air passes over the RAM and other air-cooled components.

Maintenance and upgrades

With such an advanced cooling system, the Comino Grando can feel a bit intimidating. Thankfully, end user maintenance is surprisingly straightforward.

Swapping out a GPU, while more intricate than on a standard desktop, isn’t as challenging as you might expect. For upgrades, Comino can ship replacement GPUs pre-fitted with custom cooling blocks and rubber tubes. For our testing, Comino supplied a pair of AMD Radeon Pro W7900s. Despite their single-slot design, these GPUs are deceptively heavy, weighing in at 1.9 kg each — significantly more than the 1.2 kg of a stock W7900 fitted with its standard cooler. It’s easy to see why a crossbar bracket is essential to keep these hefty GPUs securely in place.

Installing the GPU is straightforward: plug it into the PCIe slot, secure it with screws as usual, and then plumb in the cooling system. The twist-and-click Quick Disconnect Couplings (QDCs) make this process easy, with colour-coded blue and red connectors for cold and warm lines. Thanks to Comino’s no-spill design, the tubes come pre-filled with coolant, so there’s no need to add more after installation. (If you’re curious about the details, Comino provides a step-by-step guide in this video - www.tinyurl.com/Comino-GPU)

Users are given control over the fans. Using the buttons on the front of the machine, one can select from max performance, normal, silent, or super silent temperature profiles — each responding exactly how you’d expect in terms of acoustics.

Naturally, coolant evaporates over time and will need occasional topping up. Comino recommends checking levels every three months, which is easy to do via the reservoir window on the front panel. A bottle of coolant is included in the box for convenience.

As for memory and storage, they’re air-cooled, making their maintenance no different from a standard desktop workstation.

All of our testing was conducted in ‘normal mode,’ where the noise level was consistent and acceptable. The ‘max performance’ mode, however, was much louder — better suited to a server room — and didn’t even show a significant performance boost. On the other hand, ‘super silent’ mode delivered an impressively quiet experience, with only a 3.5% drop in V-Ray rendering performance.

The front LED text display is where tech enthusiasts can geek out, cycling through metrics like flow rates, fan and pump RPM, and the temperatures of the air, coolant, and components. For a deeper dive, the Comino Monitoring System offers access to this data and more via a web browser.

Our system was equipped with 256 GB of high-speed Kingston DDR5 6,400 MHz CL32 REG ECC memory, operating at 4,800 MT/s. All eight slots were fully populated with 32 GB modules, maximising the Threadripper Pro processor’s 8-channel memory architecture for peak performance. For workloads requiring massive datasets, the system can support up to an impressive 2 TB of memory.

The included SSD is a standard 2TB Gigabyte AORUS Gen4, occupying one of the four onboard M.2 slots. However, there’s plenty of scope for performance upgrades. One standout option is the HighPoint SSD7505 PCIe 4.0 x16 4-channel NVMe RAID controller, which can be configured with four 4TB PNY XLR8 CS3140 M.2 SSDs in RAID 0 for blisteringly fast read/write speeds.
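RAID 0 stripes data across all four drives, so sequential throughput scales roughly with drive count — until the controller's PCIe 4.0 x16 uplink becomes the ceiling. A rough sketch (the drive and slot bandwidth figures are approximations):

```python
# Theoretical RAID 0 sequential read scaling for four NVMe drives.
drive_read_gbs = 7.5       # approx. sequential read of one PCIe 4.0 SSD
drives = 4
raid0_theoretical = drive_read_gbs * drives
pcie4_x16_gbs = 31.5       # approx. usable bandwidth of a PCIe 4.0 x16 slot
achievable = min(raid0_theoretical, pcie4_x16_gbs)
print(f"theoretical: {raid0_theoretical} GB/s, capped at ~{achievable} GB/s")
```

With four drives the array sits just under the slot's ceiling, which is why this particular combination is a sensible pairing.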

Rack ‘em up

The Comino Grando blurs the boundaries between workstation and server. It’s versatile enough to fit neatly under a desk or mount in a 4U rack space (rack-mount kit included).

What’s more, with the Asus Pro WS WRX90E-SAGE SE motherboard’s integrated BMC chip with IPMI (Intelligent Platform Management Interface) for out-of-band management, the Comino Grando can be fully configured as a remote workstation.

The verdict

The Comino Grando is, without question, the fastest workstation we’ve ever tested, leaving air-cooled Threadripper Pro machines from major OEMs in its wake. The only close contender we’ve seen is the Armari Magnetar M64T7, equipped with a liquid-cooled 64-core AMD Ryzen Threadripper 7980X CPU (see our 2024 Workstation Special Report - www.tinyurl.com/WSR24).

‘‘ With support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop

Perhaps its most compelling feature, however, is its GPU flexibility. The Nvidia RTX 6000 Ada Generation is a staple for high-end workstations, but very few can handle four — a feat typically reserved for dual Xeons. What’s more, with support for datacentre GPUs, the Comino Grando can potentially transform workflows by giving simulation and AI specialists ready access to vast amounts of computational power on the desktop.

However, you’ll need some serious muscle to lift it into the rack — it’s by far the heaviest workstation we’ve ever encountered. It will come as no surprise to learn that the system arrived on a wooden pallet.

We wonder how Armari’s 96-core equivalent would compare.

While the Comino Grando’s multicore performance is remarkable, what truly sets it apart from others is that it can operate in near-silence. The sheer level of engineering that has gone into this system is extraordinary, with superb build quality and meticulous attention to detail.

Of course, this level of performance doesn’t come cheap, but it can be seen as a smart investment in sectors like aerospace and automotive, where even the smallest optimisations really count.

Surprisingly, the Comino Grando isn’t significantly more expensive than an air-cooled equivalent. For instance, on dell.co.uk, a Dell Precision 7875 with similar specs currently costs just £1,700 less. However, it maxes out at two GPUs and would almost certainly come second in highly multi-threaded workloads.


Workstations for arch viz

What’s the best GPU or CPU for arch viz? Greg Corke tests a variety of processors in six of the most popular tools – D5 Render, Twinmotion, Lumion, Chaos Enscape, Chaos V-Ray, and Chaos Corona

When it comes to arch viz, everyone dreams of a silky-smooth viewport and the ability to render final quality images and videos in seconds. However, such performance often comes with a hefty price tag. Many professionals are left wondering: is the added cost truly justified?

To help answer this question, we put some of the latest workstation hardware through its paces using a variety of popular arch viz tools. Before diving into the detailed benchmark results on the following pages, here are some key considerations to keep in mind.

GPU processing

Real-time viz software like Enscape, Lumion, D5 Render, and Twinmotion rely on the GPU to do the heavy lifting. These tools offer instant, high-quality visuals directly in the viewport, while also allowing top-tier images and videos to be rendered in mere seconds or minutes.

The latest releases support hardware ray tracing, a feature built into modern GPUs from Nvidia, AMD and Intel. While ray tracing demands significantly more computational power than traditional rasterisation, it delivers unparalleled realism in lighting and reflections.

GPU performance in these tools is typically evaluated in two ways: Frames Per Second (FPS) and render time. FPS measures viewport interactivity — higher numbers mean smoother navigation and a better user experience — while render time, expressed in seconds, determines how quickly final outputs are generated. Both metrics are crucial, and we’ve used them to benchmark various software in this article.

For your own projects, aim for a minimum of 24–30 FPS for a smooth and interactive viewport experience. Performance gains above this threshold tend to have diminishing returns, although we expect hardcore gamers might disagree. Display resolution is another critical factor. If your GPU struggles to maintain performance, reducing resolution from 4K to FHD can deliver a significant boost.
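The reason dropping from 4K to FHD helps so much is simple pixel arithmetic — shading and ray-tracing work scale roughly with the number of pixels per frame:

```python
# Pixel counts per frame at common display resolutions.
resolutions = {"FHD": (1920, 1080), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}
print(f"4K renders {pixels['4K'] / pixels['FHD']:.0f}x the pixels of FHD")
```

So a GPU limping along at 10 FPS at 4K can, all else being equal, approach the 24-30 FPS target simply by rendering at FHD.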

It’s worth noting that while some arch viz software supports multiple GPUs, this only affects render times rather than viewport performance. Tools like V-Ray, for instance, scale exceptionally well with multiple GPUs, but to take advantage you’ll need a workstation with adequate power and sufficient PCIe slots to accommodate the cards.

Nvidia DLSS - using AI to boost performance in real-time

Nvidia DLSS (Deep Learning Super Sampling) is a suite of AI-driven technologies designed to significantly enhance 3D performance (frame rates) in real-time visualisation tools.

Applications including Chaos Enscape, Chaos Vantage and D5 Render have integrated DLSS to deliver smoother experiences, and to make it possible to navigate larger scenes on the same GPU hardware.

DLSS comprises three distinct technologies, all powered by the Tensor Cores in Nvidia RTX GPUs:

Super Resolution: This boosts performance by using AI to render higher-resolution frames from lower-resolution inputs. For instance, it enables 4K-quality output while the GPU processes frames at FHD resolution, saving core GPU resources without compromising visual fidelity.

DLSS Ray Reconstruction: This enhances image quality by using AI to generate additional pixels for intensive ray-traced scenes.

Frame Generation: This increases performance by using AI to interpolate and generate extra frames. While DLSS 3.0 could generate one additional frame, DLSS 4.0, exclusive to Nvidia’s upcoming Blackwell-based GPUs, can generate up to three frames between traditionally rendered ones. When these three technologies work together, an astonishing 15 out of every 16 pixels can be AI-generated. DLSS 4.0 will soon be supported in D5 Render, promising transformative performance gains. Nvidia has demonstrated that it can elevate frame rates from 22 FPS (without DLSS 4.0) to an incredible 87 FPS.
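The ‘15 out of every 16 pixels’ claim follows directly from multiplying the two ratios — a sketch of the arithmetic, assuming Super Resolution renders at one quarter of the output pixels (as in FHD-to-4K upscaling):

```python
# Fraction of output pixels that are traditionally rendered when
# Frame Generation and Super Resolution are combined.
rendered_frames = 1 / 4   # Frame Generation: 1 rendered frame in every 4
rendered_pixels = 1 / 4   # Super Resolution: FHD input for 4K output
traditional = rendered_frames * rendered_pixels
print(f"AI-generated share: {1 - traditional:.4f}")  # 15/16 = 0.9375
```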

D5 Render

Chaos V-Ray 6

Chaos Corona is a CPU-only renderer designed for arch viz. It scales well with more CPU cores. But the 96-core Threadripper Pro 7995WX, despite having six times the cores of the 16-core AMD Ryzen 9 9950X and achieving an overclocked all-core frequency of 4.87 GHz, delivers only three times the performance.

Chaos V-Ray is a versatile photorealistic renderer, renowned for its realism. It includes both a CPU and GPU renderer. The CPU renderer supports the most features and can handle the largest datasets, as it relies on system memory. Performance scales efficiently with additional cores.

V-Ray GPU works with Nvidia GPUs. It is often faster than the CPU renderer, and can make very effective use of multiple GPUs, with performance scaling extremely well. However, the finite onboard memory can restrict the size of scenes. To address this, V-Ray GPU includes several memory-saving features, such as offloading textures to system memory. It also offers a hybrid mode where both the CPU and GPU work together, optimising performance across both processors.


GPU memory

The amount of memory a GPU has is often more critical than its processing power. In some software, running out of GPU memory can cause crashes or significantly slow down performance. This happens because the GPU is forced to borrow system memory from the workstation via the PCIe bus, which is much slower than accessing its onboard memory.

The impact of insufficient GPU memory depends on your workflow. For final renders, it might simply mean waiting longer for images or videos to finish processing. However, in a real-time viewport, running out of memory can make navigation nearly impossible. In extreme cases, we’ve seen frame rates plummet to 1-2 FPS, rendering the scene completely unworkable.

Fortunately, GPU memory and processing power usually scale together. Professional workstation GPUs, such as Nvidia RTX or AMD Radeon Pro, generally offer significantly more memory than their consumer-grade counterparts like Nvidia GeForce or AMD Radeon. This is especially noticeable at the lower end of the market. For example, the Nvidia RTX 2000 Ada, a 70W GPU, is equipped with 16 GB of onboard memory.

For real-time visualisation workflows, we recommend a minimum of 16 GB, though 12 GB can suffice for laptops. Anything less could require compromises, such as simplifying scenes and textures, reducing display resolution, or lowering the quality of exported renders.

CPU processing

CPU rendering was once the standard for most arch viz workflows, but today it often plays second fiddle to GPU rendering. That said, it remains critically important for certain software. Chaos Corona, a specialist tool for arch viz, relies entirely on the CPU for rendering. Meanwhile, Chaos V-Ray gives users the flexibility to choose between CPU and GPU. Some still favour the CPU renderer for its greater control and the ability to harness significantly more memory when paired with the right workstation hardware. For example, while the top-tier Nvidia RTX 6000 Ada Generation GPU comes with an impressive 48 GB of on-board memory, a Threadripper Pro workstation can support up to 1 TB or more of system memory.

CPU renderers scale exceptionally well with core count — the more cores your processor has, the faster your renders. However, as core counts increase, frequencies drop, so doubling the cores won’t necessarily cut render times in half. Take the 96-core Threadripper Pro 7995WX, for example. It’s a powerhouse that’s the ultimate dream for arch viz specialists. But does it justify its price tag — nearly 20 times that of the 16-core AMD Ryzen 9 9950X — for rendering performance that’s only 3 to 4 times faster? As arch viz becomes more prevalent across AEC firms, that’s a tough call for many.
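Put in raw numbers, the trade-off is stark. A quick sketch using the article's rough ratios (the prices here are illustrative assumptions, not quotes):

```python
# Rendering performance-per-pound, Threadripper Pro vs Ryzen.
# Prices are assumed for illustration only.
ryzen_price, ryzen_speed = 550.0, 1.0    # 16-core Ryzen 9 9950X baseline
tr_price, tr_speed = 11000.0, 3.5        # ~20x the price, "3 to 4x faster"
relative_value = (tr_speed / tr_price) / (ryzen_speed / ryzen_price)
print(f"Threadripper Pro delivers {relative_value:.2%} "
      "of the Ryzen's performance-per-pound")
```

On pure value-for-money the Ryzen wins by a wide margin; the Threadripper only makes sense when absolute render time is worth paying a premium for.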

Benchmark charts: Corona 10 benchmark scene; V-Ray 6.0 CPU render; V-Ray 6.0 GPU RTX render


D5 Render is a real-time arch viz tool, based on Unreal Engine. Its ray tracing technology is built on DXR, requiring a GPU with dedicated ray-tracing cores from Nvidia, Intel, or AMD. The software uses Nvidia DLSS, allowing Nvidia GPUs to boost real-time performance. Multiple GPUs are not supported. The benchmark uses 4 GB of GPU memory, so all GPUs are compared on raw performance alone. Real-time scores are capped at 60 FPS.

Enscape is a very popular tool for real-time arch viz. It supports hardware ray tracing, and also Nvidia DLSS, but not the latest version. For testing we used an older version of Enscape (3.3). This had some incompatibility issues with AMD GPUs, so we limited our testing to Nvidia. Enscape 4.2, the latest release, supports AMD. We focused on real-time performance, rather than time to render. The gap between the RTX 5000 Ada and RTX 6000 Ada was not that big. Our dataset uses 11 GB of GPU memory, which caused the software to crash when using the Nvidia RTX A1000 (8 GB).

Lumion is a real-time arch viz tool known for its exterior scenes in context with nature. The software will benefit from a GPU with hardware ray tracing, but those with older GPUs can still render with rasterisation. Our test scene uses 11 GB of GPU memory, which meant the 8 GB GPUs struggled. The Nvidia RTX A1000 slowed down, while the AMD Radeon Pro W7500 & W7600 caused crashes. The high-end AMD GPUs did OK against Nvidia, but slowed down in ray tracing.

…memory, massively slowing down the 8 GB GPUs. The 8 GB AMD cards caused the software to crash with the Path Tracer. The high-end AMD GPUs did OK against Nvidia but were well off the pace in path tracing.

Benchmark charts: Snowdon Tower Revit sample project; Enscape 3.3 School sample project; Lumion Pro 2024


GPUs for Stable Diffusion

Architects and designers are increasingly using text-to-image AI models like Stable Diffusion. Processing is often pushed to the cloud, but the GPU in your workstation may already be perfectly capable, writes Greg Corke

Stable Diffusion is a powerful text-to-image AI model that generates stunning photorealistic images based on textual descriptions. Its versatility, control and precision have made it a popular tool in industries such as architecture and product design.

One of its key benefits is its ability to enhance the conceptual design phase. Architects and product designers can quickly generate hundreds of images, allowing them to explore different design ideas and styles in a fraction of the time it would take to do manually.

Stable Diffusion relies on two main processes: inferencing and training. Most architects and designers will primarily engage with inferencing, the process of generating images from text prompts. This can be computationally demanding, requiring significant GPU power. Training is even more resource intensive. It involves creating a custom diffusion model, which can be tailored to match a specific architectural style, client preference, product type, or brand. Training is often handled by a single expert within a firm.

There are several architecture-specific tools built on top of Stable Diffusion or other AI models, which run in a browser or handle the computation in the cloud. Examples include AI Visualizer (for Archicad, SketchUp, and Vectorworks), Veras, LookX AI, and CrXaI AI Image Generator. While these tools simplify access to the technology, and there are many different ways to run vanilla Stable Diffusion in the cloud, many architects still prefer to keep things local.

Running Stable Diffusion on a workstation offers more options for customisation, guarantees control over sensitive IP, and can turn out cheaper in the long run. Furthermore, if your team already uses real-time viz software, the chances are they already have a GPU powerful enough to handle Stable Diffusion’s computational demands.

While computational power is essential for Stable Diffusion, GPU memory plays an equally important role. Memory usage in Stable Diffusion is impacted by several factors, including:

• Resolution: higher res images (e.g. 1,024 x 1,024 pixels) demand more memory compared to lower res (e.g. 512 x 512).

• Batch size: Generating more images in parallel can decrease time per image, but uses more memory.

• Version: Newer versions of Stable Diffusion (e.g. SDXL) use more memory.

• Control: Using tools to enhance the model’s functionality, such as LoRAs for fine tuning or ControlNet for additional inputs, can add to the memory footprint.
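As a rough illustration of how those factors stack up, the toy heuristic below estimates VRAM use. The constants are our own rough fit to the Procyon benchmark figures quoted later in this article (4.6 GB for SD 1.5, 9.8 GB for SDXL), not values taken from the Stable Diffusion codebase:

```python
def sd_vram_estimate_gb(width=512, height=512, batch_size=1, sdxl=False,
                        controlnet=False, num_loras=0):
    """Illustrative VRAM heuristic for Stable Diffusion inferencing.
    Model weights dominate; activations grow with resolution and batch size.
    All constants are assumptions fitted to the figures quoted in this article."""
    base = 8.7 if sdxl else 3.5                           # fp16 weights + runtime
    megapixels = (width * height) / (1024 * 1024)
    activations = 1.1 * batch_size * megapixels           # per-image working set
    extras = (2.5 if controlnet else 0.0) + 0.2 * num_loras
    return base + activations + extras

# The two configurations used by the UL Procyon benchmark:
print(round(sd_vram_estimate_gb(512, 512, batch_size=4), 1))   # SD 1.5, batch of 4
print(round(sd_vram_estimate_gb(1024, 1024, sdxl=True), 1))    # SDXL, batch of 1
```

The point of the sketch is the shape, not the exact numbers: doubling resolution quadruples the activation term, and ControlNet or LoRAs push a borderline card over the edge.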

For inferencing to be most efficient, the entire model must fit into GPU memory. When GPU memory becomes full, operations may still run, but at significantly reduced speeds as the GPU must then borrow from the workstation’s system memory, over the PCIe bus.
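The scale of that slowdown can be sketched as a time-weighted average of the two memory paths. The bandwidth figures below are illustrative assumptions (roughly GDDR6 on a mid-range card versus a PCIe 4.0 x16 link), not measurements:

```python
def effective_bandwidth_gbps(model_gb, vram_gb, vram_bw=448.0, pcie_bw=25.0):
    """Rough effective memory bandwidth when a model partially spills to system
    RAM over PCIe. A time-weighted (harmonic-style) mean of the two paths,
    using illustrative bandwidth figures."""
    spill = max(0.0, model_gb - vram_gb)      # portion that lives in system RAM
    resident = model_gb - spill               # portion that fits in VRAM
    time = resident / vram_bw + spill / pcie_bw
    return model_gb / time

fits = effective_bandwidth_gbps(9.8, 16)      # SDXL fits on a 16 GB card
spills = effective_bandwidth_gbps(9.8, 8)     # ~1.8 GB spills on an 8 GB card
print(round(fits), round(spills), round(fits / spills, 1))
```

Even in this optimistic model, spilling under 2 GB cuts effective bandwidth by roughly 4x; in practice, transfer latency and scheduling overheads make the penalty far worse.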

This is where professional GPUs can benefit some workflows, as they typically have more memory than consumer GPUs. For instance, the Nvidia RTX A4000 professional GPU is roughly the equivalent of the Nvidia GeForce RTX 3070, but it comes with 16 GB of GPU memory compared to 8 GB on the RTX 3070.

Inferencing performance

To evaluate GPU performance for Stable Diffusion inferencing, we used the UL Procyon AI Image Generation Benchmark. The benchmark supports multiple inference engines, including Intel OpenVino, Nvidia TensorRT, and ONNX runtime with DirectML. For this article, we focused on Nvidia professional GPUs and the Nvidia TensorRT engine. This benchmark includes two tests utilising different versions of the Stable Diffusion model — Stable Diffusion 1.5, which generates images at 512 x 512 resolution and Stable Diffusion XL (SDXL), which generates images at 1,024 x 1,024. The SD 1.5 test uses 4.6 GB of GPU memory, while the SDXL test uses 9.8 GB. In both tests, the UL Procyon benchmark generates a set of 16 images, divided into batches. SD 1.5 uses a batch size of 4, while SDXL uses a batch size of 1. A higher benchmark score indicates better GPU performance. To provide more insight into real-world performance, the benchmark also reports the average image generation speed, measured in seconds per image. All results can be seen in the charts below.

Key takeaways

It’s no surprise that performance goes up as you move up the range of GPUs, although there are diminishing returns at the higher-end. In the SD 1.5 test, even the RTX A1000 delivers an image every 11.7 secs, which some will find acceptable.

The RTX 4000 Ada Generation GPU looks to be a solid choice for Stable Diffusion, especially as it comes with 20 GB of GPU memory. The Nvidia RTX 6000 Ada Generation (48 GB) is around 2.3 times faster, but considering it costs almost six times more (£6,300 vs £1,066) it will be hard to justify on those performance metrics alone.

Stable Diffusion architectural images courtesy of James Gray. Image above and right generated with ModelMakerXL, a custom trained LoRA by Ismail Seleit. Recently, Gray has been exploring Flux, a next-generation image and video generator. He recommends a 24 GB GPU. Follow Gray @ www.linkedin.com/in/james-gray-bim

The real benefits of the higher end cards are most likely to be found in workflows where you can exploit the extra memory. This includes handling larger batch sizes, running more complex models, and, of course, speeding up training.

Procyon AI Image Generation Benchmark results

Perhaps the most revealing test result comes from SDXL, as it shows what can happen when you run out of GPU memory. The RTX A1000 still delivers results, but its performance slows drastically. Although it’s just 2 GB short of the 10 GB needed for the test, it takes a staggering 13 minutes to generate a single image — 70 times slower than the RTX 6000 Ada.
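Working backwards from those two figures gives the per-image time they imply for the RTX 6000 Ada:

```python
a1000_secs_per_image = 13 * 60                   # ~13 minutes per SDXL image on the RTX A1000
rtx_6000_ada_secs = a1000_secs_per_image / 70    # "70 times slower" implies ~11 s per image
print(round(rtx_6000_ada_secs, 1))               # → 11.1
```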

Of course, AI image generation technology is moving at an incredible pace. Tools including Flux, Runway and Sora can even be used to generate video, which demands even more from the GPU. When considering what GPU to buy now, it’s essential to plan for the future.


Z by HP Boost: GPUs on demand

With HP’s new solution, workstation GPUs become shareable across the network, helping firms get the most out of their IT resources for AI training and inferencing, writes Greg Corke

Boosting your workstation’s performance by tapping into shared resources is nothing new.

Distributed rendering, through applications like V-Ray and KeyShot, allows users to harness idle networked computers for faster processing.

Z by HP Boost is a new take on this idea, with a specific focus on AI. The technology is primarily designed to deliver GPU power to those who need it, on-demand, by giving remote access to idle GPUs on the network. In short, it can turn a standard PC or laptop into a powerful GPU-accelerated workstation, extending the reach of AI to a much wider audience and dramatically reducing processing time.

HP is primarily pitching Z by HP Boost at data scientists and AI developers for training or fine-tuning large language models (LLMs). However, Z by HP Boost is also well suited to inferencing, the application of the trained model to generate new results.

“We want companies, like architects, to both create their AI, fine tune their models, create custom models — those are big projects — but also create with AI, with the diffusion programs,” says Jim Nottingham, SVP & division president personal systems advanced compute and solutions, HP.

AI image generation

Z by HP Boost can be used for many different AI workflows. It currently supports PyTorch and TensorFlow, two of the most widely used open-source deep learning frameworks.

In AEC and product development, one of the most interesting use cases is Stable Diffusion, an AI image generator that can be used for early-stage design ideation. The AI model can be used to rapidly generate images – photorealistic or stylised – from a simple prompt. It can also serve as a shortcut for traditional rendering, generating visuals based on an existing composition, such as a sketch or a screen grab of a CAD or BIM model.

To get the most out of Stable Diffusion, design and architecture firms often fine-tune or create custom models tailored to specific styles. Training models is highly computationally demanding and is typically handled by a specialist within the firm. This person may already have access to a powerful workstation, equipped with multiple high-end GPUs. However, if that’s not the case, or they need more GPU power to accelerate a process that can take days, Z by HP Boost could be used to do the heavy lifting.

Inferencing in Stable Diffusion, where a pre-trained AI model is used to generate new images, is applicable to a much wider audience. While less computationally demanding than training, inferencing still needs serious GPU power, especially in terms of GPU memory, which often goes beyond what’s available in the GPUs typically used for CAD and BIM modelling in tools like Solidworks and Autodesk Revit. Z by HP Boost makes it easier for more users to tap into this power without needing to equip everyone with a supercharged workstation.

Having access to GPUs on-demand is particularly valuable, given that Stable Diffusion is used mainly during the early design phases, meaning high-powered GPUs might be massively underutilised for most of the year.

Even if a local entry-level GPU does work with Stable Diffusion, generating an image can take several minutes (as demonstrated on page WS30). But with a high-end GPU like the Nvidia RTX 6000 Ada Generation this can be done in seconds. During the early design phase — especially when collaborating with clients and project teams — this speed advantage can be hugely beneficial, allowing for rapid iteration.

How Z by HP Boost works

Firms can designate any number of GPUs on their network to be shared. This could be four high-performance Nvidia RTX 6000 Ada Generation or Nvidia A800 GPUs in a dedicated high-end workstation like the HP Z8 Fury G5, or a single Nvidia RTX 2000 Ada Generation GPU in a compact system like the HP Z2 Mini G9. The only requirement is that the GPUs are housed in an HP Z Workstation.

Firms may choose to set aside one or more dedicated GPU workstations as a shared resource. Alternatively, to make the most of the sometimes vast numbers of GPUs scattered throughout an organisation, they can add GPUs from the workstations of end users. Those GPUs don’t have to be completely idle; they can also be shared when the owner is only doing light tasks. As Nvidia GPUs and drivers are good at multitasking, it’s feasible, in theory, to model in CAD or BIM while someone else sets the same GPU to work in Stable Diffusion.

The Z by HP Boost software is installed on both the client and host machines. There are no restrictions on the client device — the PC or laptop just needs to run either Windows or Linux.

It’s very easy to configure a GPU for sharing. On the host device, simply select a GPU and assign it to the appropriate pool. Once that’s done, anyone with the necessary permissions has access. All they must do is choose the GPU from a list and select the application they want to run.

Once they’ve grabbed a GPU, it’s essentially theirs until they release it. However, the owner of the host machine always retains the right to reclaim the GPU if they want.

To ensure resources are used efficiently, GPUs are automatically returned to the pool after a period of inactivity. The default timeout is four hours, but this can be changed. A warning will appear on the client device before the GPU is reallocated.
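The checkout, release and idle-timeout behaviour described above can be modelled as a simple pool. This is a toy sketch for illustration only; the class and method names are our own, not HP's actual software interface:

```python
import time

class GpuPool:
    """Toy model of a shared GPU pool with checkout, release and idle timeout.
    Purely illustrative, not the real Z by HP Boost API."""
    def __init__(self, gpus, timeout_secs=4 * 3600):    # default timeout: four hours
        self.timeout = timeout_secs
        self.free = set(gpus)
        self.leases = {}                                # gpu -> (user, last_active)

    def checkout(self, user):
        self.reclaim_idle()
        if not self.free:
            return None                                 # no GPU available right now
        gpu = self.free.pop()
        self.leases[gpu] = (user, time.monotonic())
        return gpu

    def touch(self, gpu):
        user, _ = self.leases[gpu]
        self.leases[gpu] = (user, time.monotonic())     # activity resets the timer

    def release(self, gpu):
        self.leases.pop(gpu, None)
        self.free.add(gpu)

    def reclaim_idle(self, now=None):
        now = time.monotonic() if now is None else now
        for gpu, (user, last) in list(self.leases.items()):
            if now - last > self.timeout:               # warn the client, then reclaim
                self.release(gpu)

pool = GpuPool(["RTX 6000 Ada #0", "RTX 6000 Ada #1"])
gpu = pool.checkout("alice")
print(gpu is not None, len(pool.free))  # True 1
```

Once a user has grabbed a GPU it stays theirs until released, touched leases never expire, and an idle lease is silently returned to the pool, mirroring the behaviour described in the article.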

If the host workstation has multiple GPUs inside, each can be assigned to a different user. Currently, it’s one remote user per GPU, but there are plans for GPU slicing, which will enable multiple users to share the power of a single GPU simultaneously.

IT managers can configure the sharing however they want and, as Nottingham explains, this process can be aided by monitoring how resources are used. “We would like to work with customers to profile what’s their typical usage and design their sharing pool based on that usage.

“And maybe they can change it over time – they set up this one for night-time, they set up this one for daytime, or this one for Wednesdays – there’s going to be a lot of flexibility that we deliver.”

Nottingham believes Z by HP Boost is most interesting when multiple workstations are connected – many to many. “You just create a fabric, so you have more [GPUs] available, all the time.” This, he says, gives you a big performance boost without having to double your fleet.

Z by HP Boost doesn’t have to be used locally. As many of the AI workflows are not sensitive to latency, it also works well remotely. However, the ideal solution for remote working, as Nottingham explains, is with remote graphics software HP Anyware. In theory, one could have an architect or engineer remoting into an HP Z2 Mini in the office for bread-and-butter CAD or BIM work, who could then use Z by HP Boost to access an idle GPU on the same network to run Stable Diffusion.

Our thoughts

Z by HP Boost offers an interesting proposition for design and engineering firms looking to roll out AI tools like Stable Diffusion to a broader audience.

By providing on-demand access to high-performance workstation GPUs, it allows firms to efficiently maximise their resources, utilising hardware that might otherwise sit idle under a desk, especially at night.

The alternative is equipping everyone with high-end GPUs or running everything in the cloud. Both options are expensive and cloud can also bring unpredictable costs.

Keeping things local also helps firms protect intellectual property, keeping proprietary designs and the models that are trained on their proprietary designs behind the firewall.

Additionally, Z by HP Boost enables teams to pool resources for AI development, offering a flexible solution for demanding projects.

Although Z by HP Boost is currently focused on AI, we see no reason why it couldn’t be used for other GPU-intensive tasks, such as reality modelling, simulation, or rendering. The absence of ‘AI’ in the product’s name may even suggest that this broader use is on the roadmap.

However, this would require buy-in from each software developer and could become complicated for workflows typically handled by dedicated clusters with fast interconnects.

It will be very interesting to see how this technology develops.

HP presenting Z by HP Boost at the HP Imagine event last year, showing a remote Nvidia RTX 6000 Ada Generation GPU accelerating Stable Diffusion

Workstations for reality modelling

What’s the best CPU, memory and GPU to process complex reality modelling data? Greg Corke tests some of the latest workstation technology in Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture from Epic Games

Reality modelling is one of the most computationally demanding workflows in Architecture, Engineering and Construction (AEC). It involves the creation of digital models of physical assets by processing vast quantities of captured real-world data using technologies including laser scanning, photogrammetry and simultaneous localisation and mapping (SLAM).

Reality modelling has numerous applications, including providing context for new buildings or infrastructure, forming the basis for retrofit projects, or comparing “as-built” with “as-designed” for construction verification.

While there’s a growing trend to process captured data in the cloud, desktop processing remains the preferred method. Cloud can be costly, and uploading vast amounts of data — sometimes terabytes — is a significant challenge, especially when working from remote construction sites with poor connectivity.

Processing reality capture data can take hours, making it essential to select the right workstation hardware. In this article, we explore the best processor, memory and GPU options for reality modelling, testing a variety of workflows in three of the most popular tools — Leica Cyclone 3DR, Leica Cyclone Register 360, and RealityCapture by Capturing Reality, a subsidiary of Epic Games.

Most AEC firms have tight hardware budgets and it’s easy to spend money in the wrong places, sometimes for very little gain. In some cases, investing in more expensive equipment can even slow you down!

Leica Cyclone 3DR

Leica Cyclone 3DR is a multi-purpose reality modelling tool, used for inspection, modelling and meshing. Processing is done predominantly on the CPU and several tasks can take advantage of multiple CPU cores. Some tasks, including the use of machine learning for point cloud classification, are also optimised for GPU.

Workstation technology on test

Below is a list of the kit we used for testing. All machines were running Windows 11 Pro 26100.

• Armari Magnetar workstation with AMD Ryzen 7 9800X3D CPU (8 cores), 96 GB DDR5 5,600 MT/s memory and AMD Radeon Pro W7500 GPU (see page WS20).

• Scan 3XS workstation with AMD Ryzen 9 9950X CPU (16 cores), 64 GB DDR5 5,600 MT/s memory or 128 GB DDR5 3,600 MT/s memory and Nvidia RTX 4500 Ada Generation GPU (see page WS16).

• Scan 3XS workstation with Intel Core Ultra 9 285K CPU (8 P-cores and 16 E-cores), 64 GB DDR5 5,600 MT/s memory and Nvidia RTX 2000 Ada Generation GPU (see page WS17).

• HP Z6 G5A workstation with AMD Threadripper Pro 7975WX CPU (32 cores), 128 GB DDR5 5,200 MT/s memory and Nvidia RTX A6000 GPU (see www.aecmag.com/workstations/review-hp-z6-g5-a).

• Comino Grando workstation with overclocked AMD Threadripper Pro 7995WX CPU (96 cores), 256 GB DDR5 4,800 MT/s memory and Nvidia RTX 6000 Ada Generation GPU (see page WS22).

We also tested a range of GPUs, including the Nvidia RTX A1000 (8 GB), RTX A4000 (16 GB), RTX 2000 Ada (16 GB), RTX 4000 Ada (20 GB), RTX 4500 Ada (24 GB) and RTX 6000 Ada (48 GB).

For testing we focused on four workflows: scan-to-mesh, analysis, AI classification and conversion.

Scan-to-mesh: Compared to point clouds, textured mesh models are much easier to understand and easier to share, not least because the files are much smaller.

In our ‘scan-to-mesh’ test, we record the time it takes to convert a dataset of a building — captured with a Leica BLK 360 scanner — into a photorealistic mesh model. The dataset comprises a point cloud with 129 million points and accompanying images.

The process is multi-threaded but, as with many reality capture workflows, more CPU cores does not necessarily mean faster results. Other critical factors that affect processing time include the amount of CPU cache (a high-speed on-chip memory for frequently accessed data), memory speed, and AMD Simultaneous Multithreading (SMT), a technology similar to Intel Hyper-Threading that enables a single physical core to execute multiple threads simultaneously. During testing, system memory usage peaked at 25 GB, which meant all test machines had plenty of capacity.

The most unexpected outcome was the 8-core AMD Ryzen 7 9800X3D outperforming all its competitors. It not only beat the 16-core AMD Ryzen 9 9950X and Intel Core Ultra 9 285K (8 performance cores and 16 efficient cores), but the multicore behemoths as well. With the 96-core AMD Threadripper Pro 7995WX it appears to be a classic case of “too many cooks [cores] spoil the broth”!

The AMD Ryzen 7 9800X3D is a specialised consumer CPU, widely considered to be the fastest processor for 3D gaming thanks to its advanced 3D V-Cache technology. It boasts 96 MB of L3 cache, significantly more than comparative processors. This allows the CPU to access frequently-used data quicker, rather than having to pull it from slower system memory (RAM).

But we expect that having lots of fast cache is not the only reason why the AMD Ryzen 7 9800X3D comes out top in our scan-to-mesh test – after all, Threadripper Pro is also well loaded, with the top-end 7995WX having 384 MB of L3 cache which is spread across its 96 cores. To achieve a high number of cores, modern processors are made up of multiple chiplets or CCDs. In the world of AMD, each CCD typically has 8 cores, so a 16-core processor has two CCDs, a 32-core processor has four CCDs, and so on.

Communication between cores in different CCDs is inherently slower than cores within the same CCD, and since the AMD Ryzen 7 9800X3D is made up of a single CCD that has access to all that L3 cache, we expect this gives it an additional advantage. It will be interesting to see how the recently announced 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D compare. Both processors feature 128 MB of L3 cache and comprise two CCDs.

Simultaneous Multithreading (SMT) also has an impact on performance. With the AMD Ryzen 9 9950X, for example, disabling SMT in the BIOS cut processing time by as much as 15%. However, it had the opposite effect with the AMD Ryzen 7 9800X3D, increasing processing time by 32%.

Memory speed also has an impact on performance. The AMD Ryzen 9 9950X processor was around 7% slower when configured with 128 GB RAM running at 3,400 MT/sec than it was with 64 GB RAM running at the significantly faster 5,600 MT/sec.

Analysis: In our analysis test we compare a point cloud to a BIM model, recording the time it takes to calculate a colour map that shows the deviations between the two datasets. During testing, system memory usage peaked at 19 GB.

The process is multi-threaded, but certain stages only use a few cores. As with scan-to-mesh, more CPU cores does not necessarily mean faster results, and CPU cache, SMT and memory speed also play an important role. Again, the AMD Ryzen 7 9800X3D bagged first spot, completing the test 16% faster than its closest rival, the Intel Core Ultra 9 285K.

The big shock came from the 16-core AMD Ryzen 9 9950X, which took more than twice as long as the 8-core AMD Ryzen 7 9800X3D to complete the test. The bottleneck here is SMT, as disabling it in the BIOS, so each of the 16 cores only performs one task at a time, slashed the test time from 91 secs to 56 secs.

Getting good performance out of the Threadripper Pro processors required even more tuning. Disabling SMT on its own had a minimal impact, and it was only when the Cyclone 3DR executable was pinned to a single CCD (8 cores, 16 threads) that times came down. But this level of optimisation is probably not practical, not least because all workflows and datasets are different.
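For those who do want to experiment, pinning can be done on Windows with the built-in `start /affinity` command, which takes a hex bitmask of logical processors. Below is a small helper to build the mask. It assumes Windows enumerates both SMT threads of each core contiguously, which should be verified on your own system:

```python
def ccd_affinity_mask(ccd_index, cores_per_ccd=8, threads_per_core=2):
    """Bitmask of logical processors covering one chiplet (CCD), assuming the
    OS enumerates both SMT threads of each physical core contiguously."""
    width = cores_per_ccd * threads_per_core        # logical processors per CCD
    return ((1 << width) - 1) << (ccd_index * width)

# Pin an executable to CCD 0 (8 cores, 16 threads) from a Windows shell:
#   start /affinity FFFF Cyclone3DR.exe
print(hex(ccd_affinity_mask(0)), hex(ccd_affinity_mask(1)))
```

The same masks can be applied to a running process via Task Manager's "Set affinity" dialog, which is often the easier route for a one-off test.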

AI classification: Leica Cyclone 3DR features an AI-based auto-classification algorithm designed to ‘intelligently classify’ point cloud data. The machine learning model has been trained on large amounts of terrestrial scan data and comes with several predefined models for classification.

Reality modelling data comes from multiple sources: the Leica BLK ARC autonomous laser scanning module riding steady on the Boston Dynamics Spot robot

The process is built around Nvidia CUDA and therefore requires an Nvidia GPU. However, the CPU is still used heavily throughout the process. We tested a variety of Nvidia RTX professional GPUs using an AMD Ryzen 9 9950X-based workstation with 64 GB of DDR5 memory.

The test records the time it takes to classify a point cloud of a building with 129 million points using the Indoor Construction Site 1.3 machine learning model. During testing, system memory usage peaked at 37 GB and GPU memory usage at a moderate 3 GB.

The big takeaway from our tests is that the CPU does the lion’s share of the processing. The Nvidia RTX GPU is essential, but only contributes modestly to the overall time. Indeed, there was very little difference between most of the Nvidia RTX GPUs and even the entry-level Nvidia RTX A1000 was only 22% slower than the significantly more powerful Nvidia RTX 4500 Ada.

Conversion: This simple test converts a Leica LGSx file into native Cyclone 3DR. The dataset comprises a point cloud of a highway alignment with 594 million points. During testing, system memory usage peaked at 11 GB.

As this process is largely single threaded it’s all about single core CPU performance. Here, the Intel Core Ultra 9 285K takes first place, closely followed by the AMD Ryzen 9 9950X in second. With a slightly slower peak frequency the AMD Ryzen 7 9800X3D comes in third. In this case, the larger L3 cache appears to offer no benefit.

The Threadripper Pro 7975WX and Threadripper Pro 7995WX lag behind — not only because they have a lower frequency, but also because they are based on AMD’s older ‘Zen 4’ architecture, which has a lower Instructions Per Clock (IPC).

Leica Cyclone Register 360

Leica Cyclone Register 360 is specifically designed for point cloud registration, the process of aligning and merging multiple point clouds into a single, unified coordinate system.

For testing, we used a 99 GB dataset of the Italian Renaissance-style ‘Breakers’ mansion in Newport, Rhode Island. It includes a total of 39 setups from a Leica RTC360 scanner, around 500 million points and 5K panos. We recorded the time it takes to import and register the data.

The process is multi-threaded, but to ensure stability the software allocates a specific number of threads depending on how much system memory is available. In 64 GB systems, the software allocates five threads while for 96 GB+ systems it’s six.
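That allocation rule amounts to something like the following. This is our reading of the observed behaviour rather than Leica's documented algorithm, and the sub-64 GB case is an assumption:

```python
def register360_threads(system_memory_gb):
    """Threads Cyclone Register 360 appears to allocate, based on our testing:
    five threads on 64 GB systems, six on 96 GB or more."""
    if system_memory_gb >= 96:
        return 6
    if system_memory_gb >= 64:
        return 5
    return 4  # assumption: lower-memory systems would get fewer threads still

print(register360_threads(64), register360_threads(128))
```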

The Intel Core Ultra 9 285K processor led by some margin, followed by the 16-core AMD Ryzen 9 9950X and 96-core Threadripper Pro 7995WX. Interestingly, this was the one test where the 8-core AMD Ryzen 7 9800X3D was not one of the best performers. However, as the GPU does a small amount of processing, and Leica Cyclone Register 360 has a preference for Nvidia GPUs, this could be attributed to the workstation having the entry-level AMD Radeon Pro W7500 GPU.

Notably, memory speed appears to play a crucial role in performance. The AMD Ryzen 9 9950X, configured with 128 GB of 3,400 MT/sec memory, was able to utilise six threads for the process, but was 20% slower than when configured with 64 GB of faster 5,600 MT/sec memory, which only allocated five threads.

RealityCapture from Epic Games

RealityCapture, developed by Capturing Reality — a subsidiary of Epic Games — is an advanced photogrammetry software designed to create 3D models from photographs and laser scans. Most tasks are accelerated by the CPU, but there are certain workflows that also rely on GPU computation.

Image alignment in RealityCapture refers to the process of analysing and arranging a set of photographs or scans in a 3D space, based on their spatial relationships. This step is foundational in photogrammetry workflows, as it determines the relative positions and orientations of the cameras or devices that captured the input data.

We tested with two datasets, scanned by R-E-A-L.iT, Leo Films and Drone Services Canada Inc, both available from the RealityCapture website.

The Habitat 67 Hillside Unreal Engine sample project features 3,199 images totalling 40 GB, 1,242 terrestrial laser scans totalling 90 GB, and uses up 60 GB of system memory during testing.

The Habitat 67 Sample, a subset of the larger dataset, features 458 images totalling 3.5 GB, 72 terrestrial laser scans totalling 3.35 GB, and uses up 13 GB of system memory.

The 32-core Threadripper Pro 7975WX took top spot in the large dataset test, with the AMD Ryzen 9 9950X, AMD Ryzen 7 9800X3D and 96-core AMD Threadripper Pro 7995WX not that far behind. Again, SMT needed to be disabled in the higher core count CPUs to get the best results.

The Habitat 67 Hillside Unreal Engine sample project in RealityCapture from Epic Games


Memory speed appears to have a huge impact on performance. The AMD Ryzen 9 9950X processor was around 40% slower when configured with 128 GB of RAM running at 3,400 MT/sec than it was with 64 GB running at the significantly faster 5,600 MT/sec.

Import laser scan: This process imports a collection of E57 format laser scan data and converts it into a RealityCapture point cloud with the .lsp file extension. Our test used up 13 GB of system memory.

Since this process relies heavily on single-threaded performance, single-core speed is what matters most. The Intel Core Ultra 9 285K comes out on top, followed closely by the AMD Ryzen 9 9950X. With a slightly lower peak frequency, the AMD Ryzen 7 9800X3D takes third place. The Threadripper Pro 7975WX and 7995WX fall behind, not just due to lower clock speeds but also because they’re built on AMD’s older Zen 4 architecture, which has a lower Instructions Per Clock (IPC).

Reconstruction is a very compute intensive process that involves the creation of a watertight mesh. It uses a combination of CPU and Nvidia GPU, although there’s also a ‘preview mode’ which is CPU only.

For our testing, we used the Habitat 67 Sample dataset at ‘Normal’ level of detail. It used 46 GB of system memory and 2 GB of GPU memory.

With workstations featuring a variety of processors and GPUs, it’s hard to pin down exactly which processor is best for this workflow — although the 96-core Threadripper Pro 7995WX workstation with Nvidia RTX 6000 Ada GPU came out top. To provide more clarity on GPUs, we tested a variety of add-in boards in the same AMD Ryzen 9 9950X workstation. There was relatively good performance scaling across the mainstream Nvidia RTX range.

Thoughts on processors / memory

The combination of AMD’s ‘Zen 5’ architecture, fast DDR5 memory, a single chiplet design, and lots of 3D V-Cache looks to make the AMD Ryzen 7 9800X3D processor a very interesting option for a range of reality modelling workflows — especially for those on a budget. The chip becomes even more interesting when you consider that it’s widely regarded as a processor for gamers. It is not offered by any of the major workstation OEMs — only specialist system builders like Armari.

Capturing Reality 1.5

However, before you rush out and part with your hard-earned cash, it is important to understand a few things.

1) The AMD Ryzen 7 9800X3D processor currently has a practical maximum memory capacity of 96 GB if you want fast 5,600 MT/sec memory. This is an important consideration if you work with large datasets. If you run out of memory, the processor will have to swap data out to the SSD, which will likely slow things down considerably.

The AMD Ryzen 7 9800X3D can support up to 192 GB of system memory, but it will need to run at a significantly slower speed (3,600 MT/sec). And as our tests have shown, slower memory can have a big impact on performance.

2) AMD recently announced two additional ‘Zen 5’ 3D V-Cache processors. It will be interesting to see how they compare. The 12-core Ryzen 9 9900X3D and 16-core Ryzen 9 9950X3D both have slightly more L3 cache (128 MB) than the 8-core Ryzen 7 9800X3D (96 MB). However, they are made up of two separate chiplets (CCDs), so communication between the cores in different CCDs could slow things down.

3) Most of the reality models we used for testing are not that big, with the exception of the Habitat 67 dataset, which we used to test certain aspects of RealityCapture. Larger datasets require more memory. For example, reconstructing the full Habitat 67 RealityCapture dataset on the 96-core Threadripper Pro 7995WX workstation used 228 GB of system memory at peak, out of the 256 GB in the machine, and took more than half a day to process. Workstations with less system memory will likely have to push some of the data into temporary swap space on the SSD. Admittedly, as modern PCIe NVMe SSDs offer very fast read-write performance, this is not necessarily the colossal bottleneck it used to be when data had to be swapped out to mechanical Hard Disk Drives (HDDs).
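The point above can be turned into a rough pre-flight check: compare the memory a job is expected to need against what the machine currently has free, to see whether the OS is likely to spill into swap on the SSD. The sketch below is a minimal, Linux-only illustration that reads `/proc/meminfo`; the 228 GB peak figure comes from our Habitat 67 testing, while the function names are our own and not part of any reality modelling tool.

```python
# Rough pre-flight check before launching a large reconstruction job.
# Linux-only sketch: reads MemAvailable from /proc/meminfo.

def free_memory_gb(meminfo_path="/proc/meminfo"):
    """Return the MemAvailable figure from /proc/meminfo, in GB."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kb = int(line.split()[1])  # value is reported in kB
                return kb / (1024 ** 2)
    raise RuntimeError("MemAvailable not found in " + meminfo_path)

def will_swap(required_gb, available_gb):
    """True if the job is expected to exceed available RAM."""
    return required_gb > available_gb

# The full Habitat 67 reconstruction peaked at around 228 GB in our
# testing; on most desktop configurations this check would warn that
# swapping to the SSD is likely.
print(will_swap(228, free_memory_gb()))
```

A check like this is no substitute for simply fitting more RAM, but it can flag in advance when a big job is likely to lean on the SSD.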

4) Multi-tasking is often important for reality modelling, as the processing of data often involves several different stages from several different sources. At any given point you may need to perform multiple operations at the same time, which can put a massive strain on the workstation. As the AMD Ryzen 7 9800X3D processor has only 8 cores and is effectively limited to 96 GB of fast system memory, if you throw more than one task at the machine at a time, things will likely slow down considerably. Meanwhile, Threadripper Pro is much more scalable, as there are processors with 12 to 96 cores, and the platform supports up to 2 TB of DDR5-5200 ECC memory.

For a crude multi-tasking test, we performed two operations in parallel — alignment in RealityCapture and meshing in Leica Cyclone 3DR. The Threadripper Pro 7995WX workstation completed both tests in 200 secs, while the AMD Ryzen 7 9800X3D came in second in 238 secs. We expect this lead would grow with larger datasets or more concurrent processing tasks.

In summary, your choice of processor will depend greatly on the size of datasets you work with, and the complexity of your workflows. For lighter tasks, the AMD Ryzen 7 9800X3D looks to be an excellent budget choice, but for more complex projects, especially those that require multi-tasking, Threadripper Pro should deliver a much more flexible and performant platform. Of course, you still need to choose between the different models, which vary in price considerably and, as we have found in some of our tests, fewer cores is sometimes better.

Thoughts on GPUs

‘‘ Two of our test workflows rely on Nvidia GPUs, but because they share some of the workload with the CPU, the performance gains from more powerful GPUs are less pronounced compared to entirely GPU-driven tasks like ray trace rendering ’’

Two of our tests — Reconstruction in RealityCapture and AI classification in Leica Cyclone 3DR — rely on Nvidia GPUs. However, because these processes share some of the workload with the CPU, the performance gains from more powerful GPUs are less pronounced compared to entirely GPU-driven tasks like ray trace rendering. There’s a significant price gap between the Nvidia RTX A1000 (£320) and the Nvidia RTX 6000 Ada Generation (£6,200). For reconstruction in RealityCapture, investing in the higher-end model is probably easier to justify, as our tests showed computation times could be cut in half. However, for AI classification in Leica Cyclone 3DR, the performance gains are much smaller, and there seem to be diminishing returns beyond the Nvidia RTX 2000 Ada Generation. While larger datasets may deliver more substantial benefits, GPU memory — a key advantage of the higher-end cards — appears to be less crucial.

Point cloud in Leica Cyclone 3DR


Workstation news

Reshuffle spells end for Dell Precision workstation brand

Dell has simplified its product portfolio, with the introduction of three new PC categories – Dell for ‘play, school and work’, Dell Pro for ‘professional-grade productivity’ and Dell Pro Max ‘for maximum performance’.

The rebranding spells the end of the company’s long-standing Precision workstation brand, which will be replaced by Dell Pro Max. It also signals a move away from the term “workstation”. On Dell’s website, “workstation” appears only in fine print, as the company now favours “high-performance, professional-grade PC” when describing Dell Pro Max.

To those outside of Dell, however, Dell Pro Max PCs are unmistakably workstations, with ISV certification and traditional workstation-class components, including AMD Threadripper Pro processors, Nvidia RTX graphics, high-speed storage, and advanced memory.

Dell has also simplified the product tiers within each of the new PC categories. Starting with the Base level, users can upgrade to the Plus tier for more scalable performance or the Premium tier, which Dell describes as delivering the ultimate in mobility and design.

“We want customers to spend their valuable time thinking about workloads they want to run on a PC, the use cases they’re trying to solve a problem for, not what sub brand, not understanding and figuring out our nomenclature, which at times, has been a bit confusing,” said Jeff Clarke, vice chairman and COO, Dell.

To coincide with the rebrand, Dell has introduced two new base level mobile workstations – the Dell Pro Max 14 and 16 – built around Intel Core Ultra 9 (Series 2) processors and Nvidia RTX GPUs. The full portfolio with the Plus and Premium tier, including AMD options, will follow.

■ www.dell.com

Lenovo powers new workstation service

IMSCAD Services has launched WaaS, a ‘Workstation as a Service’ offering, built on Lenovo workstations and hosted in Equinix data centres.

The global service comprises private cloud solutions and rentable workstations, on a per user, per month basis. Contracts run from one to 36 months.

According to IMSCAD, the service is up to 40% cheaper than high-end instances from the public cloud, and the workstations perform faster. Users get a 1:1 connection to a dedicated workstation featuring a CPU up to 6.0 GHz and a GPU with up to 24 GB of VRAM.

“Public cloud pricing is far too high when you want to run graphical applications and desktops,” said CEO Adam Jull. “Our new service is backed by incredible Lenovo hardware and the best remoting software from Citrix, Omnissa (formerly VMware Horizon) and TGX to name a few.”

■ www.imscadservices.com

Nvidia unveils ‘Blackwell’ RTX GPUs

Nvidia has unveiled the consumer-focused RTX 50-Series line-up of Blackwell GPUs.

The flagship GeForce RTX 5090 comes with 32 GB of GDDR7 memory, which would suggest that professional Blackwell Nvidia RTX boards, which are expected to follow soon, could go above the current maximum of 48 GB offered by the Nvidia RTX 6000 Ada Generation.

■ www.nvidia.com

HP to launch 18-inch mobile workstation

HP is gearing up for the Spring 2025 launch of its first-ever 18-inch mobile workstation, which has been engineered to provide up to 200W TDP to deliver more power for next-generation discrete graphics.

The laptop will feature ‘massive memory and storage’, will be nearly the same size as a 17” mobile workstation and will be cooled by 3x turbo fans and HP Vaporforce Thermals.

■ www.hp.com/z

Nvidia reveals AI workstation

Nvidia has announced Project Digits, a tiny system designed to allow AI developers and researchers to prototype large AI models on the desktop.

The ‘personal AI supercomputer’ is powered by the GB10 Grace-Blackwell, a shrunk-down version of the Arm-based Grace CPU and Blackwell GPU system-on-a-chip (SoC).

■ www.nvidia.com
