WS-1640A-PRO-G4 [Threadripper Pro]

This Threadripper Pro workstation is not for everyone, but looks ideal for applications where memory bandwidth is critical, writes Greg Corke

In just a few years AMD’s Threadripper CPU has become synonymous with high-performance workstations. This is especially true in design viz, where rendering tools like V-Ray, KeyShot and Unreal Engine thrive on the plentiful CPU cores.

But Threadripper is not actually a workstation processor. It’s a ‘consumer’ CPU with buckets of multithreaded performance — far more than your average YouTuber or gamer would ever need.

Even though Threadripper has sold well, AMD knew it needed a dedicated workstation CPU in order to properly address the workstation market (just like Intel has done with Xeon), so in summer 2020 it launched Ryzen Threadripper Pro.

Threadripper Pro shares the same core silicon as Threadripper, but has several features that set it apart from its ‘consumer’ sibling. These include more memory channels (8 vs 4), so it has more memory bandwidth; higher memory capacity (2 TB ECC memory vs 256 GB) so it can support larger datasets; and additional PCIe Gen4 lanes (128 vs 64), so it can support more GPUs and SSDs.

While these features can give Threadripper Pro an advantage in some workflows, the downside is the CPU runs at slightly slower clock speeds than consumer Threadripper with equivalent core counts, both in terms of base and boost frequency. How this equates to real-world performance will depend on the application — whether it's bottlenecked by memory bandwidth or CPU frequency.
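To put those extra memory channels into perspective, here is a rough back-of-the-envelope sketch of theoretical peak bandwidth, assuming DDR4-3200 and a standard 64-bit channel. These are our own figures, not AMD's, and real-world throughput will be lower.

```python
# Rough theoretical peak memory bandwidth for DDR4-3200.
# One 64-bit channel: 3,200 MT/s x 8 bytes per transfer = 25.6 GB/s.
BYTES_PER_TRANSFER = 8     # 64-bit DDR4 channel
TRANSFER_RATE_MT_S = 3200  # DDR4-3200

def peak_bandwidth_gb_s(channels: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count."""
    return channels * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000

print(f"Threadripper (4 channels):     {peak_bandwidth_gb_s(4):.1f} GB/s")  # ~102.4 GB/s
print(f"Threadripper Pro (8 channels): {peak_bandwidth_gb_s(8):.1f} GB/s")  # ~204.8 GB/s
```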

Open competition

Threadripper Pro was originally exclusive to Lenovo in the Lenovo ThinkStation P620 workstation. However, in March 2021 AMD opened up the CPU to everyone, resulting in a plethora of new workstations. One of those machines is the WS-1640A-PRO-G4 from Derby-based Workstation Specialists, which can be configured with a choice of three AMD Ryzen Threadripper Pro CPUs — the 16-core 3955WX, 32-core 3975WX or 64-core 3995WX.

The first thing you notice about the machine is its size. At 240 x 547 x 475 mm, the Fractal Design 7 XL chassis is significantly larger than the ThinkStation P620 (165 x 460 x 440mm).

But there's a reason for this. Built around the Extended ATX ASUS Pro WS WRX80E-SAGE SE WiFi motherboard, the WS-1640A-PRO-G4 can support up to four double height GPUs, twice that of the ThinkStation P620. If you're into GPU rendering this is an important consideration.

It also means there's plenty of room for storage expansion. With three on-board PCIe 4.0 M.2 slots and eight SATA ports, you can easily add to our review machine's storage — a 1 TB Samsung 980 PRO PCIe 4.0 M.2 NVMe SSD and 2 TB Seagate Barracuda 3.5-inch HDD. There's a total of eight memory slots, all of which need to be populated in order to make the most of the 8-channel memory architecture. Our review machine was fitted with 128 GB (8 x 16 GB) of 3,200 MHz DDR4, but those who work with huge datasets can go all the way up to 2 TB with 256 GB 2,933 MHz ECC registered modules.

Our review machine came with the 32-core Threadripper Pro 3975WX, which has a 3.5 GHz base frequency and a 4.2 GHz boost. It's well suited to a range of multithreaded workflows, from rendering, photogrammetry and CFD, which typically max out all available cores, to FEA and point cloud processing which, while multithreaded, might use fewer cores.

The machine performed well in our rendering tests but was outshone by the 32-core consumer Threadripper Scan workstation we reviewed earlier this year. It was around 7% slower in KeyShot and 12% slower in V-Ray. In applications like these, memory bandwidth is not as important as frequency. In KeyShot, for example, the Threadripper Pro maintained 3.8 GHz on all cores, but Threadripper hit 4.0 GHz.
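As a rough sanity check, and assuming (a simplification on our part) that rendering performance scales roughly linearly with all-core clock speed, the frequency gap alone accounts for around 5%, which is in the same ballpark as the KeyShot result.

```python
# Rough sanity check: how much of the rendering gap can clock speed explain?
# Assumes performance scales linearly with all-core frequency (a simplification).
threadripper_all_core_ghz = 4.0      # consumer Threadripper in KeyShot
threadripper_pro_all_core_ghz = 3.8  # Threadripper Pro 3975WX in KeyShot

expected_deficit = 1 - threadripper_pro_all_core_ghz / threadripper_all_core_ghz
print(f"Expected deficit from clocks alone: {expected_deficit:.1%}")  # ~5.0%
# Measured in our tests: ~7% slower in KeyShot, ~12% slower in V-Ray.
```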

In point cloud processing software Cyclone Register 360, the gap was much smaller, but Threadripper still had a 1% lead. This could be a case of memory bandwidth and CPU frequency cancelling each other out.

For Threadripper Pro to shine against its consumer counterpart it needs to be used in applications where memory bandwidth is critical, such as CFD or FEA, as it means data can be fed into the CPU much quicker.

Unfortunately, we don’t currently have any engineering simulation software in our testing suite, but we have heard anecdotally that in applications like Ansys Mechanical users might see a significant performance benefit. AMD also told us that when compiling shaders in Unreal Engine, Threadripper Pro has been seen to deliver a 30-40% jump in performance over an overclocked Threadripper CPU. Threadripper Pro’s 8-channel memory should also benefit workflows like video editing and post-production and when running multiple tasks in parallel.

Product spec
■ AMD Ryzen Threadripper PRO 3975WX CPU (32 cores) (3.5 GHz, 4.2 GHz boost)
■ 128 GB (8 x 16 GB) 3,200 MHz 8-channel DDR4 memory
■ 1 TB Samsung 980 PRO M.2 PCIe 4.0 NVMe SSD + 2 TB Seagate Barracuda 3.5-inch HDD
■ Fractal Design 7 XL chassis (240 x 547 x 475 mm, WxDxH)
■ Microsoft Windows 10 Pro 64-bit
■ 36 Months Premium RTB hardware warranty
■ with AMD Radeon Pro W5500 (8 GB): £4,899 (Ex VAT)
■ with Nvidia RTX A6000 (48 GB) GPU: £8,199 (Ex VAT)
■ workstationspecialists.com

Graphics

Our review machine came with two GPUs to test: the AMD Radeon Pro W5500 (8 GB) and the new Nvidia RTX A6000 (48 GB). The difference in performance is huge, but so is the price, and not all workflows need such high levels of graphics processing.

We'd recommend the AMD Radeon Pro W5500 in workstation workflows like simulation and point cloud processing where graphics requirements are quite low, but the GPU still delivers good all round performance and is fully certified for a range of 3D applications. If you're into real-time visualisation, VR or GPU rendering, the Nvidia RTX A6000 is a phenomenal GPU. And, if you really want to beef up the GPU rendering capabilities, you can pack four of these double width cards inside, something you can only do on very few other workstations.

The verdict

The WS-1640A-PRO-G4 is an excellent, well-built workstation, ideal for the most demanding of users. But it's not for everyone. While the superior memory bandwidth of Threadripper Pro should benefit certain workflows like engineering simulation and some aspects of design viz, those that simply do ray trace rendering may still be better off with consumer Threadripper, especially as it's cheaper. The caveat is if you work with huge datasets: Threadripper Pro goes all the way up to 2 TB, while Threadripper peaks at 256 GB. In summary, get to know your applications, workflows and the size of your datasets before you invest.

CAD workstation round up

The latest workstations for CAD and BIM-centric workflows: 11th Gen Intel Core (up to 8 cores) and AMD Ryzen 5000 (up to 16 cores) to go beyond 3D design and into the realms of rendering, reality modelling and simulation

1

BOXX Apexx S3

BOXX has built a major part of its business around overclocking and with the BOXX Apexx S3 it permanently boosts 11th Gen Intel Core clock speeds across all eight cores. The Intel Core i7-11700K runs at 5.0 GHz, while the Intel Core i9-11900K runs at 5.3 GHz. As with all BOXX workstations, there's also a huge focus on build quality, with the custom chassis made from 'aircraft-grade' aluminium, offering a strength and rigidity way beyond that of most off-the-shelf cases.

■ boxx.com ■ boxx-tech.co.uk

2

Dell Precision 3450 SFF

The Dell Precision 3450 SFF is a very compact desktop workstation. Measuring a mere 290 x 93 x 293mm it can even be mounted behind a display (pictured left). The ‘small form factor’ chassis does mean a more limited set of processor options, maxing out at the 65W Intel Core i9-11900 or 80W Intel Xeon W-1390. It’s also restricted to entry-level pro GPUs, but the AMD Radeon Pro WX 3200 and Nvidia Quadro P1000 are perfectly suited to 3D CAD and BIM workflows.

■ dell.com/precision

3

Fujitsu Celsius W5011

This 21 litre ‘micro tower’ is not as deep as comparable workstations, as the motherboard and GPU span its entire depth. It offers a choice of 11th Gen Intel Core or Intel Xeon W-1300 processors and can be configured with a massive range of GPUs, from the entry-level CAD-centric Nvidia T400 up to the Nvidia Quadro RTX 5000 (it doesn’t yet offer the new Nvidia RTX A4000 / A5000). Other features include up to 128 GB of memory, multiple drives and tool-less access.

■ fujitsu.com

4

Lenovo ThinkStation P350 Tiny

Tiny by name, tiny by nature — this is the smallest workstation on the planet, measuring a mere 37 x 183 x 179mm. However, it still has everything you need for mainstream CAD and BIM workflows, including an 11th Gen Intel Core i9 CPU (8 cores, 5.2 GHz), up to 64 GB of DDR4 3200 MHz memory and a choice of Nvidia P1000 or T600 GPUs. There’s no room for a HDD but with an M.2 PCIe Gen4 NVMe SSD up to 2 TB there’s still plenty of storage. Plus, built in WiFi.

■ lenovo.com/workstations

5

Boston VENOM R41-10NP

Boston offers a huge variety of desktop workstations in its Venom range, from Intel Core and Intel Xeon, to AMD Threadripper, Threadripper Pro and AMD Epyc. This AMD Ryzen 5000 machine can be fitted with optional liquid cooling for 'maximum performance and whisper quiet operation' and matched with up to 128 GB of DDR4 3,200 MHz stock or 4,733 MHz overclocked memory. All of Boston's workstations are available to lease.

■ boston.co.uk

6

Broadberry CyberStation SFF

With its 250 x 203 x 367 mm Fractal Design Core 500 chassis, the Broadberry CyberStation SFF is one of the smallest AMD Ryzen 5000 workstations. This, together with the built-in WiFi, makes it well suited to home workers. Despite its size it can still take a whole host of pro GPUs, up to the Nvidia RTX A6000. Two RAM slots on the Gigabyte AMD Ryzen X570 I AORUS PRO motherboard mean it's limited to 64 GB, but that's still plenty for most CAD-centric workflows.

■ broadberry.co.uk/amd-ryzen-workstations

7

InterPro IPW-R9

As the name suggests, the InterPro IPW-R9 features a choice of 3rd Gen AMD Ryzen 9 CPUs, including the 12-core Ryzen 9 5900X and 16-core Ryzen 9 5950X. To keep clock speeds running as high as possible for longer periods, the UK firm uses a range of Corsair all-in-one liquid CPU coolers. Different BIOS profiles can be created for customers, matched to their workflows, for example allowing higher clock speeds by temporarily sacrificing cores.

8

Novatech ProStation WR7-WX41

The Novatech ProStation WR7-WX41 is built around the AMD Ryzen 5000 Series with a choice of three CPUs — the 8-core 5800X, 12-core 5900X and 16-core 5950X. It's fully customisable, but not just core components like memory, graphics and storage. Customers can also choose from 15 different CPU coolers (air or liquid) and nine different chassis, from full towers like the Phanteks Enthoo Pro (pictured) to 4U rack mounts like the Chenbro RM41300G.

9

Overclockers RENDA

It's not hard to guess what this UK firm specialises in. The Overclockers RENDA workstation is all about pushing the limits of performance, while maintaining stability. Professional overclocker Ian Parry (aka 8Pack) heads up the R&D, delivering hand-built machines based on each customer's workflow requirements. With a custom water cooling solution he says he can push the AMD Ryzen 9 5950X to 5.1 GHz on one core and 4.6 GHz on all cores.


10

BIMBOX Stryker III

BIMBOX is laser focused on the AEC sector and has extensive experience of Revit, Enscape, Leica Cyclone, V-Ray, Unreal Engine and many others — and, importantly, what makes them tick.

The firm takes overclocking extremely seriously. For its Stryker III workstation it ‘delids’ the Intel Core i9-11900K CPU, taking off the standard heat spreader and mounting its own liquid cooler directly onto the silicon. This brings down the CPU temperature considerably so it can safely run at 5.3 GHz on all cores.

BIMBOX is based in the US, but with the help of Ingram Micro its machines will soon be built, sold and supported in the UK and other countries.

11

HP Z2 G8 SFF

With its 338 x 308 x 100mm chassis, the HP Z2 G8 SFF is slightly bigger than the Dell Precision 3450 SFF (top left) but has the option of more powerful processors. These include the 125W Intel Core i9-11900K and the Nvidia Quadro RTX 3000, which extends the reach of the workstation beyond 3D CAD and BIM and into the realms of entry-level viz.

According to HP, the Z2 G8 SFF offers ‘Unthrottled performance’ thanks to Z’s ‘industry-leading’ thermals that keep the processor and graphics card cool, so they can run at max performance for extended periods of time. ‘Unthrottled performance’ also extends to the new PCIe Gen 4 Samsung PM9A1 SSD.

12

BOXX Apexx Denali

With its Apexx Denali, BOXX was one of the first workstation manufacturers to offer a Ryzen 5000 Series workstation.

It uses the same compact custom ‘aircraft-grade’ aluminium chassis as the Intel-based Apexx S3 and at 174 x 388 x 452mm it’s smaller than most AMD Ryzen 5000 tower workstations. But that doesn’t come at the expense of expandability. The Apexx Denali can house up to two high-end Nvidia RTX, Nvidia GeForce or AMD Radeon Pro GPUs and two 3.5-inch Hard Disk Drives (HDDs).

To keep the processor running at peak frequencies it uses a liquid-cooled closed loop system with a sizeable radiator.

13

Armari Magnetar

The Armari Magnetar V16R-RA850G2-2S is one of the smallest AMD Ryzen 5000 Series workstations out there, measuring a mere 360 x 87 x 400mm. The custom chassis features a high-quality Japanese steel frame which was designed in-house by the specialist UK manufacturer.

Unlike many other small form factor workstations, there is no compromise on graphics and the workstation can support one dual slot GPU like the AMD Radeon Pro W6800 or two single slot GPUs like the Nvidia RTX A4000.

Custom fans and a 14cm all-in-one liquid CPU cooler help maintain peak performance, while still preserving quiet operation.

Best lightweight workstation laptops 2021

Our top picks for ultra-portable mobile workstations to take CAD and design visualisation on the road — all under 20mm and most below 2kg

Dell Precision 5760

The Dell Precision 5760 stands out as the only thin and light 17-inch mobile workstation from a major vendor. It is a replacement for the Dell Precision 5750 but features an enhanced thermal design including dual output fans, a vapour chamber and a hidden exhaust venting through the hinge.

Like the 15-inch Dell Precision 5560 (see top right) it features a combination of aluminium and carbon fibre for the chassis and a 94% display to body ratio thanks to the 4-sided InfinityEdge, 16:10 aspect ratio display.

The thin (8.67 mm - 13.15 mm) and light (2.15kg) design means some compromise on graphics, with the Nvidia RTX A2000 (4 GB) and Nvidia RTX A3000 (6 GB) being the only options, although the latter is 'VR Ready'. However, it offers the same broad choice of 45W 11th Gen Intel Core and Xeon CPUs and supports up to 4 TB of PCIe Gen 4 SSDs and 64 GB of DDR4 3,200 MHz memory.

HP ZBook Studio G8

The 15.6-inch HP ZBook Studio G8 is HP's first mobile workstation to offer both pro and consumer graphics options in the same machine, up to the Nvidia RTX A5000 (16 GB) or GeForce RTX 3080 (16 GB). Both GPUs are ideal for design viz, VR and GPU rendering, but as the laptop is very slim (17.5mm) we expect the same GPU might run faster in the thicker (22.8mm) HP ZBook Fury G8 15, which should offer better cooling and increased power draw.

The HP ZBook Studio G8 offers a choice of 11th Generation Intel Core H-Series processors up to the Intel Core i9-11950H, but only up to 32 GB RAM, which might be a little light for some workflows. It also features an optional HP DreamColor display with a 120Hz refresh rate, a billion on-screen colours, 100% DCI-P3, and 'end-to-end' colour accuracy with Pantone validation. It starts at 1.79kg.

■ hp.com/z

MSI Creator Z16

MSI’s new pro-focused laptop marks a change in aesthetics for the Taiwanese company. The slimline 16mm chassis is made from CNC-milled aluminium with a ‘Lunar Gray’ finish. It starts at 2.2kg.

With a 16-inch 16:10 aspect ratio display you get a bit more viewing space than the traditional 16:9. ‘True Pixel technology’ means extremely accurate colours and the display is hardware calibrated in the factory to give 100% coverage of the DCI-P3 colour gamut. QHD+ (2,560 x 1,600) resolution means pixel density is lower than a typical 4K (3,840 x 2,160) laptop display.

The Z16 features a choice of 11th Gen Intel Core H series processors, an Nvidia GeForce RTX 3060 laptop GPU with Nvidia Studio drivers, up to 64 GB memory and up to 4 TB of storage spread across two M.2 NVMe PCIe Gen4 SSDs.

■ msi.com

Dell Precision 5560

The Dell Precision 5560 wins hands down when it comes to portability. It’s the thinnest and lightest out of all the 15.6-inch mobile workstations — a mere 7.7mm at the front, 11.64mm at the rear and starting at 1.84kg. And with ultra thin bezels, it’s also notably smaller than comparable machines.

In order to achieve this sleek aesthetic, it only includes entry-level graphics options, namely the Nvidia T1200 (4 GB) and Nvidia RTX A2000 (4 GB), which are best suited to 3D CAD / BIM and entry-level viz workflows. However, there's no compromise on the CPU with options going up to the Intel Xeon W-11955M (8 Core, 2.60 GHz up to 5.00 GHz). The laptop supports up to 64 GB of DDR4 3,200 MHz memory and 4 TB of NVMe PCIe 4.0 storage. The IPS 4K 'Gorilla Glass' display is also top notch — 500 nits, 100% AdobeRGB and 99% DCI-P3.

■ dell.com/precision

Lenovo ThinkPad P1 Gen 4

The first three generations of this thin and light mobile workstation featured a 15.6-inch display and CAD-focused pro graphics. The G4 edition is a 'clean sheet' design with a 16-inch display and higher-powered GPUs, including the 'professional' Nvidia RTX A5000 (16 GB) and 'consumer' Nvidia GeForce RTX 3080 (16 GB). Memory and storage capacity remains the same with up to 64 GB DDR4 3,200 MHz and up to two 2 TB M.2 NVMe PCIe Gen4 SSDs.

To accommodate the higher-end GPUs, which draw significantly more power than those in previous generation ThinkPad P1s, Lenovo has developed a new thermal design. It is also using AI to dynamically manage the 'cooling budget'. For example, if a workflow is dependent on both the CPU and the GPU, it might set the Total Graphics Power (TGP) to 80W, whereas if a workflow is totally reliant on the GPU, such as GPU rendering, it could go as high as 90W or 100W.

Despite more powerful GPUs and a larger display, the ThinkPad P1 Gen 4's carbon fibre and magnesium alloy chassis has only increased slightly in size and weight – 361.8 (w) x 245.7 (d) x 18.4mm (h) and starting at 1.81kg.

■ lenovo.com/workstations

Microsoft Surface Book 3 (15-inch)

The Microsoft Surface Book 3 launched in 2020 so is the oldest machine in this round up, but it warrants inclusion because it offers something different. At the push of a button, you can remove the touchscreen display and turn it into a tablet. And with the optional pressure sensitive Surface Pen, use it for precision sketching.

The 15-inch display has a resolution of 3,240 x 2,160 and an aspect ratio of 3:2, which is deeper than all the other machines.

As it’s last year’s model, the Surface Book 3 features a 10th Gen Intel Core processor - the Core i7-1065G7. With a boost of 3.9 GHz, performance in CAD will be OK, but with four cores and a base clock of 1.5 GHz it will be significantly slower than others in multi-threaded workflows like rendering.

For graphics, you have the option of the Nvidia Quadro RTX 3000 (6 GB), which is designed for entry-level viz, but its ‘Max-Q Design’ means it will run slower than other machines with the same GPU.

The Microsoft Surface Book 3 has a thickness of between 15mm and 23mm and weighs 1.9kg with the keyboard and 0.81kg without.

■ surface.com

AMD Radeon Pro W6800

This beast of a card is the first pro GPU from AMD with hardware-based ray tracing built in. With a whopping 32 GB of on board memory it’s designed for the most demanding arch viz workflows, writes Greg Corke

Price $2,250

amd.com/radeonpro

It’s been a long time coming but AMD has finally delivered its first professional GPU with hardware ray tracing built in. And with 32 GB of VRAM, the AMD Radeon Pro W6800 is a beast of a graphics card.

Priced at $2,249, the Radeon Pro W6800 goes head-to-head with the 16 GB Nvidia RTX A4000 ($1,000) and 24 GB Nvidia RTX A5000 ($2,250), both of which we review on page WS28.

In terms of raw performance, the Radeon Pro W6800 sits somewhere between AMD’s ‘consumer’ Radeon RX 6800 and Radeon RX 6800 XT. But as a workstation-class card there are several key differences.

First, it will be certified for a wide range of professional applications, including all the major CAD and BIM tools. This can be especially significant for enterprise customers.

It can also support up to six displays, which can be important for powerwalls, and features Error Correcting Code (ECC) memory to protect against crashes. And instead of three axial fans that recirculate air inside the machine, it has a single ‘blower’, which draws in cool air from the top of the card and pushes it out the rear of the machine. This design can be particularly beneficial in multi-GPU workstations.

Perhaps most importantly, the Radeon Pro W6800 has a colossal 32 GB of on-board GDDR6 memory, double that of its consumer counterparts, and more than Nvidia's pro GPUs at the same price point.

Monster memory

32 GB is a huge amount of memory for a GPU, surpassed only by the 48 GB Nvidia RTX A6000 which costs twice as much. It means the Radeon Pro W6800 can handle some seriously demanding visualisation datasets. This could be a huge multi-disciplinary city-scale model with immense detail or one with less geometry but hyper realistic assets such as 8K textures or detailed vegetation.
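For a sense of scale, here is an illustrative estimate of how quickly 8K textures eat into GPU memory. The assumptions are ours: uncompressed 8,192 x 8,192 RGBA8 textures with full mipmap chains, and nothing else resident on the card.

```python
# Illustrative estimate of GPU memory used by uncompressed 8K textures.
# Assumes 8,192 x 8,192 RGBA8 (4 bytes per pixel) with a full mipmap chain (~1.33x),
# and ignores geometry, framebuffers and texture compression.
WIDTH = HEIGHT = 8192
BYTES_PER_PIXEL = 4
MIPMAP_OVERHEAD = 4 / 3

base_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1024**2  # ~256 MB
with_mips_mb = base_mb * MIPMAP_OVERHEAD              # ~341 MB

print(f"One 8K texture: ~{with_mips_mb:.0f} MB")
print(f"8K textures that fit in 32 GB: ~{32 * 1024 / with_mips_mb:.0f}")
print(f"8K textures that fit in 16 GB: ~{16 * 1024 / with_mips_mb:.0f}")
```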

It’s a huge step up from its predecessor, the AMD Radeon Pro W5700, which only had 8 GB and highlights AMD’s ambitions for high-end design viz and real-time ray tracing.

The new GPU features enhanced Compute Units (CU) with dedicated ‘Ray Accelerators’. As this is AMD’s first pro GPU with hardware ray tracing, there aren’t currently a huge number of applications that can take advantage of its ‘Ray Accelerators’, but this is changing.

The list currently includes applications that support DirectX Raytracing (DXR), such as Unreal Engine. Also, any that feature Radeon ProRender 2.0, the latest version of AMD's physically-based rendering engine. This includes Solidworks Visualize, Acca Software, Autodesk Inventor, Rhino, Autodesk Maya, and Blender.

Looking to the future, it will also extend to any application that supports Vulkan ray tracing, including those in development at Solidworks (Project Romulan - tinyurl.com/SW-graphics), Autodesk (One Graphics System - tinyurl.com/Revit-GPU) and Enscape.

The Radeon Pro W6800 will not accelerate ray tracing in Nvidia RTX-enabled applications such as Luxion KeyShot, Chaos V-Ray, Chaos Vantage, Enscape 3.0 and others.

Of course, the Radeon Pro W6800 can also be used for many other applications that don’t rely on hardware ray tracing. This includes those that use the OpenGL or DirectX graphics APIs, including real-time design viz tools like Lumion or Twinmotion, Virtual Reality (VR) or photogrammetry software.

The GPU is very much focused on viz and is not optimised for FP64 (Double Precision) code, so applications like engineering simulation will likely continue to be best served by the AMD Radeon Pro VII ($1,899).

Viewport boost

The Radeon Pro W6800 supports a new pro driver feature called Radeon Pro Viewport Boost, which is designed to reduce latency and boost viewport navigation performance.

It detects when a 3D model is moving quickly in the viewport, then automatically drops the resolution in specific areas to reduce the number of pixels the GPU needs to process. Then, as soon as that movement stops, it restores the full pixel count. According to AMD, this can increase Frames Per Second (FPS) dramatically without impacting the visual experience.

[Charts: Solidworks 2021 SP3 (OpenGL) SPECapc benchmark scores (shaded with edges; RealView, shadows & AO) and Solidworks Visualize 2021 SP3 (ProRender) render times (1,000 passes, denoising disabled; 100 passes, denoising enabled; 1,500 x 1,500 resolution) for the AMD Radeon Pro W5700, AMD Radeon Pro W6800, Nvidia Quadro RTX 4000, Nvidia RTX A4000 and Nvidia RTX A5000]

AMD Radeon Pro Viewport Boost currently works with Revit, 3ds Max, Twinmotion and Unreal Engine (for packaged projects only, not currently Unreal Engine Editor). Support for other applications is coming soon.

We explore this in more detail on page WS26

Specifications

The AMD Radeon Pro W6800 is the first workstation GPU to be based on AMD’s 7nm RDNA 2 architecture.

AMD states peak FP32 Throughput (Single Precision) as 17.83 Teraflops of Compute Performance. It is not optimised for FP64 (Double Precision).

With six Mini DisplayPort outputs it can drive up to six displays at 5K resolution or up to two displays at 8K resolution.

The board itself is full height, double slot, with a peak power of 250W. It requires a 6-pin and an 8-pin power connector and should fit most mid-sized tower chassis.

The AMD Radeon Pro W6800 is a PCIe 4.0 graphics card. While it is fully compatible with older PCIe 3.0 workstations, it’s designed to work best with PCIe 4.0 workstations. With double the bandwidth of PCIe 3.0, data can theoretically be fed into the GPU much quicker, although it won’t make a difference in all workflows.
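To put 'double the bandwidth' into rough numbers, here is a simple sketch of theoretical one-way throughput for a x16 link. The figures are ours and ignore protocol overheads beyond line encoding.

```python
# Theoretical peak throughput of a x16 PCIe link in one direction,
# using 128b/130b line encoding; real-world figures will be lower.
def x16_bandwidth_gb_s(gigatransfers_per_s: float) -> float:
    lanes = 16
    encoding_efficiency = 128 / 130
    return gigatransfers_per_s * encoding_efficiency * lanes / 8  # bits -> bytes

print(f"PCIe 3.0 x16: ~{x16_bandwidth_gb_s(8.0):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{x16_bandwidth_gb_s(16.0):.1f} GB/s")  # ~31.5 GB/s
```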

PCIe 4.0 compatible CPUs include 11th Generation Intel Core, Intel Xeon W-1300, AMD Ryzen 5000, AMD Ryzen Threadripper 3900X and Threadripper Pro 3900WX series.

The AMD Radeon Pro W6800 is also designed to work better with AMD CPUs with AMD Smart Access Memory. This essentially gives the CPU better access

to the GPU’s onboard memory. AMD says it unlocks higher performance for ‘key professional workloads’ but did not elaborate further.

The Radeon Pro W6800 also includes 128 MB of AMD Infinity Cache, a 'last-level' data cache integrated on the GPU die designed to reduce latency and power consumption.

The Radeon Pro W6800 on test

We put the AMD Radeon Pro W6800 through a series of real-world application benchmarks, for GPU rendering, real-time visualisation and 3D CAD.

All tests were carried out using the AMD Ryzen 5000-based Scan 3XS GWPME A132R workstation (see page WS12 for a full review). Resolution was set to 4K (3,840 x 2,160) and we used AMD’s enterprise 21.Q1 graphics driver.

For comparison, we used AMD’s previous generation ‘RDNA’ workstation GPU, the AMD Radeon Pro W5700 (8 GB), plus Nvidia’s brand new ‘Ampere’ workstation GPUs, the Nvidia RTX A4000 (16 GB) and Nvidia RTX A5000 (24 GB), which we review on page WS28.

Unreal Engine 4.26

Over the past few years Unreal Engine has established itself as a very prominent tool for design viz, especially in architecture and automotive. It was one of the first applications to use GPU-accelerated real-time ray tracing, which it does through Microsoft DirectX Ray tracing (DXR). It means the AMD Radeon Pro W6800 is fully compatible.

For testing, we used two datasets, both freely available from Epic Games: an arch viz interior of a small apartment and the Automotive Configurator, which features an Audi A5 convertible. Both scenes were tested with ray tracing enabled (DirectX Ray tracing (DXR)) and without (DirectX 12 rasterisation).

The Radeon Pro W6800 did well with DirectX 12 rasterisation, showing a vast improvement over the Radeon Pro W5700, and sitting somewhere between the RTX A4000 and RTX A5000. With real-time ray tracing enabled, however, it fell notably behind both Nvidia GPUs. Without hardware ray tracing built-in, the Radeon Pro W5700 pretty much ground to a halt.

[Charts: Unreal Engine 4.26 (DirectX 12) Frames Per Second at 4K (3,840 x 2,160) for the Audi car configurator and Arch Viz interior models, with ray tracing disabled (rasterisation) and enabled (DXR), comparing the AMD Radeon Pro W5700, AMD Radeon Pro W6800, Nvidia Quadro RTX 4000, Nvidia RTX A4000 and Nvidia RTX A5000]

[Charts: Lumion 11.5 (DirectX 12) real-time Frames Per Second at 4K and 8K (7,680 x 3,840) render times for the architectural house and colossal 28 GB building scenes, comparing the same five GPUs]

Lumion 11.5

Lumion is a real-time rendering tool popular with architects. The 11.5 release uses DirectX 12 rasterisation. It does not currently support hardware-based ray tracing.

The software can work with 8K textures and has a vast object library including trees with leaves that move in the wind, all of which can place huge demands on GPU processing and memory.

We tested the GPUs in two ways: one, measuring real-time 3D performance in terms of Frames Per Second (FPS); and two, recording the time it takes to render an 8K scene.

Lumion supplied us with two datasets: a standard architectural house with surrounding vegetation, which will fit into 8 GB of GPU memory; and a colossal building model which needs 28 GB, more than the capacity of the Nvidia RTX A4000 (16 GB) and RTX A5000 (24 GB).

It came as no surprise that the AMD Radeon Pro W6800 came out top when testing the 28 GB model as it was the only GPU able to load the entire dataset into memory. The Nvidia RTX A4000 (16 GB) and RTX A5000 (24 GB) really struggled, especially in real-time 3D where it was very hard to navigate the scene.

With the smaller scene, however, the Nvidia RTX A5000 demonstrated a clear lead over the Radeon Pro W6800 and the RTX A4000 also stood up well, edging out the Radeon Pro W6800 when rendering.

Enscape 3.0

Enscape is a real-time viz and VR tool for architects that delivers very high-quality graphics in the viewport. The software has used elements of ray tracing for some time and version 3.0 is RTX-enabled, so hardware ray tracing is supported on Nvidia RTX GPUs. Later versions will use the more modern Vulkan API and support ray tracing on both Nvidia and AMD GPUs.

For our tests, we used a large scene of a building complex and its surrounding area in Enscape 3.0 (non RTX). At 9.5 GB, the GPU memory requirements of this model are relatively high, but Enscape models can be much larger.

In terms of performance, the Radeon Pro W6800 delivered a very smooth experience at 29 FPS, more than double that of the Radeon Pro W5700. It edged out the Nvidia RTX A4000 but was a bit behind the Nvidia RTX A5000.

Autodesk VRED Professional 2022

Autodesk VRED Professional is an automotive-focused 3D visualisation, virtual prototyping and VR tool. It uses OpenGL and delivers very high-quality visuals in the viewport. It offers several levels of real-time anti-aliasing (AA), which is important for automotive styling, as it smooths the edges of body panels. However, AA calculations use a lot of GPU resources, both in terms of processing and memory.

We tested our automotive model with AA set to 'off' and 'ultra-high'. As we have seen previously with AMD GPUs, the AMD Radeon Pro W6800 did OK with anti-aliasing set to off, but was still significantly behind the RTX A5000. With anti-aliasing enabled, however, performance dropped considerably, with even the RTX A4000 taking a substantial lead.

'' The AMD Radeon Pro W6800 stands out from the competition due to its substantial 32 GB of memory, surpassed only by the Nvidia RTX A6000 which costs twice as much ''

Solidworks Visualize 2021

The name of this GPU-accelerated, physically-based renderer is a bit misleading, as it works with many more applications than the CAD application of the same name. It can import models from PTC Creo, Solid Edge, Catia and Inventor, as well as several neutral formats. The software was initially programmed to work with Nvidia Iray and, more recently, Nvidia RTX. However, in the 2020 release, AMD Radeon ProRender was added, so users now have a choice of two rendering engines. Both support denoising, a post-processing technique that filters out noise from unfinished / noisy images and means you can get better-looking renders with significantly fewer rendering passes.

We tested both AMD and Nvidia GPUs with Radeon ProRender using the PC model from the SPECapc for Solidworks 2021 benchmark. We rendered at 1,500 x 1,500 resolution with 1,000 passes (denoising disabled) and 100 passes (denoising enabled) with accurate quality. Both settings produced excellent visual results.

With denoising enabled, there was little between the Radeon Pro W6800 and Nvidia's Ampere GPUs, but the RTX A5000 had a bigger lead with denoising disabled.

Solidworks 2021

While most CAD applications won't benefit from any GPU more powerful than the Nvidia Quadro P2200 or AMD Radeon Pro W5500, Solidworks 2021 is an exception. By using OpenGL 4.5, a more modern version of the popular graphics API, more algorithms can be pushed onto the GPU, so there is a benefit to higher performance cards.

Even so, the application is still CPU limited to some extent, so the performance benefit of more powerful GPUs isn’t as big as you’d expect from a dedicated real-time viz tool.

Like most CAD tools, the most popular way to view models in Solidworks is in shaded with edges mode. Using the SPECapc for SolidWorks 2021 benchmark we saw a small improvement over the Radeon Pro W5700, although the Radeon Pro W6800 was behind both Nvidia GPUs.

With six Mini DisplayPort outputs the W6800 can drive up to six displays at 5K resolution or up to two displays at 8K

With a peak power of 250W, the W6800 requires a 6-pin and an 8-pin power connector

[Charts: Enscape 3.0 (OpenGL) large building complex, Autodesk VRED Professional 2022 (OpenGL) automotive model with no anti-aliasing and with ultra-high anti-aliasing, all at 4K (3,840 x 2,160), plus VRMark Blue Room (DirectX 11) and Cyan Room (DirectX 12) Frames Per Second, comparing the AMD Radeon Pro W5700, AMD Radeon Pro W6800, Nvidia Quadro RTX 4000, Nvidia RTX A4000 and Nvidia RTX A5000]

Arch viz studio Beehive pushed the W6800 to its limits in Lumion on 'Aedas City', a visualisation project that features six of the international architecture firm's building designs. It uses 28 GB (yes, 28 GB) of GPU memory

Solidworks also features more realistic display styles for viewing models in real time. Solidworks RealView, which is only supported by pro GPUs, adds realistic materials and supports environment reflections and floor shadows. Meanwhile, ambient occlusion adds more realistic shadows and helps bring out details.

Both viewing styles are more GPU-intensive, so performance is less limited by the frequency of the CPU. In our tests, we saw a bigger benefit to the more powerful GPUs when RealView, shadows and ambient occlusion were enabled.

We were unable to test the Nvidia RTX A5000 as Solidworks 2021 Service Pack 3 did not recognise the card. We expect this to be fixed in SP4, out soon.

VRMark

We also tested with VRMark, a dedicated Virtual Reality benchmark that uses both DirectX 11 and DirectX 12. It's biased towards 3D games, so not perfect for our needs, but should give a good indication of the performance one might expect in 'game engine' viz tools, although all datasets are different.

The Radeon Pro W6800 came out top in the 'Cyan Room' test, which measures DirectX 12 performance. AMD itself has highlighted how its 'RDNA 2' architecture performs well in DirectX 12 applications.

The verdict

The Radeon Pro W6800 stands out from the competition due to its substantial 32 GB of memory, surpassed only by the Nvidia RTX A6000, which costs twice as much. But you have to take design viz very seriously to need such a huge amount.

Architectural visualisation studio Beehive certainly does. It pushed the W6800 to its limits in Lumion, while working on ‘Aedas City’, a visualisation project that features six of the international architecture firm’s building designs.

And because the project could be held entirely within GPU memory, it managed to massively reduce render times for an 891-frame video – from 36 hours and 11 mins (with the 24 GB Nvidia Quadro RTX 6000 GPU) to 9 hours and 27 mins (with the Radeon Pro W6800).
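Using the figures Beehive quotes, that works out at roughly a 3.8x speed-up, or around two and a half minutes per frame down to under 40 seconds.

```python
# Render-time comparison for Beehive's 891-frame 'Aedas City' video,
# using the times quoted above.
frames = 891
quadro_rtx_6000_s = (36 * 60 + 11) * 60   # 36 hours 11 mins, in seconds
radeon_pro_w6800_s = (9 * 60 + 27) * 60   # 9 hours 27 mins, in seconds

print(f"Speed-up: {quadro_rtx_6000_s / radeon_pro_w6800_s:.2f}x")            # ~3.83x
print(f"Per frame (Quadro RTX 6000):  {quadro_rtx_6000_s / frames:.0f} s")   # ~146 s
print(f"Per frame (Radeon Pro W6800): {radeon_pro_w6800_s / frames:.0f} s")  # ~38 s
```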

There are big benefits for projects like this, but it is an extreme example. Most design-centric visualisation workflows require significantly less GPU memory, although with ever increasing demands for realism and resolution, this will likely change in the future. There is also a strong workflow argument for more GPU memory, as you don't have to worry so much about optimising geometry or textures.

For now, in more mainstream viz workflows, AMD faces very stiff competition from Nvidia. The 16 GB Nvidia RTX A4000, for example, generally offers a little less performance than the Radeon Pro W6800 but costs half as much. Meanwhile, the 24 GB Nvidia RTX A5000 offers parity on price, but has a clear performance lead in some workflows and better software compatibility.

One can't help but wonder if AMD has missed a trick by not pricing the Radeon Pro W6800 more aggressively to make it more competitive in workflows where large memory capacity is less important. Or perhaps there's room for a Radeon Pro W6700?

'' There is a strong workflow argument for having so much memory on a GPU, by not having to worry so much about optimising geometry or textures ''

Nvidia also appears to have a clear lead in DXR hardware ray tracing, although this is perhaps to be expected. AMD's ray accelerators are 'first generation' and there is also scope for driver improvements.

AMD is innovating in other areas, however. The Radeon Pro Viewport Boost, for example, is an exciting feature that takes a smarter approach to how precious GPU resources are allocated. And this is certainly one to watch for the future.

We also wait with interest to see how the forthcoming 8 GB AMD Radeon Pro W6600 shapes up. At $649 it should hit the sweet spot for CAD users who also want a real-time 3D, ray tracing or VR capability. Nvidia doesn't yet have a pro GPU with hardware ray tracing in this entry-level market segment.

AMD Radeon Pro Viewport Boost

In recent years AMD has allocated significant resources to the development of its Radeon Pro graphics drivers. The new 21.Q2 release promises to increase 3D performance by dynamically reducing viewport resolution, and without impacting the visual experience. Radeon Pro Viewport Boost works with any AMD Radeon Pro GPU but Greg Corke tests it out with the new Radeon Pro W6800

tinyurl.com/viewport-boost

At the beginning of June AMD launched the AMD Radeon Pro W6800, a monster 32 GB professional GPU, which we review in-depth on page 20.

Such a huge amount of on-board memory certainly makes the W6800 stand out from other GPUs in its class. However, the ‘RDNA 2’ workstation card also features a new pro graphics driver feature called Radeon Pro Viewport Boost, which is designed to reduce latency and boost viewport navigation performance.

The idea behind the technology is simple but smart. It detects when a 3D model is moving quickly in the viewport, then dynamically drops the resolution to reduce the number of pixels the GPU needs to process. Then, as soon as that movement stops, it restores the full pixel count. According to AMD, this can increase Frames Per Second (FPS) dramatically without impacting the visual experience.
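Conceptually, this is a form of dynamic resolution scaling. The sketch below is purely our own illustration of the general idea; AMD has not published its driver logic, and the function names and thresholds here are hypothetical.

```python
# Conceptual illustration of dynamic resolution scaling during viewport navigation.
# This is NOT AMD's implementation; the names and thresholds are hypothetical.
def choose_render_scale(camera_speed: float, min_scale: float = 0.5,
                        motion_threshold: float = 1.0) -> float:
    """Return the fraction of native resolution to render at for this frame."""
    if camera_speed <= motion_threshold:
        return 1.0  # static or slow movement: render at full resolution
    # Fast motion: drop towards the user-configured minimum scale.
    return max(min_scale, 1.0 / camera_speed)

for speed in (0.0, 1.5, 4.0):
    scale = choose_render_scale(speed)
    print(f"camera speed {speed}: render at {scale:.0%} of native resolution")
```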

AMD Radeon Pro Viewport Boost currently works with Autodesk Revit 2021, Autodesk 3ds Max 2021, Twinmotion and Unreal Engine 4 (for packaged DirectX 11 projects only – not currently DirectX 12 or Unreal Engine Editor). Support for other applications is coming soon.

It works best in GPU limited workflows. i.e. those where the GPU is being pushed to its limits and is the bottleneck in the workstation. And, with this in mind, it should deliver the biggest benefits at higher resolutions (4K and above), with larger models and when visual quality settings are maxed out.

Testing Radeon Pro Viewport Boost

Radeon Pro Viewport Boost is enabled in the AMD Radeon Pro 21.Q2 driver under graphics settings. Users have control over the minimum dynamic resolution that the application viewport will drop down to, expressed as a percentage of its native resolution. It can be set between 50% and 83.3%. The lower the value, the bigger the potential performance boost.
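Assuming the percentage applies to each linear dimension of a 4K viewport (our reading of the setting, not something AMD has confirmed to us), the pixel savings look like this:

```python
# Pixel counts at different minimum dynamic resolution settings,
# assuming the percentage applies to each linear dimension of a 4K viewport.
NATIVE_W, NATIVE_H = 3840, 2160

for scale in (1.0, 0.833, 0.5):
    w, h = int(NATIVE_W * scale), int(NATIVE_H * scale)
    share = (w * h) / (NATIVE_W * NATIVE_H)
    print(f"{scale:.1%}: {w} x {h} = {w * h / 1e6:.1f} MP ({share:.0%} of native)")
```

At the 50% setting the GPU is shading roughly a quarter of the pixels it would at native 4K, which is where the biggest gains come from.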

To show the extent to which it is working at any given moment, one to four small green dots appear in the top left corner of the viewport – one being the least, four being the most.

The smart thing about Radeon Pro Viewport Boost is that it only works when the model is in fast motion, when the eye is less sensitive to a loss of visual detail.

In both of our Unreal Engine arch viz interior scenes, for example, it only kicked in when ‘running’ (shift, up arrow) and not when ‘walking’ (up arrow only). At 50%, the drop in resolution is clearly visible but only really when you actively look out for it. At 83.3% it was very hard to see any difference.

When modelling in Revit or 3ds Max, lines become more pixelated. But at the speed one tends to pan, rotate or zoom in, to quickly shift focus to a different part of the model, it's really not detrimental to the overall experience.

We tested on a fairly standard 4K (3,840 x 2,160 resolution) 60Hz IPS panel. There may be a bigger discernible difference on higher spec displays.

The performance benefits can be huge. In Unreal Engine 4.26, testing with a Paris interior scene from arch viz artist Benoit Dereau (benoitdereau.com), we saw frame rates more than double (116%) when minimum resolution was set to 50%. In Unreal Engine's freely available Arch Viz interior scene, packaged as a DirectX 11 project, it increased by 59%.

In Twinmotion the boost was around 30% with the 'materials room' demo scene when visual settings were set to 'ultra'. In 3ds Max we saw around a 20% improvement with AMD's 'snow bike' model with high anti-aliasing and 'High Quality' shading.

What about CAD/BIM?

For design viz applications like Unreal Engine and Twinmotion, having the highest quality graphics is always the ultimate goal. However, in 3D CAD and BIM modelling workflows it's usually less important, with the focus instead on the clear representation of geometry.

With this in mind, the benefits of Radeon Pro Viewport Boost in CAD or BIM applications like Autodesk Revit are less clear. In Revit, the most popular way to view BIM models is in 'shaded' mode. However, with this display style enabled we saw no benefit to Radeon Pro Viewport Boost. As with many CAD and BIM tools, the GPU simply isn't stressed enough, so the CPU becomes the bottleneck instead.

It's only when you start ramping up the quality settings that more demands are placed on the GPU and Radeon Pro Viewport Boost can come into effect. And while the performance increases can be large, we only found a few select scenarios where a substantial benefit could be seen.

We tested with four relatively small Revit models and found that if the following criteria were met — realistic display style, smooth lines with anti-aliasing, transparency enabled, and viewport set to perspective mode — then there was a huge performance gain; almost double the Frames Per Second. But without all of those enabled — especially with the viewport set to orthographic — the performance gains were minimal, or there were none at all. We would be interested to learn how the system works with significantly larger Revit models.

'' It's good to see AMD innovating by taking a smarter approach to how GPU resources are allocated. Why bother rendering pixels that most people won't even notice when models are moving at speed? ''

It's also worth pointing out here that most CAD and BIM applications already have a built-in feature to help improve viewport performance when working with large models. In Revit, for example, the 'simplify display during view navigation' feature (which is switched on by default) suspends certain graphics effects and temporarily removes some objects when the model is in motion. Radeon Pro Viewport Boost gives the best performance boost when this feature is enabled, so you are getting a lower res representation of a model that has already been simplified.

Of course, Revit, like many CAD and BIM applications, is renowned for being CPU limited, so it's hardly surprising we found reduced benefits for Radeon Pro Viewport Boost. This is especially true for a high-end graphics card like the Radeon Pro W6800, which is complete overkill for Revit.

In CAD applications that make better use of the GPU, such as Solidworks 2021, AMD Radeon Pro Viewport Boost could have a bigger impact in a broader set of viewing styles.

It’s also important to note that CAD applications are changing, with new graphics engines that use modern APIs like Vulkan to push more processing onto the GPU and reduce the CPU bottleneck. This includes future versions of Solidworks (Project Romulan - tinyurl.com/SW-graphics) and Autodesk Revit (and other Autodesk applications) which will use the new One Graphics System (tinyurl.com/Revit-GPU).

What we think

It’s good to see AMD innovating by taking a smarter approach to how GPU resources are allocated. Why bother rendering pixels that most people won’t even notice when models are moving at speed?

From our tests we see a clear benefit for design visualisation, where applications almost always push the GPU to its limits and visual quality is of paramount importance. We’re less convinced with the broader advantages for CAD and BIM software. In Revit, for example, it appears you have to use a fairly specific combination of visual settings in order to benefit. And, in a workflow where the clear representation of geometry is usually the priority, one would also question how many people actually view models that way.

At the moment, application support is quite limited, but this will grow. We imagine AMD is working on support for Unreal Engine Editor as well as DirectX 12, which should be a big attraction for viz artists, especially those working with huge datasets that approach the substantial 32 GB memory limit of the Radeon Pro W6800.

When AMD first announced Viewport Boost it was exclusive to the Radeon Pro W6800 and W6600 GPUs. AMD has now confirmed that it will be expanding support to prior generation Radeon Pro GPUs as well. So, for those that already own an AMD Radeon Pro GPU, this could be a great way to get more out of your investment.

And it’s perhaps with less powerful GPUs like these, that users will get the biggest benefits. In all of our tests we experienced pretty good viewport performance (most well above 20 FPS) even with Viewport Boost disabled. But it’s when frame rates drop lower, and viewports become choppy, that any performance increase can make a huge difference to practical workflows and become far more important than numbers on charts.

With Radeon Pro Viewport Boost enabled and min resolution set to 50% some pixelation is noticeable on this wall painting when ‘running’ in this Unreal Engine scene. The four green dots (top left) show that Viewport Boost is in full effect

As soon as you stop ‘running’ the four green dots disappear and the full resolution image is instantly restored

In 3ds Max, with Radeon Pro Viewport Boost enabled and min resolution set to 50%, there is little difference between the moving image (left) and the static image (below) although the lettering on the bike is not as sharp

Nvidia RTX A4000 / A5000

Nvidia’s new Ampere-based pro GPUs, the Nvidia RTX A4000 and RTX A5000, offer a big step up from the Turing-based Quadro RTX family. With more memory and significantly enhanced processing, they promise to make light work of demanding realtime ray tracing, GPU rendering and VR workflows, writes Greg Corke

Price $1,000 (A4000) / $2,250 (A5000)

nvidia.com

In February 2021 we reviewed the Nvidia RTX A6000, the first pro desktop GPU to be based on Nvidia’s ‘Ampere’ architecture. With 48 GB of memory and buckets of processing power, the dual slot 300W graphics card is designed for the most demanding visualisation workflows – think city-scale digital twins or complex product visualisations using very hi-fidelity textures, such as those captured from real-life scans.

Of course, the Nvidia RTX A6000 is complete overkill for most architects or product designers who simply want a capable GPU for real-time visualisation, GPU rendering or VR. And it’s here that the new Nvidia RTX A4000 and Nvidia RTX A5000 come into play.

Announced at Nvidia’s GTC event this year, the PCIe Gen 4 ‘Ampere’ Nvidia RTX A4000 and Nvidia RTX A5000 are the replacements for the PCIe Gen 3 ‘Turing’ Nvidia Quadro RTX 4000 and Quadro RTX 5000, which launched in 2019.

The RTX A4000 and A5000 are midrange 'Quadro' GPUs in everything but name. Nvidia might be retiring its long-serving Quadro workstation brand, but the features remain the same.

Both GPUs offer more memory than their consumer GeForce counterparts, are standard issue in workstations from Dell, HP and Lenovo, and come with pro drivers with ISV certification for a wide range of CAD/BIM applications.

And with an estimated street price of $1,000 for the Nvidia RTX A4000 and $2,250 for the Nvidia RTX A5000, they have much more palatable price tags than the Nvidia RTX A6000, which costs $4,650.

Nvidia RTX A4000 (16 GB)

With 16 GB of GDDR6 ECC memory, the Nvidia RTX A4000 offers a big step up from the 8 GB Quadro RTX 4000. 8 GB is fine for mainstream viz workflows, but for more complex projects it can be limiting, so delivering 16 GB in a sub-$1,000 pro GPU is a big step forward. Previously, 16 GB was only available on the 'Turing'-based Quadro RTX 5000.

As you'd expect from Nvidia's new 'Ampere' architecture, the Nvidia RTX A4000 also offers a significant improvement in processing. This can be seen in all areas of the GPU with more CUDA cores for general processing, third-generation Tensor Cores for AI operations and second-generation RT Cores for hardware-based ray tracing. It leads to a substantial performance increase in many different applications (see later on).

Furthermore, as the Nvidia RTX A4000 is a single slot GPU with a max power consumption of 140W delivered through a single 6-pin PCIe connector, it’s available in a wide range of desktop workstation form factors. This includes compact towers like the HP Z2 Tower G8 and Dell Precision 3650.

The board features four DisplayPort 1.4a ports and can drive up to four displays at 5K resolution. It is cooled by a single ‘blower’ type fan, which draws in cool air from the top and bottom of the card, pushes it through a radiator and then directly out of the rear of the workstation chassis. This is in contrast to most consumer GeForce GPUs which use axial fans that recirculate air inside the machine.

There are pros and cons to each design, but with a blower fan you can stack cards within the chassis without having to leave space between them. This means you can get a very good density of GPUs inside a mid-sized chassis.

With the AMD Threadripper Pro-based Lenovo ThinkStation P620, for example, you could get four Nvidia RTX A4000s back-to-back, which could be a very interesting proposition for GPU rendering. Even though the RTX A4000 doesn't support NVLink (so there's no pooling of GPU memory), 16 GB is still a good amount and two, three or four RTX A4000s could work out well in terms of price/performance compared to the more powerful RTX A5000 or A6000.
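As a very rough cost and memory comparison, using only the street prices quoted in this article and ignoring relative performance:

```python
# Rough cost / memory comparison using the street prices quoted in this article.
# Performance is not factored in; the RTX A4000 cannot pool memory (no NVLink).
options = {
    "4x Nvidia RTX A4000": (4 * 1000, "16 GB per GPU, no pooling"),
    "2x Nvidia RTX A5000": (2 * 2250, "24 GB per GPU, 48 GB with NVLink"),
    "1x Nvidia RTX A6000": (4650, "48 GB"),
}

for name, (price, memory) in options.items():
    print(f"{name}: ${price:,} ({memory})")
```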

Another potential use case for high-density multi-GPU is workstation virtualisation using GPU passthrough, where each user gets a dedicated GPU. Again, this workflow looks well suited to the Lenovo ThinkStation P620, which can be configured with up to 64 CPU cores and 2 TB of memory.

Other more niche pro viz features include support for 3D Stereo, Nvidia Mosaic for professional multi-display solutions, and Quadro Sync II, an addin card that can synchronise the display and image output from multiple GPUs within a single system, or across a cluster of systems.

Nvidia RTX A5000 (24 GB)

With 24 GB of GDDR6 ECC memory, the Nvidia RTX A5000 offers only a 50% memory uplift compared to the Quadro RTX 5000 it replaces.

Like the Nvidia RTX A4000 it offers a significant upgrade in all areas of processing — CUDA, Tensor and RT cores.

It’s a double height board, with a max power consumption of 230W which it draws from the PSU via an 8-pin PCIe connector, but it’s still available in compact towers.

The board also features four DisplayPort 1.4a ports and is cooled by a single ‘blower’ type fan, but only draws in cool air from one side of the card.

The Nvidia RTX A5000 supports all the same features as the Nvidia RTX A4000 but differs in two main areas.

One, it supports Nvidia NVLink, so GPU memory can be expanded to 48 GB by connecting two 24 GB GPUs together. Two, it supports Nvidia RTX vWS (virtual workstation software) so it can deliver multiple high-performance virtual workstation instances that enable remote users to share resources. In the Lenovo ThinkStation P620, for example, which we reviewed earlier this year, you could get a very high density of CAD/ BIM users who only need high-end RTX performance from time to time.

Nvidia RTX A5000 inside the AMD Ryzen 5000-based Scan 3XS GWP-ME A132R workstation

Testing the new cards

We put the Nvidia RTX A4000 and Nvidia RTX A5000 through a series of real-world application benchmarks, for GPU rendering, real-time visualisation and 3D CAD.

All tests were carried out using the AMD Ryzen-based Scan 3XS GWP-ME A132R workstation at 4K (3,840 x 2,160) resolution using the latest 462.59 Nvidia driver (see page WS12 for a full review).

For comparison, we used the last two generations of ‘4000’ class Nvidia pro GPUs – the 8 GB ‘Turing’ Nvidia Quadro RTX 4000 (from 2019) and the 8 GB ‘Pascal’ Nvidia Quadro P4000 (from 2017). Three to four years is quite a typical upgrade cycle in workstations, so the intention here is to give a good idea of the performance increase one might expect when upgrading from an older machine (N.B. to see all of the benchmark scores for the Nvidia Quadro P4000 visit tinyurl.com/RTX4000).

We also threw some Nvidia RTX A6000 scores in there. These were done on two different workstations – one with a 32-core Threadripper 3970X and one with a quad-core Intel Xeon W-2125 CPU. While both CPUs have lower frequencies and instructions per clock (IPC) than the Ryzen 5000 in our main test machine, the results should still give a pretty good idea of comparative performance, especially in GPU rendering software.

The results of these tests can be seen in the charts on page WS20 in our review of the AMD Radeon Pro W6800.

Hardware-based ray tracing

It’s been just over two years since Nvidia introduced ‘Turing’ Nvidia Quadro RTX, its first pro GPUs with RTX hardware ray tracing.

In a classic chicken and egg launch, there were very few RTX-enabled applications back then, but this has now changed. For design viz, there’s Chaos V-Ray, Chaos Vantage, Enscape, Unreal Engine, Unity, D5 Render, Nvidia Omniverse, Autodesk VRED, KeyShot, Siemens NX Ray Traced Studio, Solidworks Visualize, Catia Live Rendering and others.

Nvidia RTX gave GPU rendering a massive kick-start and, while there is increased competition from hugely powerful CPUs like the 64-core AMD Threadripper [Pro], we are seeing deeper penetration of GPU rendering tools, especially in workflows aimed at architects, engineers and product designers.

Nvidia RTX is being used to massively accelerate classic viz-focused ray trace renderers like V-Ray, KeyShot and Solidworks Visualize, which we test later on in this article. However, some of the more exciting developments are coming from the AEC sector in tools like Enscape, Chaos Vantage and Unreal Engine, which really make ray tracing ‘real-time’. Vantage, for example, is built from the ground up for real-time ray tracing, so can maximise the usage of the RT cores within the new GPUs.

Chaos Group V-Ray

V-Ray is one of the most popular physically based rendering tools, especially in architectural visualisation. We put the new cards through their paces using the freely downloadable V-Ray 5 benchmark, which has dedicated tests for Nvidia CUDA GPUs, Nvidia RTX GPUs, as well as CPUs.

The results were impressive. In the CUDA test, the Nvidia RTX A4000 was 1.62 times faster than the previous generation Nvidia Quadro RTX 4000 and in the RTX test 1.70 times faster. The lead over the Pascal-based Quadro P4000 was nothing short of colossal – 3.53 times faster in the CUDA test. As the P4000 does not have dedicated RT cores, it could not run the RTX test.

Stepping up to the Nvidia RTX A5000 will give you an additional boost. Compared to the Nvidia RTX A4000 it was between 1.27 and 1.37 times faster.

Interestingly, the RTX A5000 was not that far behind the RTX A6000, which costs more than twice as much.
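For readers who want to sanity-check the relative-performance figures quoted above, the minimal sketch below (our illustration, not part of the original benchmark run) simply divides the raw V-Ray 5 scores listed in the charts towards the end of this article. The GPU names and scores come from those charts; everything else is illustrative Python.

```python
# Derive the relative-performance factors quoted in the text from the raw
# V-Ray 5 benchmark scores in the charts later in this article.

CUDA_SCORES = {            # vpaths, bigger is better
    "Quadro P4000": 290,
    "Quadro RTX 4000": 634,
    "RTX A4000": 1025,
    "RTX A5000": 1299,
}

RTX_SCORES = {             # vrays, bigger is better (the P4000 has no RT cores)
    "Quadro RTX 4000": 919,
    "RTX A4000": 1559,
    "RTX A5000": 2128,
}

def speedup(scores, gpu, baseline):
    """Return how many times faster `gpu` is than `baseline`."""
    return scores[gpu] / scores[baseline]

print(f"A4000 vs RTX 4000 (CUDA): {speedup(CUDA_SCORES, 'RTX A4000', 'Quadro RTX 4000'):.2f}x")  # ~1.62x
print(f"A4000 vs RTX 4000 (RTX):  {speedup(RTX_SCORES, 'RTX A4000', 'Quadro RTX 4000'):.2f}x")   # ~1.70x
print(f"A4000 vs P4000 (CUDA):    {speedup(CUDA_SCORES, 'RTX A4000', 'Quadro P4000'):.2f}x")     # ~3.53x
print(f"A5000 vs A4000 (CUDA):    {speedup(CUDA_SCORES, 'RTX A5000', 'RTX A4000'):.2f}x")        # ~1.27x
print(f"A5000 vs A4000 (RTX):     {speedup(RTX_SCORES, 'RTX A5000', 'RTX A4000'):.2f}x")         # quoted as ~1.37x; rounded chart scores give ~1.36x
```

Running it reproduces the 1.62x, 1.70x and 3.53x factors quoted above, with the A5000-versus-A4000 RTX figure landing at roughly 1.36x–1.37x depending on rounding.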

Luxion KeyShot

KeyShot was a relative latecomer to GPU rendering. But it’s one of the slickest implementations we’ve seen, allowing users to switch between CPU and GPU rendering at the click of a button.

In the KeyShot 10 benchmark, part of the free KeyShot Viewer, the performance leap was even more substantial than in V-Ray. The Nvidia RTX A4000 and Nvidia RTX A5000 outperformed the Quadro RTX 4000 by a factor of 1.89 and 2.51 respectively. And the RTX A5000 was only 20% slower than the RTX A6000.

Solidworks Visualize

The name of this GPU-accelerated physically based renderer is a bit misleading as it works with many more applications than the CAD application of the same name. It can import models from Creo, Solid Edge, Catia and Inventor, as well as several neutral formats.

Since the 2020 release the software has supported Nvidia RT cores and Tensor cores to improve rendering performance with Nvidia RTX GPUs. Users can choose to render scenes with or without denoising enabled.

Denoising is a postprocessing technique based on machine learning that filters out noise from unfinished / noisy images and is the foundation for many RTX-accelerated applications. It means you can get better looking renders with significantly fewer rendering passes.

DS Solidworks reckons that if a scene routinely needs 500 passes without the denoiser, then you may be able to achieve the same rendering quality with 50 passes with the denoiser enabled.
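To put that claim in context, render time in a progressive renderer scales roughly linearly with pass count, so a 10x cut in passes translates into close to a 10x cut in wall-clock time. The short sketch below is our illustration (not DS Solidworks code), using the render times reported in the next paragraphs and in the charts in this article.

```python
# Illustrative check: render time scales roughly linearly with pass count,
# so the denoiser's 10x pass reduction in our test (1,000 -> 100 passes)
# yields close to a 10x time saving. Times in seconds at 4K, taken from the
# Solidworks Visualize charts in this article.
no_denoise = {"RTX A4000": 211, "Quadro RTX 4000": 349, "Quadro P4000": 1029}
denoised   = {"RTX A4000": 22,  "Quadro RTX 4000": 35,  "Quadro P4000": 105}

PASSES_FULL, PASSES_DENOISED = 1000, 100

for gpu, t_full in no_denoise.items():
    per_pass = t_full / PASSES_FULL        # seconds per pass
    saving = t_full / denoised[gpu]        # effective speed-up from denoising
    print(f"{gpu}: {per_pass:.2f} s/pass, ~{saving:.1f}x faster with the denoiser")
```

The roughly 10x savings line up with the 10x pass reduction, which is why the 500-pass versus 50-pass comparison above is a reasonable rule of thumb.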

We tested the stock 1969 Camaro car model at 4K resolution with 1,000 passes (denoising disabled) and 100 passes (denoising enabled) set to accurate quality. Both settings produced excellent visual results.

The RTX A4000 and RTX A5000 delivered the 100-pass render in 22 seconds and 14 seconds respectively. This isn’t the most complex scene, but being able to render at such speeds is quite incredible and can have a profound impact on workflows. In comparison, it took the Quadro P4000 GPU 105 seconds, so you can see just how far things have progressed in four years.

Real-time 3D

While GPU rendering is a major play for the Nvidia RTX A4000 and Nvidia RTX A5000, real-time 3D using OpenGL, DirectX and (in the future) Vulkan continues to be a very important part of architectural visualisation, with applications including Twinmotion, Lumion, Enscape, Unreal Engine, LumenRT and others.

Of course, the boundaries between real-time 3D and ray tracing continue to blur. In fact, out of the list above only Lumion and Twinmotion are yet to support RTX, although it should be coming to Twinmotion soon.

To test frame rates, we used a combination of monitoring software including FRAPS, Xbox Game Bar and MSI Afterburner. We only tested at 4K (3,840 x 2,160) resolution. At FHD (1,920 x 1,080) resolution this class of GPU simply isn’t stressed enough. In Autodesk VRED Professional we tested our automotive model with AA set to ‘off’ and ‘ultra-high’.

Considering that this pro viz application used to only really run effectively on Nvidia’s ultra-high-end professional GPUs, it’s quite astounding that the Nvidia RTX A4000 – a sub $1,000 card – delivered over 30 FPS at 4K resolution with medium anti-aliasing. In saying that, those really pushing the boundaries of automotive visualisation will still likely need the top-end Nvidia RTX A6000, especially for high-res VR workflows.

In Unreal Engine we used two datasets: an arch viz interior of a small apartment and the Automotive Configurator, which features an Audi A5 convertible. Both scenes were tested with ray tracing enabled (DirectX Raytracing (DXR)) and without (DirectX 12 rasterisation). The results were pretty much as expected, with good scaling between all the GPUs with DirectX 12 rasterisation. With real-time ray tracing enabled, performance naturally takes a hit in general, but the Quadro P4000 really suffers without any RT cores.

We also tested with VRMark, a dedicated Virtual Reality benchmark that uses DirectX 11 and DirectX 12. In the DX12 test both GPUs came in second to AMD’s Radeon Pro W6800 (see page WS20).

‘‘ The performance leap from the Quadro RTX 4000 to RTX A4000 is hugely impressive, and the step up from the four-year-old Quadro P4000 is simply phenomenal ’’

CAD and BIM

The Nvidia RTX A4000 and RTX A5000 are overkill for most CAD and BIM applications and are unlikely to give you significantly better 3D performance than more mainstream GPUs like the Nvidia Quadro P1000 or P2200. However, CAD applications are changing and, in the future, should be able to make much better use of the plentiful power of higher-end GPUs like the RTX A4000 and A5000.

In addition, it is important to note that both GPUs will be certified for a wide range of pro CAD / BIM applications, which is important for some firms. This is especially true for enterprises that buy 100s or 1,000s of workstations from large OEMs like HP, Dell and Lenovo and want assurance that the GPUs will be stable and that they will be properly supported by the software developer. Certification is a major reason why

Solidworks Visualize 2021 SP3 (Iray)
1969 Camaro car model (denoising disabled)
1,000 passes, accurate quality, 4K (3,840 x 2,160) resolution
Render time (secs) (smaller is better)
Nvidia Quadro P4000: 1,029
Nvidia Quadro RTX 4000: 349
Nvidia RTX A4000: 211

Solidworks Visualize 2021 SP3 (Iray)
1969 Camaro car model (denoising enabled)
100 passes, accurate quality, 4K (3,840 x 2,160) resolution
Render time (secs) (smaller is better)
Nvidia Quadro P4000: 105
Nvidia Quadro RTX 4000: 35
Nvidia RTX A4000: 22

Luxion KeyShot 10 benchmark (GPU)
Relative performance to reference system (bigger is better)
Nvidia Quadro P4000: 5.60
Nvidia Quadro RTX 4000: 32.94
Nvidia RTX A4000: 62.34
Nvidia RTX A5000: 82.60

V-Ray 5 image courtesy of Toni Bratincevic

some firms choose Nvidia’s pro-focused RTX GPUs over their ‘consumer’ GeForce counterparts, so they can confidently use applications like Revit, Solidworks, PTC Creo, and Siemens NX alongside more viz-focused tools like Chaos V-Ray, Enscape, Luxion KeyShot and Solidworks Visualize.

Solidworks 2021 can make better use of powerful GPUs than most CAD applications but it is still CPU limited to some extent, so the performance benefit the new cards give you isn’t as big as you’d get from a dedicated real-time viz tool.

In the SPECapc for Solidworks 2021 benchmark we saw a small improvement, generation on generation, in the shaded with edges graphics test. The Nvidia RTX A4000 was 1.10 times faster than the Quadro RTX 4000 and 1.44 times faster than the ‘Pascal’ Quadro P4000.

With RealView, Shadows and Ambient Occlusion enabled we saw a bigger benefit over older GPUs. The Nvidia RTX A4000 was 1.16 times faster than the Quadro RTX 4000 and 1.57 times faster than the Quadro P4000.

We were unable to test the Nvidia RTX A5000 as Solidworks 2021 SP3 did not recognise the card. We expect this to be fixed in SP4, out soon.

Conclusion

With the new Nvidia RTX A4000 and A5000, Nvidia has made its ‘Ampere’ GPU architecture much more accessible to a wider audience. In particular, we see the sub $1,000 Nvidia RTX A4000 hitting the sweet spot for designers, engineers or architects that want a pro viz capability in their workflow.

The performance leap from ‘Turing’ to ‘Ampere’ (Quadro RTX 4000 to RTX A4000) is hugely impressive. In real-time 3D, a 45% to 60% boost, generation on generation, seems typical, with even bigger gains in real-time ray tracing when the enhanced RT and Tensor cores come into play. The step up from the four-year-old ‘Pascal’ Quadro P4000 is phenomenal, especially for rendering.

Equipping the RTX A4000 with 16 GB of memory is very significant. While we often see models/scenes that surpass 8 GB (the capacity of the previous generation Quadro RTX 4000), scenes that are 16 GB and above are certainly less common, and more the preserve of viz specialists than most architects or product designers who use standard materials and assets.

For viz workflows that need lots of memory, Nvidia has strong competition from the 32 GB AMD Radeon Pro W6800, but in less demanding workflows Nvidia’s biggest competitor in pro graphics is currently itself.

The new 12 GB ‘consumer’ GeForce RTX 3080 Ti, for example, might have half the memory of the Nvidia RTX A5000, but offers more performance on paper for half the price. Nvidia even has a GeForce Studio driver for applications including Enscape, Unreal Engine and V-Ray.

Despite the obvious attraction of Nvidia’s consumer GPUs, Nvidia’s ‘A’ class models should continue to find favour in large firms and enterprises that buy in volume, want more memory, consistent supply, pro viz features or the assurance of certification.

Nvidia still has some work to do to flesh out its Ampere family. While mobile workstations already have entry-level RTX A2000 and A3000 GPUs, there’s no equivalent for desktops.

The AEC industry would certainly welcome a sub $500 pro RTX GPU to replace the Pascal-based Quadro P2200, which is now long in the tooth. In years gone by, we would have expected to see a desktop RTX A2000 before the end of 2021, but with ongoing supply challenges and high demand, things are very hard to predict right now.

Chaos Group V-Ray 5.0 benchmark

V-Ray GPU CUDA
vpaths (calculations per minute) (bigger is better)
Nvidia Quadro P4000: 290
Nvidia Quadro RTX 4000: 634
Nvidia RTX A4000: 1,025
Nvidia RTX A5000: 1,299

V-Ray GPU RTX
vrays (calculations per minute) (bigger is better)
Nvidia Quadro P4000: N/A (no RT cores)
Nvidia Quadro RTX 4000: 919
Nvidia RTX A4000: 1,559
Nvidia RTX A5000: 2,128

For more performance data see charts on page 20 as part of the in-depth review of AMD Radeon Pro W6800 with 32 GB of on-board memory

Dell U4021QW UltraWide

Dell’s UltraWide monitors are well-regarded in the design and engineering industry. Al Dean takes a look at the latest product, which targets the home worker who needs that little bit extra when it comes to screen real estate

The way we look at our computing hardware has changed over the last 15 months. After all, many of us now find ourselves more regularly interacting with others through a screen, rather than face to face – plus we’re stuck working in our own homes.

With these limitations in mind, the idea of being hunched over a laptop screen every day isn’t appealing, even one with a 17-inch display. It’s just not good for our working practices, or our spines.

As a result, the idea of a display device that supports better posture, gives you a lot more pixels to play with and which doesn’t look like it’s been mandated by a corporate IT department, is appealing.

This is where the new Dell UltraWide U4021QW display has some real strengths. Having spent the last two months in close contact with the 40-inch 5,120 x 2,160 (5K2K) resolution display, I can share what day-to-day life is now like.

Set up

The first steps are to attach the stand, adjust it for height and tilt, then figure out connectivity. Here, you have a number of options: USB-C (or Thunderbolt), DisplayPort or two HDMIs. While the Thunderbolt and DisplayPort options give you the full 60Hz refresh rate, it’s worth noting that this steps down to 30Hz for HDMI.

My personal choice was to use DisplayPort to connect to my trusty desktop workstation, and the powered USB-C port to connect to my MacBook Pro – the benefit of the latter being that I don’t need a separate power cable, as the MacBook draws less than the 90W the monitor can deliver. However, most workstation-class laptops will need way more than this.

The set-up process is pretty slick – with one caveat (one that says more about the author of this review than it does about Dell’s user experience team).

You scroll through the options on a small menu to the lower right of the screen using a joystick. I completely missed this first time round, so spent a good 15 minutes switching the monitor off/on and jabbing what I already knew was not a touchscreen device. You have been warned!

Once you’ve updated your resolution settings and installed the Dell Display Manager application, you can then start to explore what the 40-inch 5,120 x 2,160 display feels like to use. Personally, I also like to calibrate a new display every week or so for the first month or two, just to ensure that the panel is as close to accurate as possible. I use DataColor’s Spyder device for this.

In use

My usual set-up is two 27-inch monitors, both running at HD, rather than 4K. By comparison, and in terms of screen real estate, the U4021QW is the equivalent of one and a half, but it’s the 5K2K resolution that really makes it sing. It’s clear, crisp and represents colour pretty accurately. The calibration I ran only had to tweak the settings a little.

Compared to running two monitors, I don’t miss the separation of the two displays and found that window management wasn’t a concern, even if you’re essentially losing half a monitor. If you’re accustomed to using a single display, then this is going to feel incredibly expansive.

Product spec
■ 40” LED-backlit LCD curved monitor
■ 5,120 x 2,160 (WUHD) resolution, 21:9 aspect ratio
■ 946.6 mm x 248 mm x 457.8 mm (including stand)
■ Pixel pitch 0.1815 mm
■ Frequency 60Hz (via DisplayPort) / 30Hz (via HDMI)
■ Brightness 300 cd/m²
■ 2 x HDMI, DisplayPort and Thunderbolt 3
■ 100mm x 100mm VESA mount
■ Height adjustment 120mm, swivel 60, tilt -5/+21
■ 13.8 kg
■ 3-year Advanced Exchange Service and Premium Panel Exchange to on-site
■ £1,610 (Ex. VAT)
■ dell.com

If you’re a Windows user, then the Dell Display Manager application is useful for zoning up your display and having application windows snap to known positions. You can set this up, then save it as a preset so that your work environment is always the same. I only wish there was a similar tool for macOS.

Managing two machines

Modern, well-designed monitors (rather than consumer-grade displays) come into their own in the way they tackle the needs of professional users.

An excellent example is how this system manages inputs. I often run both the Mac and the Windows workstation at the same time, so being able to switch inputs quickly – using the on-screen menu and rear-mounted joystick/button combination, rather than too much pressing of cryptic buttons – is very useful.

There are also some nice options to explore: split-screen and picture-in-picture (PiP). The former means you could split the screen into two halves, with each machine represented in its own half.

The PiP option also means you can have the bulk of your screen showing one machine, and a smaller quarter or eighth of the screen showing what’s going on in the other.

In conclusion

The Dell U4021QW is a great display. Whether you’re looking for a replacement for a single or dual-monitor set-up, or perhaps to replicate your office set-up at home, it has pretty much everything you need. It looks pretty slick compared to many, but also offers flexibility of inputs, window control and more.

Ultimately, the quality of the display is what makes or breaks this hardware, and on this count, it’s a winner. Clear and crisp, it also represents the full colour range accurately.

To be honest, Dell might need to send around the heavies if it wants it back.

‘‘ Compared to running two monitors, I don’t miss the separation of the two displays and found that window management wasn’t a concern, even if you’re essentially losing half a monitor ’’

5K2K monitors: what other options are there?

LG 34WK95U-W

Screen size: 34-inch Price: $1,499 Web: lg.com

Notes: LG has been a favourite of ours for a while, having spent years with a trusty 28-inch LG CRT back in the day. The LG 34WK95U-W is a 34-inch ultrawide, but still manages to pack in 5,120 x 2,160 pixels. It strangely comes in 7kg heavier than the Dell.

LENOVO THINKVISION P40W-20

Screen size: 40-inch Price: $1,699 Web: lenovo.com

Notes: When we tested out an early variant of this (the ThinkVision P44w), it really impressed, so we have no reason to doubt that fitting this form factor out with a 5K2K display will rock. A few tweaks, such as the phone stand in the base, are pretty nice too.

MSI PRESTIGE PS341WU

Screen size: 34-inch Price: $999 Web: msi.com

Notes: MSI describes this as a display for creators (read: We made it in white). It looks stunning and a decent colour gamut (98% of DCI-P3) means it should display colours accurately. Whether it’ll be those colours in two years’ time remains to be seen.

Hybrid working: what does it mean for AEC firms?

With many firms re-evaluating office space and working from home policies, we asked Adam Jull of IMSCAD about the role that virtual workstations can play – and the differences between VDI, public and private cloud

The pandemic has shifted the way we all work. As the dust settles and we try to get back to some sort of normality, a hybrid work model seems to be the favoured approach of employers and employees alike. But what does this really mean for AEC firms who have historically invested a lot in their offices / studios to provide great spaces for client facing and for collaboration amongst staff?

The nature of hybrid work means that staff will be in the office less and split their time between the office and home – or a remote location that is not the traditional office.

As a result, less office space is required, but will this translate into firms reducing their office footprint, which could save considerable cost, or will it lead to firms changing the way they use their existing office space?

The answer is both. Those firms that can downsize now, surely will. However, those that cannot (perhaps they have a longer lease they cannot get out of) will still likely change the way they use their existing space.

In both cases, it is likely that fewer desks will be needed, staff will be hot desking, and more collaboration and meeting spaces will be required. With staff working from home for two or three days a week, firms need to ensure they invest in the right remote graphics technology solutions so everyone can work from anywhere productively.

‘‘ As we start to come out the other side of the pandemic, and attitudes to the traditional office continue to change, firms are looking for more robust IT solutions to support flexible working ’’

The tech options

In the AEC space, the majority of designers use industry-specific applications that require a lot of resource so they can be used to their full potential. Investing in the right technology to facilitate this move to hybrid working is paramount.

VDI (Virtual Desktop Infrastructure) is a proven solution for giving design users the performance they require when working remotely. VDI technology enables you to deliver desktops via virtual machines (VMs) and the desktops are managed from a central server. There are several ways for firms to deliver these virtual desktops to their users.

On-premise or on-prem VDI is deployed on your own infrastructure. This could be one server in your office or a private data centre, where you are responsible for the infrastructure and everything associated with it. A real positive of this sort of solution is that the servers can be provisioned to suit your use case, your specific application-mix and the individual user workloads.

Private cloud or hosted VDI is when virtual desktops are outsourced. They are deployed and hosted in a hosting company’s datacentre, on bare metal servers, and the hosting company is responsible for the infrastructure.

Learning by example: VDI and private cloud

Parkhill, a US architect and engineering firm with nine offices spread across Texas, Oklahoma and New Mexico, has deployed a successful VDI solution that gives 400+ creative professionals the freedom to work anywhere, on any device using all of their applications including the Autodesk suite.

Parkhill invested in an on-premise solution, with 16 servers running 430 desktops using Citrix. In light of the pandemic, Parkhill will tell you it has gained an advantage over its peers from a mobility standpoint and an IT management perspective.

The solution has come into its own during the pandemic, with nearly all staff working from home and now working two or three days a week away from the office.

On a smaller scale is a UK-based manufacturing firm that was looking to minimise future disruption on productivity, in terms of hours and days lost with users not being able to work to their full potential when they could not access the office during the pandemic.

The firm needed a solution to enable remote work for 50+ design users and chose a private cloud solution. This included three servers: two with two GPUs each, providing desktops for 20 users apiece, and a third with a single GPU providing desktops for the remaining 12 users.

Another example of a successful on-premise VDI solution is that of HUNT EAS, a New York-based engineering, architectural and surveying firm. Hunt had an existing virtual desktop solution in place, but it had never performed well, and the company was looking to upgrade from the existing four-server environment providing desktops for 65 users to one providing desktops for 90+ users.

The new environment consists of five servers and provides all users with great performance running AutoCAD, Autodesk Revit and other applications. It has proved invaluable over the last year through the pandemic, as well as when major snowfall affected New York last winter.
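Purely as an illustration of how numbers like these can guide early sizing discussions, the sketch below turns the user-per-GPU densities quoted in the two smaller examples into a rough server count. The densities are assumptions lifted from those case studies, not IMSCAD sizing guidance; real deployments depend on the application mix and individual workloads.

```python
# Back-of-envelope VDI sizing sketch. The densities are assumptions taken
# from the case studies above (roughly 10 CAD users per GPU, two GPUs per
# server); they are not vendor guidance. Parkhill's Citrix estate runs
# denser (430 desktops on 16 servers, ~27 per server), showing how much
# the right answer depends on the workload.
import math

USERS_PER_GPU = 10      # assumption drawn from the manufacturing example
GPUS_PER_SERVER = 2     # assumption: dual-GPU hosts, as in that example

def servers_needed(users, users_per_gpu=USERS_PER_GPU, gpus_per_server=GPUS_PER_SERVER):
    """Estimate how many GPU-equipped hosts a given number of design users needs."""
    gpus = math.ceil(users / users_per_gpu)
    return math.ceil(gpus / gpus_per_server)

for users in (52, 90):
    print(f"{users} users -> roughly {servers_needed(users)} servers")
# 52 users -> roughly 3 servers (matches the UK manufacturer's deployment)
# 90 users -> roughly 5 servers (matches HUNT EAS's five-server environment)
```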

Like the on-premise solution, firms get the same benefits of provisioning infrastructure to match their use cases, applications and workflows, with the extra advantage of not having to buy or manage the hardware.

Hyperscale or public cloud desktops – e.g. Microsoft Azure, AWS, Google Cloud – are where a firm pays for ‘instances’ of virtual desktops. While spinning up a new instance is easy, a VDI solution on the public cloud still requires extra underlying infrastructure and integration components to work. From the architecture and networking to the security, the customer is responsible for everything they build in the public cloud.

What are the benefits?

The benefits of these types of solutions are many – most importantly, furnishing your users with the performance they require when working both in the office and at home, or anywhere else for that matter. Historically, users have been reluctant to give up their physical workstations, as they have worried about not getting the same performance when using other technologies.

This was highlighted for many as they tried to work from home on hastily put together remote work solutions that did not give the performance they required when they suddenly had to vacate the office.

Your IP is obviously very important, and security is another great benefit of this type of solution. With all data being stored centrally, no data leaves the infrastructure.

With applications also kept centrally on the servers, all patches, upgrades and fixes can be managed as a single instance. The management of users, and their access to applications and data, is also centrally controlled by IT, which will result in a reduction in IT management costs.

With users not relying on their physical workstations and not having to be at their desks, firms are far more resilient to any interruptions. In addition to obvious pandemics, this could be due to bad weather stopping travel or a fire at the office. With a virtual desktop solution there is a built-in business continuity capability, meaning users can remain working and therefore productive no matter what happens.

What are the real costs?

Cost is obviously a major factor in determining the solution that you will go with, and the costs associated with each of these deployment methods will vary.

When looking at an on-premise solution, there is a large CapEx (Capital Expenditure) requirement, as the infrastructure needs to be purchased, racked and set up before you can start provisioning the desktops. Once fully functioning, the costs are minimal – yearly licence renewals and support.

With a private or hosted cloud solution, there will be an upfront fee to set up and then a monthly cost, keeping CapEx to a minimum and enabling a more manageable OpEx (Operational Expenditure) model.

When looking at a public cloud VDI solution there are no upfront costs – you just pay for the instances. However, you will need to add additional products and components to your virtual infrastructure, from domain controllers to VMs, and each comes with an associated cost on top of the cost of each instance.
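To make the CapEx/OpEx trade-off easier to reason about, here is a deliberately simplified cost sketch. Every figure in it is a made-up placeholder (there is no real pricing in this article to draw on), so treat it as a template for your own numbers rather than a comparison of actual offerings.

```python
# Simplified, hypothetical cost model for the three deployment options
# described above. All figures are placeholder assumptions, not real prices.

def on_prem(years, capex=120_000, annual_support=15_000):
    """Large upfront CapEx, then minimal yearly licence renewals and support."""
    return capex + annual_support * years

def private_cloud(years, setup_fee=10_000, monthly_fee=6_000):
    """Small set-up fee, then a predictable monthly OpEx charge."""
    return setup_fee + monthly_fee * 12 * years

def public_cloud(years, users=50, instance_hourly=1.50, hours_per_month=160,
                 extra_infra_monthly=2_000):
    """No upfront cost, but per-instance charges plus the supporting
    infrastructure (domain controllers, VMs, networking) you must add."""
    instances = instance_hourly * hours_per_month * users * 12 * years
    return instances + extra_infra_monthly * 12 * years

for y in (1, 3, 5):
    print(f"{y} year(s): on-prem {on_prem(y):,.0f}, "
          f"private cloud {private_cloud(y):,.0f}, "
          f"public cloud {public_cloud(y):,.0f}")
```

Plugging in real quotes for your own user count and usage hours is the quickest way to see where the break-even point falls for your firm.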

One step at a time

Not all firms move to VDI in one go. Some firms continue to utilise their physical workstations and often use the VDI solution to facilitate remote work. Often they will sweat the workstation until end of life, by which time the user is accustomed to working on a virtual desktop and the change to doing so full time is not so dramatic.

Another facet to this is those firms that are starting small, providing a VDI solution for a single team or for a specific project – in essence using this as a pilot. Some firms will invest in a single server, either on-premise or in a private cloud, specifically as a pilot project, and get users from all over the firm to test it. Both are sensible approaches.

There are also firms employing a hybrid approach, as it is a big ask to migrate to the cloud in one single leap. You also need to factor in that public cloud providers might not be the most cost effective for the types of applications and workloads that AEC firms are using.

Closing thoughts

When the Covid-19 pandemic first hit, many AEC firms had to quickly adapt existing IT to allow architects and engineers to work effectively from home. Data centralisation was key, so remoting into office workstations quickly became a temporary fix for some.

Now as we start to come out the other side, and attitudes to the traditional office continue to change, firms are looking for more robust IT solutions to support flexible working. And proven technologies like VDI present a huge opportunity for those looking to fully embrace a truly hybrid future.

AEC Magazine’s cloud workstation roundup

Amazon Web Services (AWS) offers two cloud workstation instance types: G4dn uses Intel CPUs and Nvidia GPUs, while G4ad features AMD CPUs and GPUs. G4ad uses recent AMD Radeon Pro GPUs but doesn’t offer the same flexibility as Microsoft in terms of how GPU resources can be allocated.

BOXX has a different take to other cloud workstation providers insofar as it takes its high-performance desktop workstations and makes them available to users of CAD and demanding viz applications over a 1:1 connection. The service is currently focused on the US, but is coming to the UK soon.

Cloudalize has lots of experience in the AEC sector with products like Revit. It offers a ‘pay-per-use’ or an ‘unlimited-usage’ subscription model and a big choice of cloud workstations out of its private datacentres.

Google offers a big choice of GPUs for its virtual workstations including the Nvidia P4, T4, and P100 GPUs. Several come preconfigured in the Google Cloud Marketplace.

Microsoft Azure NVv4 virtual workstations stand out because they offer huge flexibility in how GPU resources can be allocated. The AMD Radeon Instinct MI25 GPUs can be virtualised at a hardware level, so you only pay for the GPU resources you need, as opposed to many other cloud workstations which use a whole GPU.

Nutanix Frame is a Desktop-as-a-Service (DaaS) solution designed specifically to deliver 3D applications or desktops to any device. The flexibility of the Frame service allows firms to select their preferred infrastructure (Azure, AWS, GCP or hybrid environments powered by Nutanix AHV) to allow them to reach users in more than 200 countries. The company places a big emphasis on ease of deployment.

Scan, best known for its desktop workstations, teamed up with Ebb3 for its Scan 3XS Cloud Workstation. Read this AEC Magazine article to learn more tinyurl.com/scan-ebb3

Tehama offers an enterprise desktop-as-aservice with a big emphasis on security. It allows firms to create ‘secure virtual rooms and desktops’ on the cloud.

Workspot aims to simplify the deployment and management of cloud workstations by delivering virtual desktops as a service. This includes Windows 10 desktops and Windows 10 cloud workstations, including the AMD-based Microsoft Azure NVv4 Virtual Machines (VMs). The company has extensive knowledge of the AEC sector.


Waiting for Augmented Reality

Design visualisation technology has seen huge leaps and bounds in the last five years, from realtime ray tracing to the availability of low-cost VR headsets. So, what happened to Augmented Reality? It seems to have got lost on the way to the party. Martyn Day searches for answers

Patience is a virtue. I guess it’s a virtue that technology fans are not renowned for. We always want the best processors, fastest graphics, and highest resolution displays in the smallest form factors. The industry does very well to try and keep up with our demands with regular new dollops of power every year, but sometimes it promises exciting new worlds and capabilities that always seem to be a few years away.

This all may sound very “it’s 2021 where is my flying car?” but I can’t help but feel that Augmented Reality (AR) headsets like the Microsoft HoloLens or Magic Leap 1 still seem experimental, too expensive and are so capability-constrained that they ultimately disappoint.

Looking at Virtual Reality (AR’s more popular cousin) there are a number of options, from commodity consumer-priced headsets such as the Oculus Quest (£299), HTC Vive (£599) and Valve Index (£1,399), to high-end, high-fidelity enterprise solutions like the Varjo VR-3 (€3,195). While it was years in the making, and it took several decades to shrink the necessary hardware from the size of a room to the size of a phone, VR headsets have since rapidly dropped in price and become commodity items.

Some no longer require incredibly powerful workstations and many have managed to become free from cables and tethers. AR headsets, despite billions being spent in development by many companies, still have high-end price tags, tend to be fairly bulky and are out of reach for most in the AEC industry. All this leaves me wondering if AR will ever go mainstream and become a commodity, beyond holding up a phone or an iPad to get an augmented image overlay.

While there is a lot of talk about Apple working on some lightweight AR glasses, these have never appeared, despite always being a year away. And the danger here is that with Apple mobile solutions, all products in the ecosystem seem to be extensions of the phone. The Apple Watch is an extension of your phone, the iPad is just a big phone, and I fear that the Apple glasses will primarily be a display for notifications and other personal digital assistant features.

Apple has pushed out the boundary for AR on its mobile devices with built-in LiDAR. However, most of the applications and commercial uses which I have seen have tended to merely offer novelty value.

AR is in need of a killer set of spectacles. To find out what the problems were, and if any breakthrough technologies were in the pipeline to change my perception, I talked with industry veterans Martin McDonnell (Soluis, Sublime, Edify), Chris Bryson (Sublime, Edify) and Keith Russell (Magic Leap, but formerly Autodesk, Virtalis and an industry XR consultant).

‘‘ [With edge computing] the headset can be super lightweight. You don’t need a big battery, you just need good display technology, 5G and some cameras. All the AI, all of the processing is taken off the device ’’ Chris Bryson, Sublime, Edify

One of the fundamentals pushing the commoditisation of VR was the huge market for application in games.

Magic Leap’s vision for Augmented Reality (AR) in architecture

I asked McDonnell if this was a reason the AR headsets were not as advanced?

Martin McDonnell: AR does have the market appeal, but the current hardware can’t deliver at the price point. But when you can attain it, and the product experience levels up one more generation in terms of field of view, the usefulness will open up many, many, many markets around AR.

It’s the internet multiplied by mobile as a level of disruption. It’s definitely, in my opinion, going to change the game massively. The problem is it needs to be lightweight and cheap. It needs to be a device that has an 8-hour battery life. All of those boxes need to be ticked. When that happens, then everything opens up and it isn’t just enterprise that can afford it.

Chris Bryson: Microsoft [with the HoloLens] has come into the market and included the whole SLAM [Simultaneous Localization and Mapping] technology, all of the AI, the onboard processing, as effectively a proof of concept to enable people to develop high-end AR apps.

This shows you what could be done. The issue is that everything else, the display technology itself, the battery technology, is two or three generations away from doing the same thing that Facebook has just done with the Quest 2. I’d say there’s five plus years to get there.

Now who’s got the deep enough pockets to pay for all that? Well, interestingly, you know Snap (as in Snapchat) just bought WaveOptics (enhancedworld.com) who is a provider of AR display technology based in Oxford. Its key technology is using semiconductor processing, high volume processing to make the displays. Microsoft and Magic Leap use waveguide optical technologies, which are almost bespoke, almost one offs. WaveOptics have got a volume manufacturing process going for AR glass and Snap has already proved it’s interested in productising AR glasses. These commodity displays solve one of the key problems and that’s cost.

As to CPUs, Qualcomm is still the only game in town in terms of processors for the masses, while Apple makes its own silicon. Qualcomm is the provider of the processor technology, the 5G technology that’s going into the Oculus Quest 2, but really in the future, instead of having all of that processing on the device, the next step is to use 5G, so you can have that processing in what we call Edge Computing, off the headset, and then you’re just streaming that data to and from the headset.

This means the headset can be super lightweight. You don’t need a big battery, you just need good display technology, 5G and some cameras. All of the cool stuff, all the AI, all of the processing is taken off the device.

Remote control

Removing the processing overhead from the device and using 5G to stream data is a very interesting concept. 5G can deliver in excess of 100 Megabits-per-second (Mbps) and enable headsets to connect to cloud-based compute power. McDonnell and Bryson have already carried out a simple proof of concept on the Glasgow subway 5G network.

Martin McDonnell: It’s not so much bandwidth, but latency that’s the problem. We could almost get there with 4G today, in terms of throughput with compression, but actually super low latency, sub-20 millisecond latency, is what’s needed. At that point everything opens up, because with the types of BIM models we’re talking about, massive CAD models, all of those can be on your server. You don’t need to then try and lightweight and optimise them to run on a mobile phone type of processor.

That, to me, is the solution looking forward, as you’re always going to prefer to have a big, beefy bit of processing on-site somewhere, and with 5G, as soon as that becomes commonplace, or at least installable where you need it, then we can go really lightweight on the user’s device.

Local delivery

With so many companies already having powerful desktop workstations, especially with GPUs like Nvidia RTX, could local workstations be used instead of expensive high-spec cloud instances?

Martin McDonnell: 100%, yes. We are enabling that for a VR customer right now and they want to go with Quest 2 or Pico G2 headsets, but you can pretty much throw up a custom 5G solution that will give you access to your local RTX power. It’s mind-bendingly awesome when you can throw around colossal datasets and see them on a sub £300 headset. We’ve got a few problems with Oculus, which is owned by Facebook and there are privacy concerns.

Chris Bryson: CloudXR from Nvidia is another solution which enables access to your RTX on your beefy machine, which can be tethered to a 300 quid headset.

You could use Wi-Fi, but the nice thing is that 5G has low enough latency to be able to do that too. So in theory, one server, one card can do multiple users, maybe up to four users.

With infrastructure that’s going to be around in the next year or two, we will be able to do some really amazing things and that’s what we’re targeting with Edify, initially for VR, immersively being able to bring in big CAD models, with a server probably on premise with WiFi. And then, very quickly, to be able to expand that to AR with edge compute, which is basically the cloud as long as the datacentre is in your country or near your setting.

In the future, 5G service providers will make much smaller base stations that will also go within offices. So here - offices, factories, large enterprises - will be able to deploy their own 5G on premise as well.

The whole 5G protocol was designed with the specification of ‘a million subscribers within one square kilometre in a city, getting up to a gigabyte of bandwidth each’. Now, that’s not been achieved yet. The concept was designed with countries like South Korea in mind, with really dense coverage of people that need low latency, high bandwidth mobile. As soon as the AR glasses are cheap enough, you could have everybody walking around in a city, and all of that cool processing that currently takes a £4K HoloLens to do – SLAM, object recognition, etc. – you’ll be able to do on the beefier processors in the datacentre.

Development velocity

McDonnell and Bryson come from an application development background. I wondered if part of the reason AR was embryonic was because the developer tools are not there yet?

Martin McDonnell: I think they are there. I think the problem has somewhat been pre-solved by VR. VR is a really good test ground for AR applications, and with the Varjo-type crossover headset (the XR-3 mixed reality device) we can start in VR and move to AR, and we get to continue to test. We see VR to AR as a continuum, the same data flowing through XR experiences. I believe in the future the VR and AR terminology goes away and it will be XR or something, and we’ll just talk about that digital reality, a digital overlay.

I think what’s missing is a massive body of knowledge and skill and experience in UX design. If we think back to when the web arrived in the 90s, everybody lifted their desktop page layouts and slapped them on the web. We had page layouts from magazines all over the Internet for years until someone said ‘wait a minute, I’m on a screen, I’m scrolling, this is different’. Eventually the penny drops, and we get things like apps, and we get a whole different form of experience on the screen. I think that’s the journey we need to go on, particularly for AR.

‘‘ [AR] is definitely going to change the game massively. The problem is, it needs to be lightweight and cheap. It needs to be a device with an 8-hour battery life ’’ Martin McDonnell, Soluis, Sublime

Hardware limitations

One of the major technical limitations of today’s generation of AR headsets is the limited field of view in which overlays can be displayed. Human eyes are amazing things with a very wide field of view. The clipping point where the computer graphics stop is all too apparent and hampers the immersive experience. I asked whether the limited field of view of all AR headsets stops people from adopting them?

Martin McDonnell: I don’t think it really matters yet, because they just haven’t got a lightweight, cheap device that everyone could have a go at. I think the field of view on Magic Leap and HoloLens 2 is OK for a bunch of tasks. It’s perfectly functional and useful. But not as an immersive experience, to get close to the kind of feeling of emotion you get from VR, with digital objects overlaid in the real world.

Chris Bryson: That’s really difficult to do! It’s definitely a ‘Scotty, can you change the laws of physics?’ problem. One technology that might work is micro LEDs, but they are right in their infancy. Facebook bought a couple of small micro LED companies – one actually from Scotland – to create tiny projectors, because the smaller your projector is, the easier it is for you to then control the light and make those wider viewing angles. It’s now kind of a land grab for some of that key fundamental technology.

Battery technology is never going to improve that quickly, as far as I can see. Again, it’s about offloading all that processing to the cloud, and then you don’t need to use so much power.

Martin McDonnell: I actually think the right solution for meeting the short-term needs will be to use the phone. We accept the weight and size of the phone in our pocket these days. Using that to drive your glasses is a really obvious one – it’s effectively what Magic Leap did, but with a physical tether. I think you’ll carry a battery in your pocket to power the device.

That point brings me to the conversation I had with Keith Russell, director, enterprise sales EMEA at Magic Leap. Russell is an industry visualisation veteran, having worked both for software / CGI developers and with mixed reality, VR and AR hardware firms.

Magic Leap is an infamous company in AR for a number of reasons: having raised $3.5 billion in funding with significant input from Google, Disney, Alibaba and AT&T; being notoriously secretive, yet loving hype; having a real character of a CEO in Rony Abovitz; being almost sold for $10 billion; and then hitting a wall when the long-hyped product, a pair of $2,300 goggles, eventually shipped.

Magic Leap became something of the ‘WeWork of AR’. Rather than describing Rony Abovitz, it’s probably best to watch his TED talk, ‘The synthesis of imagination’, which is probably one of the weirdest TED talks ever given (tinyurl.com/MagicLeapTed).

However, the device it did produce was beautiful, though perhaps not quite as radical and game-changing as the company had promised – it still had a narrow field of view and expensive lenses, and the CPU and GPU came in a ‘Lightpack’ puck, which was somewhat underpowered. It became a popular tool amongst developers, less so a commercial success.

In 2020, the company slimmed down and got in new management, headed up by Peggy Johnson (formerly executive Vice President of business development at Microsoft). Magic Leap secured new investment and has since re-focussed on the enterprise market, aiming at applications in the medical, training, factory and construction markets. Its next generation device is currently in development and will be available in 2022.

It promises to challenge existing form factors, being half the size and half the weight of the first-generation device, with double the field of view. That really would be an interesting device, but it’s not going to be cheap.

Architectural barriers

AR has certainly piqued most interest in construction and field work, as opposed to design and architecture. I asked Russell why that was?

Keith Russell: In terms of AR adoption in AEC, it’s key to understand the different groups and types of companies within the AEC market. I think architects, generally, have been slower to adopt the technology, partly because of their need to pick a project to use it on, and the consideration of how they will bill it to a project or client. It’s a bit chicken and egg: their typical billing method makes them cautious of adopting new technology without a project to apply it to.

With engineering and construction companies, however, it’s different.

1 & 2: VisualLive, recently acquired by Unity, uses the Microsoft HoloLens to overlay CAD/BIM models onto the construction site to review the design, validate against the existing conditions, verify install completion, create reports, and collaborate in real time. 3: Trimble SiteVision can be used to place and view georeferenced 3D models, above and below ground, with centimetre accuracy using Global Navigation Satellite System (GNSS) technology

If there’s a tool that can be proven to save time or money on a project, or does something faster, they’re much more receptive because they’re not billing it; instead it directly helps them save costs. They tend also to have a budget for internal tools and development.

Hence, if you get the chance to explain the advantages of AR collaborative meetings vs sending six people to site to hold a meeting, it’s an easy time and cost saving.

When Covid restrictions came into play, it wasn’t possible to send six people on-site, so that really accelerated the adoption rate. The industry started reaching for tools that allowed them to virtually meet, or at least have one person on-site guiding the conversation with other remote-based colleagues. Therefore, we saw a huge interest in smart glasses to use as a connected device for the ‘see what I see’ use case. It’s not augmented reality, but a simple ‘assisted reality’ use case that allows you to do a multi-person Teams call on a hands-free device.

This has led to a lot of companies and teams in construction getting very comfortable with wearable devices, so now we see a second wave of interested companies as they look to see what else is possible with wearable devices and want to move up to full augmented reality solutions.

Once you have a full 3D augmented reality device it is possible to project 3D BIM models onto your surroundings at scale. You can selectively look at the HVAC ducting, the electrical conduit runs, the sprinkler systems etc, and turn everything on and off selectively whilst walking through a full-size model.

Then you can bring in remote-based colleagues to view the model and effectively hold an MS Teams or Google Meet call to discuss what you are all seeing. That is demonstrably a considerable time and cost saving, plus companies don’t need to send their experts out all over the country for a series of single visits. It’s possible to visit multiple sites in a day from a central location.

AR is also a more adoptable technology for a wider team than, say, VR has been. Because you carry on seeing the world around you, you don’t feel isolated and you are not concerned by trip hazards, so it is a more comfortable experience. Hence a wider group are happy to adopt and use the technology, even senior executives who have previously been reticent to try VR devices.

The first time I took a mixed reality device into a company, one of the very senior guys came into the room to see what was going on, and immediately wanted to try it. We put a large-scale 3D model into the space in the boardroom and he was able to walk up to it and review the design, plus he could turn to his team and discuss the design. It meant he was completely comfortable with the process and didn’t feel isolated.

Barriers to mainstream

Compared to VR headsets, AR headsets are still expensive. I asked Keith what will drive adoption of AR?

Keith Russell: There’s a great difference between the adoption of an enterprise device and a consumer device. With a consumer device you are convincing customers to spend their own money for a set of features. For an enterprise device it is different. What I mean by that is, if your day-to-day job could be made easier by using an AR device, and the company is going to purchase it for you, and that company is going to train you on how to use it, you’re probably going to adopt it. Therefore, the cost of the device is only relevant to the company purchasing it on your behalf, and if they can see an immediate return on that investment it is an easy win all round.

The enterprise world is adopting AR en masse. For example, in production environments we’re seeing AR deliver worker instructions and train workers on the assembly line, showing them how to assemble a set of parts, which sequence the seals and washers go on, the torque settings for the bolts, or the sequence that you have to put the cables in, all via content-rich AR instructions.

Also it’s not just about the device, it’s about the solution that solves a problem or saves a cost. Users shouldn’t keep fixating on the device, but look at the use cases and the solution. Our approach at Magic Leap is to reach out to companies and ask what problems we can help them with. How can a content-rich AR experience enhance and augment their workforce or their design teams? Can we bring together teams in a 3D collaborative meeting environment where you can discuss ideas and concepts with co-presence?

Those users also want that solution to connect to their back-end systems. It has to connect into their device manager, it has to connect into the Wi-Fi and IT systems, it has to be an enterprise-ready solution.

Currently I still see adopters worrying about the form factor of the device, but forgetting that it’s going to evolve. Magic Leap has already hinted that our next device will be lighter, with a bigger field of view, because hardware will always improve. As the form factor evolves, the device itself is not what users should be concerned about, because they are buying into that total solution.

‘‘ It’s not just about the device, it’s about the solution that solves a problem or saves a cost. Users shouldn’t keep fixating on the device, but look at the use cases and the solution ’’ Keith Russell, Magic Leap

Conclusion

The augmented reality solutions that have been developed to date have obviously found a home in more practical engineering and construction firms, which require information at the point of need. Despite the drawbacks of the current generation of AR glasses, they are good enough for fieldwork. One only has to look at Trimble’s integration of the HoloLens and a hard hat, although these are far from commonplace on building sites today.

It seems we need some new magical technologies and more billions spent before AR goes mainstream in common usage and here it might well be Apple creating glasses to extend its iPhone ecosystem. But as with all Apple products, these by definition will not be cheap.

Hopefully the technology developed will see commodity AR glasses in the next five years, with the development probably being paid for by one of this decade’s popular social media platform companies.

I think it’s especially interesting to think of 5G as being such an essential tethering technology, enabling lightweight, low-power mobile devices which can summon infinite compute power, wherever you are. Well, in saying that, now that I’m living in the Welsh countryside, I actually might have to wait a couple of decades.

Campfire Mixed Reality

It takes guts to enter a market where so much has been invested, with little commercial success. Campfire is a US start-up that has developed an AR and VR headset to take on the likes of Microsoft’s Hololens. By Martyn Day

Based in San Mateo, California, Campfire is a mixed reality start-up headed by CEO Jay Wright, an industry veteran. Having raised $8 million in venture capital, the company has recently emerged from stealth mode to give demonstrations of its augmented reality and virtual reality system. The unusual name comes from the idea of having a team sat around the campfire, having a conversation, sharing the same experience, brainstorming and setting the world to rights.

The company has a unique approach to delivering augmented experiences. It combines a headset with a slick tracking device / target and a special accessory which turns smartphones into motion controllers. The software also enables multiple users, either in the same room or at remote locations, to see a design, enabling collaborative design sessions.

Wright has a long history in VR / AR development. He is a former vice president at mechanical CAD giant PTC (Pro/Engineer, Creo, Onshape), where he looked after the company’s VR and AR ambitions, especially its Vuforia software development kit (SDK).

Wright was also VP of Vuforia at Qualcomm, prior to PTC’s $65 million acquisition in 2015, and built the SDK

which was used by more than 450,000 AR developers. In many ways, Campfire is the culmination and combination of those experiences.

‘‘ It’s amazing that a company with an $8 million investment could come up with a product to compete against the bottomless pockets of Microsoft and the billions spent on Magic Leap ’’

As a product, the Campfire tethered headset (USB-C) is beautifully designed. It offers a 92° field of view and can be converted from augmented reality to virtual reality mode through the addition of a visor to block the AR transparency. The design itself harks back to the low-cost Meta 2 headset design, which offered 2,560 × 1,440 (1,280 × 1,440 per eye) resolution. The original developer of the headset, Meta Company, became insolvent in 2018. In fact, Campfire was originally called Meta View, which was set up by a VC firm called OTV to purchase the IP from Meta Company. This included the AR headset design and a portfolio of over 60 patents. OTV then head-hunted Wright from PTC.

The two unique takes that Campfire brings are the ‘Console’ and the ‘Pack’. The Console is a cross-shaped target which sits on your desktop. It tracks the headset and also acts as the centre point for the holographic overlay. In many ways you could see it as being like a Polycom conference call unit: it acts as the benign focus for augmented reality sessions.

The Pack is a device which clips to your phone, turning it into a motion controller and pointer. The downloadable phone app provides controls for manipulating the model. This appears to be a sensible use of something everyone has at hand. The software also supports Zoom and Teams.

The system comes with a software package to put it all together, called ‘Scenes’. It can load in over 40 CAD and 3D file formats and is used to author the data prior to the design review session.

It can be used to throw up a quick and dirty model for fast feedback, or to generate slick, elaborate designs for customer presentations. Wright describes Scenes as “Google Docs and PowerPoint.”

Other Campfire users are sent the authored environment and can then participate using their own Campfire headset, or tablet, phone or desktop. Using the Pack, each user can point to, select and highlight geometry in the scene. Other session participants can be seen as ghostly avatars with name labels. It’s also possible to add markups within the sessions.

The Campfire system is due for launch in Autumn this year and looks like it will be a subscription-based service. It’s currently being beta tested through a ‘pioneer’ programme.

Conclusion

Campfire’s primary interest is going to be in attracting users in product design. Although it has aspirations in the AEC market, all the demos we have seen have centred on the design of consumer goods.

Based on the colour palette and feature-tree interface of Scenes, it is very reminiscent of product design / engineering systems such as PTC Creo and Dassault Systèmes SolidWorks. We have yet to see any BIM models displayed on the device.

While there are few details on the business model or exact costs, we know it’s based on subscription. This could be an interesting, and yet another unique, part of Campfire’s go-to-market. We do not know if there is a fixed cost for the hardware and the software is the subscription element, or if it’s the entire ecosystem for rent. We will have to wait till after the summer to find out.

At face value, it’s amazing that a company with an $8 million investment could come up with a product to compete against the bottomless pockets of Microsoft and the billions spent on Magic Leap. However, in knowing something now of Campfire’s origins, it’s clear to see a lot more money has been invested on foundation technologies to get Campfire to this point.

While it is built on the shoulders (and bones) of previous investors, it does look like we are finally seeing advances in the AR field, albeit at an enterprise level.

AR has been a treacherous coastline for investment and development (see our conversation on AR with Soluis / edify and Magic Leap on page 40), and for that we respect any AR firm that makes it to the productisation phase.

■ campfire3d.com

edification for the masses

Covid-19 may have brought new challenges to the AEC industry, but many remain the same, such as attracting young talent to the industry and then training them effectively once they start. Could VR be the solution? asks Martyn Day

AEC firms don’t just face a skills gap, they also face a general lack of candidates. Looking at the UK, Brexit has ensured that the free flow of labour that used to make up the shortfall has dried up. New immigration rules make looking outside our borders expensive and time-consuming.

The ONS released statistics in 2019 showing that there are nearly 400,000 skilled construction trade workers aged between 50 and 65, all of whom are due to retire within the next 15 years. For an industry that has had horrendous trouble attracting young people to start careers, the deficit in the workforce is only going to get worse. Folks, let’s face it, we are looking down the barrel of a gun.

This challenge is not lost on Martin McDonnell, CEO and co-founder of UK start-up edify.ac (he also founded Soluis and Sublime). He is one of our industry’s biggest proponents of the use of VR and high-end visualisation in AEC.

At the launch of edify, McDonnell showed us a video, recorded in 2013, of his nine-year-old son collaborating with a friend online, using the 3D game Minecraft to design and model a building. “If we want the next generation to come and join our industry, it’s going to have to look a bit like, or maybe be slightly better than, Minecraft,” he said. “So that’s what edify is intending to be. That’s what we’re building.”

Born in the pandemic, edify is a new training platform, built on top of Unity, that combines gaming know-how with VR, video conferencing and user-made content to produce immersive and engaging educational experiences.

Teachers can join students online in an immersive virtual environment to deliver lessons remotely or save them for on-demand access. Participants can interact and collaborate within the session.

The platform comes with a wide variety of VR environments: relatable ones such as a lecture theatre or a lab, a Design for Manufacture and Assembly (DfMA) factory, or even the surface of the moon!

Whiteboards can be placed within these spaces and configured to show all different types of data – drawings, PDFs, BIM models, video, images. They can be interactive, such as pulling a 3D model out of a 2D drawing. edify can import BIM, CAD and GIS data, satellite imagery, point clouds, laser scans, photogrammetry and 2D/3D drawing formats, all for use within lessons.

The platform has been in beta test since the latter half of 2020, with many major universities on board – Oxford, Western Sydney, NU Australia, UCL, Edinburgh and Manchester, to name but a few.

There was a strong cohort of built environment-focused institutions and departments, which were especially keen to experiment with the software. Students can learn at their own pace and run lessons as many times as they want.

Bringing the site to the classroom

Allison Watson from ‘Class of your own’ (classofyourown.com) has been working with edify to ‘bring the site to the classroom’. She is using the software to integrate VR learning with project work for the Design, Engineer and Construct (DEC) learning programmes (designengineerconstruct.com) for 11- to 18-year-olds.

In one VR project, students experience the stages of redevelopment of London’s Olympic Park. The lesson embeds a range of objectives that help students understand the space they’re going to design, engineer and construct in.

It teaches basic map-reading skills, site orientation, measurement and the need for accuracy. It allows them to calculate slopes (dy/dx) and do the site set-out in VR.
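
As a rough illustration of the dy/dx arithmetic involved (the coordinates and code below are hypothetical, not taken from the DEC lesson or the edify platform), the gradient between two surveyed points is simply rise over run:

```python
# Hypothetical example: gradient (dy/dx) between two surveyed points on a site.
def slope(p1, p2):
    """Rise over run between two (chainage, elevation) points, in metres."""
    dx = p2[0] - p1[0]  # horizontal run
    dy = p2[1] - p1[1]  # vertical rise
    if dx == 0:
        raise ValueError("points share the same horizontal position")
    return dy / dx

# 25 m along a proposed path, the ground rises from 10.0 m to 11.5 m
gradient = slope((0.0, 10.0), (25.0, 11.5))
print(f"{gradient:.3f} ({gradient * 100:.1f}%), roughly 1 in {1 / gradient:.0f}")
# -> 0.060 (6.0%), roughly 1 in 17
```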

The model includes all the infrastructure that’s there already; features such as buildings, roads, utilities, waterways, green spaces and trees. Watson explained, “Students even know what a tree preservation order is, but it was important for them to locate and tag the trees affected on site here on the park, just as they would in their own backyard. These are just a few of the features which our students can access on the edify reconstructed app.”

Timber construction

Matt Stephenson is founder and managing director of Ecosystems Technologies, a company dedicated to the digital transformation of the construction industry through applied innovation in timber technology (ecosystemstech.com). The company is using edify as a virtual knowledge library, a repository for content that exists across academia, democratising access for students and professionals and supporting engagement with timber technology.

The lessons demonstrated included exploring the construction of timber building modules and soundproofing technology, using 2D drawings, BIM models and interactive whiteboards, both in a VR classroom environment and in a simulated factory.

In VR it’s possible to show the whole fabrication process, so workers can really understand how the timber-frame buildings fit together at a component level. Stephenson explained, “With the edify platform, rather than viewing a static position, the individual can actually be there, immersed, ask questions, interrogate and really, really connect.”

Heriot-Watt University

Linsey Thomson, assistant professor at the Heriot-Watt campus in Dubai, explained that the specialist university frequently takes students to building sites across Dubai, up 50-level towers, to see the sort of work being done by contractors, consultants, building services engineers, landscapers, lighting specialists and so on.

When Covid hit, and site visits were not allowed, they took a 360 camera on site tours, so the data could be shared with the students via immersive technology.

In the future, Thomson could see students studying architecture, architectural engineering, civil engineering, real estate, construction and project management working in multidisciplinary groups, using edify and VR to simulate a building under construction, with all the issues that need addressing.

“We want to make sure students are able to understand the ambiguity about decisions that are made within AEC, concerning buildability, materials, thermal insulation, sustainability issues and building systems,” she said.

“All of these things could be saved, processed, reiterated in different [building] sequences in VR.”

Conclusion

While there might not be many upsides to a pandemic, digital technology that connects AEC teams or students is being adopted much more quickly than is typical.

With the pandemic dragging on, this is good timing. However, irrespective of the current circumstances, on-demand learning was already recognised pre-Covid as having huge potential, both for educational institutions and within organisations.

The crunch point will be how easy or hard it is to pull together the lessons in the system. In today’s universities, it has been an uphill struggle for most to get basic lectures videoed and uploaded, without the complication of 3D models and CAD data.

The one benefit this industry has is that BIM models are becoming ubiquitous. If edify has indeed simplified the inclusion of teaching materials and lesson composition, our industry could, if it so wished, choose to be a leader in VR-based learning, which should surely make a job in AEC seem more appealing than business studies and the prospect of a life in Excel spreadsheets! (sorry quantity surveyors)

12 tools for collaborative VR in architecture

From immersive modelling for conceptual design to design review, Greg Corke puts the spotlight on a dozen tools that allow AEC teams to collaborate effectively in the virtual world

Arkio

Arkio is different to most collaborative VR tools insofar as it focuses on conceptual design and modelling. Collaborators can sketch (pull/push) and markup designs in VR, AR and on mobile devices. Designers can also meet clients in the virtual space to explore design ideas in real time. Program categories can be applied to shapes to help meet client requirements on different floor areas. Models can be imported from (or exported to) Revit, Rhino and other 3D tools.

■ arkio.is

Dimension10 (Varjo)

Dimension10 focuses on collaborative design review and offers workflows for IFC, Revit, Navisworks, Interaxo and BIMsync for issue handling. The Norwegian firm was recently acquired by VR/XR specialist Varjo and will play a key role in advancing the Varjo Reality Cloud, which has a new take on collaboration. It uses the LiDAR scanner and cameras built into Varjo’s XR-3 mixed reality headset to scan a construction site and instantly teleport others to that same physical reality (see page 12).

■ dimension10.com ■ varjo.com

Fuzor

Fuzor is a Virtual Design and Construction (VDC) tool with a very broad range of functionality to support design through to construction. It includes a bi-directional link to Revit, Archicad and others; annotation and issue tracking; clash detection; point cloud support; design viz with animations, content library and physically based materials; 4D scheduling and construction simulation; Microsoft HoloLens support. And, of course, multi-user VR collaboration, plus lots more.

■ kalloctech.com

Revizto

Revizto supports collaborative VR, but the primary focus of the collaboration platform is on model co-ordination, BIM data insight and optimised issue tracking — from design to construction. Revizto works with VR headsets, but is now leaning more towards immersive workspaces like the Fulcro FULmax or Igloo Vision, which Revizto feels are better suited to true AEC collaboration. Revizto is compatible with a huge range of CAD/BIM tools, from SketchUp and Revit to Tekla and OpenRoads.

The Wild

The Wild was designed from the ground up for distributed AEC teams. It’s cross-platform, so collaborators can join from VR, desktop (macOS and Windows) or AR (iOS). There’s native sketching for ideation; annotation and voice notes for design review; ‘Tours’ to help tell a structured story during presentations; and integrations for SketchUp, Revit and BIM 360. New features coming soon: performance improvements, new software integrations and enhanced support for desktop and mobile.

Theia BigRoom

Theia BigRoom is designed to transform the traditional multi-media planning room into a virtual space for distributed teams. Built on Unreal Engine, there’s a big emphasis on visual quality. Users can join by desktop or VR. BigRoom is currently in beta, so details are thin on the ground, but the software should launch later this year. Features include scale model tables, presentation boards, live video monitors, material configurators, sketching tools, task lists, Post-It notes and more.

In the AEC space, VR has quickly evolved from a solo experience to one that can bring together distributed teams in a deeply immersive collaborative environment. For many AEC firms, this has proved particularly beneficial during the Covid-19 pandemic.

VR enhances design understanding and communication by enabling technical and non-technical folks to experience building and infrastructure project designs together in a true-to-scale environment.

Most collaborative VR applications focus on design/review, with markup tools increasingly being linked to issue tracking through integrations with third party software. Meanwhile, we are also starting to see the rise of 3D modelling in VR for collaborative conceptual design.

VR headsets have also evolved at pace. Workstation-tethered devices like the Oculus Rift, HTC Vive and HP Reverb remain popular as they can handle the largest models at the highest quality. However, the practicality of the standalone Oculus Quest has helped push VR out to a much wider audience, and, with the Oculus Link cable, it can draw on the power of a workstation when required.

Of course, collaborative VR doesn’t have to be done with headsets. As Covid restrictions begin to lift, immersive workspaces like the Igloo Vision or Fulcro FULmax give the benefit of face-to-face interaction. They support a range of tools including 3D Repo, Navisworks, Solibri, Synchro and others.

Furthermore, you don’t have to use VR hardware at all. A growing number of collaborative VR tools are multi-platform, so all sorts of users can join with desktop PCs or mobile devices.

Mindesk VR

Mindesk allows teams to collaborate on the same Rhino CAD project through a multi-user VR session. This could be for design review, or for brainstorming new ideas. With a live link to Rhino there’s no need to convert model files. The software offers VR-based editing and supports NURBS and 3D Bézier curves for exploring organic forms. For users of Grasshopper, sliders can be used to directly edit the script in VR, to evaluate and compare parametric designs.

■ mindeskvr.com


Trezi by SmartVizX

Trezi is an ‘immersive design collaboration platform’ with a focus on connecting designers with building product manufacturers. Virtual meetings can be scheduled just like Zoom and participants can join in VR or desktop mode. BIM models can be furnished with building products, such as furniture and lighting, taken from virtual 3D catalogues, then different materials applied - all in VR. In the future Trezi will be accessible on mobile devices, and there is also strong potential for AR and MR.

Prospect by IrisVR

Prospect by IrisVR was the first VR tool we saw and we were instantly impressed by its optimised Revit to VR workflow. Since then it has evolved to add multi-user meetings, integrations for Navisworks, Rhino and SketchUp, and collaborative issue tracking. Earlier this year, Prospect was acquired by The Wild (see above), but both products are still being actively developed with a view to delivering a ‘fully integrated XR collaboration platform that serves the entire AEC project lifecycle.’

■ irisVR.com

Unity Reflect Review

Unity Reflect Review for design review allows users to collaborate on models from Revit, BIM 360, Navisworks, SketchUp, and Rhino through a very broad range of platforms, including VR, AR, Windows, Mac, iOS and Android. Unity recently added annotations so teams can track outcomes and improved the visual fidelity with an enhanced real-time engine. Models are now ‘automatically optimised’ with data prep technology from Pixyz to improve general performance.

Resolve

Resolve is a collaborative VR tool that is proving popular with owners of industrial facilities for design review. According to the developers, this is because it can handle very complex BIM models on standalone VR hardware without the need for manual model clean up or downsizing. For example, it can take a 500M polygon model from BIM 360 and open it on the Oculus Quest, by only streaming in the relevant parts of the model from the headset’s permanent storage.

■ resolvebim.com
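
The claim above, that only the relevant parts of a huge model are streamed in, can be illustrated with a minimal sketch of distance-based chunk streaming. This is a generic illustration of the principle only, not Resolve’s actual code, file format or algorithm:

```python
# Generic sketch: stream in only the chunks of a large model near the viewer.
# Chunk names, the 30 m radius and the loading logic are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    name: str
    centre: tuple         # (x, y, z) centre of the chunk's bounding volume, in metres
    loaded: bool = False

def update_streaming(chunks, viewer_pos, radius=30.0):
    """Load chunks within `radius` of the viewer; unload the rest to save memory."""
    for chunk in chunks:
        near = math.dist(chunk.centre, viewer_pos) <= radius
        if near and not chunk.loaded:
            chunk.loaded = True   # a real system would read mesh data from storage here
        elif not near and chunk.loaded:
            chunk.loaded = False  # a real system would free CPU/GPU memory here

chunks = [Chunk("plant_room", (5, 0, 2)), Chunk("far_facade", (120, 0, 40))]
update_streaming(chunks, viewer_pos=(0.0, 0.0, 0.0))
print([(c.name, c.loaded) for c in chunks])
# -> [('plant_room', True), ('far_facade', False)]
```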

VREX

VREX is focused on OpenBIM workflows for collaborative design/review, from concept to construction. It supports IFC, BCF and JT and has integrations with Bimtrack, BimCollab, Navisworks and others to help track issues and clashes. Everything is managed through a web app so there’s no file sharing, which helps reduce admin and protect project data.

Support for point clouds is coming soon and VREX is also exploring VR streaming for very large projects with Nvidia Cloud XR.

Concept collaboration with Arkio

The hype-cycle for Virtual Reality is over; it’s now time for practical applications to benefit the way we work. What would happen if an architect could design a VR system to solve some of today’s design and collaboration problems? You no longer need to ask that question, writes Martyn Day

Researching interesting technology to show at AEC Magazine’s NXT BLD event is always an enjoyable part of the year for me. A few years ago, I came across a video of a very embryonic design system from a young Dutch architect, Johan Hanegraaf, who had taught himself how to program in Unity and had developed an immersive VR modelling tool for architects.

What impressed me was not just the functionality, but also the intuitive user interface. We’ve had Hanegraaf back at the conference each year to demonstrate how the original idea has developed and progressed towards an actual commercial product. The result of this is called Arkio (arkio.is) and it launched last month.

Arkio is a modelling and collaboration application which runs across multiple platforms – PC, Mac, iOS, Android, Oculus Rift and Oculus Quest. It is also available on Steam. It can run standalone or as a real-time collaboration platform for interactive design analysis, customer meetings, or for the assessment of multiple design possibilities.

While it has a fun and easy-to-use modelling system, it also supports the import of content from Autodesk BIM 360, SketchUp, Rhino and Revit, as well as common formats such as CSV, 3D OpenStreetMap buildings, 2D street maps, OBJ and PNG/JPG. This means you can assemble VR environments from a number of sources and create datasets for any collaborative design requirement.

There are three configurations to choose from. The entry point is free: just download and start modelling using Arkio’s tool palette. You can import images and street maps (up to 25 MB). You can even host a 1:1 meeting for up to 20 minutes. The free version also acts as the viewing-only licence when invited to participate in a collaborative session.

The Pro licence is $55/month and, in recognition that no one person will be using it all the time, is ‘floating’ so can be used by any team member. Arkio Pro can host meetings of up to 24 people and import 3D geometry from commonly used design packages (excluding BIM 360). There is no file size limitation.

The Enterprise option costs $85/month and adds support for Autodesk BIM 360 (import and export). I suspect most users will go for the Pro version, as they will want to work with geometry imported from their design weapon of choice. Revit support, as you would expect, is especially strong and there is a Revit plug-in which eases the export and optimisation of geometry from Revit to Arkio. Once your session in Arkio is complete, you can export the geometry back to Revit as native geometry and families.

This is not to knock the free version; it is a great place to dive in and learn the ropes. While there are desktop versions, the real experience is to go immersive and here the Arkio team recommends the Oculus Quest 2.

The experience

The first thing that strikes you about the Arkio environment is the controllers, which are represented by virtual hands holding virtual Oculus controllers. By twisting your left hand, wrist up, the menu system appears. This can then be re-positioned anywhere in the 3D space and its sub-menus can be satisfyingly ripped off and placed in the workspace. Object snapping is automatically switched on (although it can be toggled off) and lines from the controllers vibrate as they cross over geometry snap points; this can be turned on or off for more detailed modelling work. I have to say I absolutely love the user interface of Arkio. It’s so graphical, easy to understand and surprisingly deep. For a first release, Arkio has a lot of depth and capability.

[Image captions: 1. Arkio’s intuitive user interface, with floating menu system, here adding some of the built-in 3D content. 2. The model is located on a map layer, with two collaborators in the active session. 3. Arkio is not just for VR; it also runs on PC, Mac, iOS and Android.]

Arkio has also implemented the experimental hand-tracking capabilities of the Oculus. This means that, in the future, users will be able to just use their hands without the need for controllers, which would give a much more intuitive interaction.

The virtual world consists of an infinite gridded floor with blue sky. Up front is a handy table workspace, on which your designs are to be modelled and edited. Using the controllers in combination with controller buttons, it’s possible to scale, zoom, pan and teleport anywhere within the model space and, if you’re not careful, off the model space!

The menu allows editing, deletion, painting, sketching, moving, measuring, 3D creation and splitting. You also have control over the camera and the position of the Sun (which casts wonderfully Minecraft-like shadows). You can create sections, maps and set saved views.

Arkio comes with a small selection of materials, architectural textures, landscape, interior textures, as well as content like some funky turquoise trees, people and interior furniture. It’s also possible to import content. Arkio’s built-in assets are mapped to work with Enscape, so the models should play nicely when rendered in Revit with Enscape.

The built-in geometry creation is mainly based on primitives, biased towards rectilinear forms with squares and triangles, which can be glass or solid and are fairly smart once placed near other objects.

Using the controllers, it’s possible to create negative space (cut outs) within solids, to build more complex structures. It’s also possible to move the controller to touch planes and geometry to select edges and faces.

At all times Arkio displays coordinate measurements of selected geometry, for handy feedback while modelling. Using the controllers to create geometry is definitely a skill to be acquired. However, it does come with time. I would suspect it would be easier for many to use the desktop version of Arkio with a mouse to create more intricate 3D shapes. Over time, I imagine more complex modelling capabilities will be added.

The upside of the current workflow is you can bring in more complex geometry from SketchUp, Rhino or Revit. The geometry you create using Arkio is converted to mass families or generic shapes when imported back into Revit. Design requirements can be loaded into the system from Excel. This can be used to create modular massing elements which align and snap together when assembled in the model space. Masses can be exported back to Revit or Excel.
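
Arkio’s actual spreadsheet format isn’t described in this review, so purely as a hedged sketch of the idea, a requirements sheet might list each programme element with a target area and count, which could then be turned into simple square massing footprints:

```python
# Hypothetical example: the column names and workflow below are assumptions,
# not Arkio's documented Excel/CSV schema.
import csv, io, math

requirements_csv = """space,target_area_m2,count
Lobby,200,1
Classroom,60,8
Office,12,20
"""

for row in csv.DictReader(io.StringIO(requirements_csv)):
    area = float(row["target_area_m2"])
    side = math.sqrt(area)  # square footprint whose area matches the requirement
    print(f"{row['space']}: {row['count']} block(s) of {side:.1f} m x {side:.1f} m")
```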

One of the important things about early-stage design is being able to look at a variety of design options. It’s easy to save models and start new sessions by duplicating previous designs, to come up with a different variation. There is also a history mode which enables you to go back in time through each design edit, so it’s impossible to totally screw up!

Collaboration

I guess it’s possible to define Arkio as a common data environment. It can import from most popular design tools and be used to assemble in situ models for project analysis. By supporting up to 24 people, meetings can handle quite a crowd! If you have a Pro or Enterprise licence you can be an editor of the model. Those with the Free version can view and navigate without editing rights. Each collaborator appears as a head-form in the virtual space and audio is also handled by the system.

Starting a collaborative meeting is straightforward. You create a room, which can be password protected, and invite your team. There are tools to mute collaborators, including yourself. If someone is pointing out something in the model as they talk, it’s possible to click on the menu and jump to their view, as well as being able to bring everybody to your location. Obviously, users on all types of supported devices can take part in these collaborative sessions.

Conclusion

Arkio is a unique take on VR-enabled design and collaboration. It’s really fun to use and easy to learn. For a first release it really does have a strong set of capabilities and is remarkably stable. The Free version is very generous and the Pro version, with its floating licence, is great value for money. Given the low cost of the Oculus Quest headset, it’s something that most architectural and construction firms could easily roll out across design teams.

With a very strong opening version, derived from extensive testing, I look forward to seeing the next iterations of the product. Being originally conceived by an architect and widely beta tested by architectural and construction firms, it’s safe to say that functionality will always be pertinent to users. It will be interesting to stress test the system with detailed models to see how performance changes with load.

I’d like to see all sorts of modelling enhancements, like workplanes and customisable grids, cylinders and curves, but I don’t think the intention is to really make this a modelling tool which would replace Rhino or anything similar. I also hope Arkio can become an ecosystem for all types of analysis tools. As the geometry is collated together in a fast environment like this, it would be great to see companies like TestFit able to drive generative designs and have more complex simulations running.

By concentrating on creating a strong link to Revit, Arkio is setting itself up as a fantastic adjacent seat, or adjacent headset. It would be great to automatically always have an Arkio version of a Revit model, so if you needed to look at it, at any time, you could simply view it in Arkio VR without having to wait around.

Company founders Johan Hanegraaf and Hilmar Gunnarsson will be at NXT BLD this Autumn at London’s QEII Centre (nxtbld.com). Who knows what additions will come to the software in the next few months?

■ arkio.is
Pre-Arkio demo at NXT BLD ■ tinyurl.com/nxtbld-VR
Last year’s demo of Arkio at NXT BLD ■ tinyurl.com/NXTBLD-VR2

Hands on with the Oculus Quest 2

The Oculus Quest 2 is the second version of the ‘untethered’ VR headset. It’s battery-powered and standalone, which allows you to roam around a user-defined physical space, with a digital wall which alerts you should your arms or head leave the reserved area.

The Oculus Quest 2 comes in two versions: one with 64 GB of storage, priced at £299, and one with 256 GB, priced at £399. We have yet to see a good reason to go for the larger option; it just depends on how many applications you want to store.

It’s incredibly easy to set up and just needs to connect to the local WiFi. The most difficult part of setup is ensuring the goggles are positioned to give the best image to your eyes. The sweet spot is very small, and getting the head straps right, and not so tight as to be uncomfortable, is a skill which takes time to acquire.

The headset features a microphone, has speakers in the straps and comes with a spacer to help those who wear glasses. There are built-in cameras, so you can see the real world in greyscale if need be.

The 1,832 x 1,920 pixel resolution per eye display is pretty sharp and smoothly tracks head movements and the two hand controllers. The screen refreshes at 90 Hz and there are rumours of a 120 Hz firmware update coming soon. Typically, the battery lasts 2-3 hours, but you could strap a portable battery to your head to extend this, as it can charge and run at the same time.

Overall, the Oculus Quest 2 is a very impressive piece of kit for the price. For pairing with Arkio, it was brilliant and we would highly recommend it for a collaborative design session.

The one issue is the whole Facebook thing. To use the device you have to register with your Facebook account, and it just makes you wonder what kind of usage data is being logged. Not something you’d want to leave switched on!

■ oculus.com/quest-2
