Future reality – from ContextCapture to AI

Greg Corke reports on the latest developments in reality modelling at Bentley Systems, including streamable, scalable reality meshes and automatic mesh classification using deep learning.

Reality modelling at Bentley Systems has come a long way in a very short time. It was only a few years ago that the company was talking up point clouds as being the new fundamental data type, just like 2D vectors, 3D solids and 2D raster. Now, everything is about the reality mesh and using it to capture site conditions, as-built projects or even entire cities.

The beauty of the reality mesh is that you don’t need sophisticated surveying equipment. For a small project, simply take one hundred or so simple photographs from the ground and the air using a drone and you can have an engineering-ready dataset in no time at all.

Reality meshes are everywhere and crop up in virtually every conversation you have with a Bentley exec, whether it’s about road and rail or buildings and design viz. There’s a genuine excitement about the potential of the technology, but equally about the delivery mechanism, which is enabled through the company’s Scalable Mesh technology, encapsulated in the 3SM format.

Scalable meshes allow you to stream multi-resolution meshes on demand, automatically, to desktop or mobile devices. With no theoretical limit on size, they can change the way designers, contractors and visualisers work with incredibly large reality modelling datasets.

3SM files can be exported from the new ContextCapture Connect Edition software. Data can be streamed on demand from ProjectWise ContextShare at a resolution appropriate to the scale at which the project is being viewed. Low-resolution data would be shown when viewing a road corridor in its entirety, but more detail would be automatically streamed in as you zoom into the model. In a plant, for example, you could even read the small print on a safety notice, providing the data has been captured with a high-res camera. This could be extremely useful for inspection or asset management with the creation of a digital twin.

Bentley describes 3SM as a more complete solution that solves many of the problems of other formats commonly used to provide context for engineering projects. For example, DTM and TIN are nowhere near as scalable, while STM and 3MX are not considered ‘engineering-ready’, so they can’t be used to calculate accurate quantities. 3SM also has the benefit of being able to handle different data types, and a scalable mesh file can contain terrain data from many sources, such as DTM, point clouds, raster elevation and more.

Reality capture

The data for reality meshes can be acquired from many different sources. On a road project, for example, photographs could be captured with an Unmanned Aircraft System (UAS). For smaller scale projects, an Unmanned Aerial Vehicle (UAV), commonly known as a drone, might be equipped with multiple data capture devices, including an SLR camera, a laser scanner or a thermographic imaging device. All of this data can be fed into ContextCapture to create the mesh.

For interiors, Bentley recently partnered with GeoSlam to streamline the workflow between ContextCapture and GeoSlam’s handheld devices that capture laser scans and photos in real time as you walk through a building. This is a great alternative to static laser scanners, which can be time-consuming to set up in each room, while taking photos of walls with no textures or reflective surfaces can cause challenges for photogrammetry. However, as Bentley’s Francois Valois admits, GeoSlam’s technology is not suited to all workflows. “It won’t necessarily give you engineering grade data, but it will definitely give you speed and it is perfect for asset management.”

Data quality

One of the key challenges with ContextCapture is ensuring that the quality and coverage of the photographs you feed into the software will create a good quality model. With drones, particularly when working in confined spaces, the challenges are even greater, so Bentley has partnered with Drone Harmony, a company that offers an Android app to help easily plan out flight paths. The software automatically gives the right overlap between photos and can fly around complex buildings, maintaining a constant distance, which is important, as it will define the resolution of the mesh.

Bentley has also developed a 3D resolution tool in ContextCapture that allows users to easily assess the quality of the mesh data. In simple terms, areas of the mesh that are coloured ‘green’ are good, meaning the data was derived from the most photos or was close to the laser scanner, whereas areas coloured ‘red’ are bad. If you are using the reality mesh to make engineering decisions, it can help you judge what the data can be used for.
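The idea of streaming at a resolution appropriate to the viewing scale can be pictured as a simple level-of-detail (LOD) calculation. Everything below (the function name, tile resolutions and thresholds) is invented for illustration and is not Bentley’s actual 3SM implementation.

```python
# Toy sketch of scale-dependent level-of-detail selection, in the spirit
# of streaming a scalable mesh: the wider the view, the coarser the data
# requested. All names and numbers are illustrative assumptions.

def pick_lod(view_width_m: float, screen_width_px: int = 1920) -> int:
    """Return an LOD level, where 0 is coarsest.

    Aim for roughly one mesh sample per screen pixel: the ground size
    covered by one pixel decides how fine a level we need.
    """
    metres_per_pixel = view_width_m / screen_width_px
    lod = 0
    sample_spacing_m = 64.0  # assumed spacing of the coarsest level
    while sample_spacing_m > metres_per_pixel and lod < 12:
        sample_spacing_m /= 2.0  # each level halves the sample spacing
        lod += 1
    return lod

# Viewing a whole road corridor: coarse data is enough.
print(pick_lod(20_000))   # wide view -> low LOD
# Zoomed right in, e.g. to read a safety notice: finest data is streamed.
print(pick_lod(2))        # close view -> high LOD
```

The same viewer can therefore move between a corridor-wide overview and close-up detail without ever loading the full-resolution dataset.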

In ContextCapture Update 8, out in 2018, it will be possible to use photos only for texturing the mesh and not for creating the 3D model, which is useful when using photos and laser scans combined. A smart algorithm will decide whether the photos or the laser scan is better, but the user will still be able to override.
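As a rough sketch of how such a choice might work per mesh region, the toy heuristic below weighs photo sharpness against scanner proximity and lets the user override. The scoring rule and every threshold are assumptions for illustration; the actual Update 8 algorithm is not public.

```python
from typing import Optional

# Toy heuristic: should photos or a laser scan drive the 3D geometry,
# with photos kept for texturing either way? Thresholds are invented.

def geometry_source(photo_gsd_m: float,
                    scan_distance_m: Optional[float],
                    user_override: Optional[str] = None) -> str:
    """Return 'photos' or 'scan' as the source for 3D reconstruction.

    photo_gsd_m: ground sample distance of the photos (smaller = sharper)
    scan_distance_m: distance to the laser scanner, or None if no scan
    """
    if user_override in ("photos", "scan"):
        return user_override          # the user can always override
    if scan_distance_m is None:
        return "photos"               # no scan available for this region
    # Prefer the scan when it is close (dense points) and the photos
    # are comparatively coarse.
    if scan_distance_m < 10.0 and photo_gsd_m > 0.005:
        return "scan"
    return "photos"

print(geometry_source(photo_gsd_m=0.02, scan_distance_m=3.0))    # scan wins
print(geometry_source(photo_gsd_m=0.001, scan_distance_m=3.0))   # sharp photos win
```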

Engineering and construction

Reality meshes can be used for design, engineering, construction, maintenance, inspection, visualisation and more. You can accurately measure coordinates, distances, areas and volumes with ease.

Bentley is also developing task-specific tools, such as one that can analyse volume differences between two reality meshes (or a reality mesh and a design model) and view the cut or fill quantities. Practical applications include tracking construction progress, checking quality or verifying the accuracy of contractor billing. Bentley also plans to extend this type of verification technology to building design and BIM models.
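The cut and fill calculation is easy to picture if both surveys are resampled onto a common elevation grid. The sketch below assumes that resampling has already been done; the grid and cell size are illustrative assumptions, not Bentley’s implementation.

```python
import numpy as np

# Minimal sketch of cut/fill quantities between two surveys, each
# sampled onto the same regular elevation grid (values in metres).

def cut_fill(before: np.ndarray, after: np.ndarray, cell_area_m2: float):
    """Return (cut_m3, fill_m3) between two elevation grids."""
    dz = after - before                      # positive where material was added
    fill = dz[dz > 0].sum() * cell_area_m2   # volume of added material
    cut = -dz[dz < 0].sum() * cell_area_m2   # volume of removed material
    return cut, fill

before = np.zeros((100, 100))                # flat site, 1 m x 1 m cells
after = before.copy()
after[:50, :] += 2.0                         # 2 m of fill over half the site
after[50:, :] -= 1.0                         # 1 m excavated over the rest
cut, fill = cut_fill(before, after, cell_area_m2=1.0)
print(cut, fill)                             # 5000.0 10000.0
```

Comparing the result against a contractor’s claimed quantities is exactly the kind of check the article describes.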

Mesh production

The creation of mesh data from photographs using ContextCapture takes a lot of compute power. Bentley is well aware of this and dedicates significant development resources to optimising the engine and reducing processing time. ContextCapture Update 7 is out later this year, and Bentley execs reckon it is more than 30% faster than Update 5, with some projects seeing a 50% speed increase.

The processing is done using Nvidia CUDA GPUs (not CPUs) and can be spread across multiple GPUs and multiple machines.

However, for firms that don’t want to invest in powerful GPU hardware, Bentley also offers the ContextCapture Cloud processing service, which allows organisations to process images and point clouds and create a number of deliverables including reality meshes, orthophotos, and digital surface models.

1 Reality mesh of roadway generated with ContextCapture cloud processing service. Image courtesy of AECOM
2 QR code framework for capturing ground control points automatically. Image courtesy of ABJ Drones
3 ContextCapture cloud processing service can be used to accelerate the generation of the reality mesh. Image courtesy of AECOM and Bentley Systems
4 The ContextCapture Mobile application can be used to capture smaller objects. Image courtesy of Bentley Systems

The service is scalable on demand, meaning more engines can be applied to urgent jobs in order to get results back quicker. Simply define quality and speed and the cloud processing service does the rest.

Job submission can be done through the ContextCapture Console client, a desktop client that lets you connect to the cloud and upload your photos.

Georeferencing can be done by importing ground control points as a text file, then manually picking the points in the photos. ContextCapture also now offers a QR code framework to capture ground control points automatically. Simply place printed QR codes at known locations on site, prior to capturing the images, and the software will automatically find them in the scene.
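Conceptually, once the codes have been found in the photos, automatic georeferencing reduces to joining detections to the survey table by point ID. The sketch below assumes a detector has already returned payloads and pixel positions; all names and coordinates are made up for illustration.

```python
# Matching automatically detected QR codes to surveyed ground control
# points (GCPs). Each printed code is assumed to encode a point ID.

surveyed_gcps = {                  # point ID -> (easting, northing, elevation)
    "GCP-01": (525010.2, 181204.7, 14.3),
    "GCP-02": (525096.8, 181177.1, 13.9),
}

detections = [                     # (photo, QR payload, pixel coordinates)
    ("IMG_0481.jpg", "GCP-01", (1024.5, 766.0)),
    ("IMG_0481.jpg", "GCP-02", (3301.2, 802.4)),
    ("IMG_0482.jpg", "GCP-01", (211.9, 1540.3)),
    ("IMG_0482.jpg", "UNKNOWN", (50.0, 50.0)),   # code not in the survey
]

def build_correspondences(detections, surveyed_gcps):
    """Pair each detection with its surveyed 3D position, skipping
    payloads that don't match a known control point."""
    pairs = []
    for photo, payload, pixel_xy in detections:
        world = surveyed_gcps.get(payload)
        if world is not None:
            pairs.append({"photo": photo, "id": payload,
                          "pixel": pixel_xy, "world": world})
    return pairs

pairs = build_correspondences(detections, surveyed_gcps)
print(len(pairs))   # usable image-to-ground correspondences
```

These image-to-ground pairs are what the manual text-file-plus-picking workflow produces by hand.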

Bentley also offers a ContextCapture Mobile app that lets you create 3D models from images taken with your phone or tablet. As mobile compute power is limited, all of the processing is done in the cloud. Simply upload your photos, then once the mesh has been processed it can be displayed on your device. This application is more appropriate for creating meshes of smaller objects and is well suited to inspection, or even to assess cracks.

Mesh distribution

The cloud isn’t only good for brute-force processing; it’s also great for managing and sharing reality modelling data.

The new ProjectWise ContextShare cloud service can stream reality mesh data to desktop tools like MicroStation or Descartes, mobile tools like Navigator or even to a simple web browser. It means everyone instantly has access to the latest data and there’s no need to distribute giant files. Moving forward, ProjectWise ContextShare will also be able to handle point clouds and images.

In the future, Bentley will more tightly link reality modelling data to the iModel Hub, a new technology that maintains a ‘timeline of changes’ or, if you prefer, an accountable record of who did what, and when.

Deep learning

Automation is the future and Bentley is currently exploring different ways of using Artificial Intelligence (AI) to get more out of reality mesh data.

At YII in Singapore last month, Bentley Fellow Zheng Yi Wu shared details of two R&D projects that use the AI technique deep learning. He explained that, in the last two years, computers have become better at recognising images than humans, based on an error rate of 5%, and this has led to big leaps in AI development.

3D resolution maps can help you judge what the mesh data can be used for. Image courtesy of Bentley Systems

The first project is designed to automatically detect and quantify cracks on infrastructure projects. Following image acquisition, the process involves post processing, feature extraction, edge detection and then quantification of the crack – not just, here’s a crack, but also how long and how wide that crack is. Zheng showed the crack detection tool working on several use cases including pavements, buildings and bridges. By automating the detection process and automating the drone survey, this technology should make it much quicker and easier to prioritise maintenance – and, of course, to identify issues before it is too late.

The second project is exploring the automatic classification of reality mesh data. Currently in MicroStation or ContextCapture Editor, you are able to select an area within the mesh and classify it as a specific type of object. Zheng Yi Wu’s team has been using deep learning to train a system to automatically recognise objects – starting with trees and roads – and then learn how to apply those classifications to the rest of the reality mesh.

Conclusion

It feels like there is an unstoppable momentum behind reality modelling at the moment. The technology was credited in almost twice as many nominated projects at the Bentley Be Inspired awards this year, compared to 2016, and 21 of the 55 finalists used reality modelling to base their engineering work on digital context, said Greg Bentley in his keynote.

The beauty of ContextCapture is that it isn’t fussy about where it gets its source data – reality meshes can be crafted from point clouds as well as photographs. However, it’s the ease with which photographs can be captured by drones – and then re-captured on a weekly, if not daily, basis – that makes the technology so compelling. And when it comes to distribution, with colossal scalable meshes able to be streamed on demand into its design, asset management and visualisation tools, Bentley is leading the charge.

■ bentley.com
