GIM - Issue 4-2024

Page 1


Digital twins: then, now, next

Pushing the boundaries of virtual replicas and technological innovation

Automatically generated 3D models of 10 million buildings

How geospatial data enables the rollout of fast internet

Monitoring the historic Dutch government centre restoration

Director Strategy & Business Development Durk Haarsma

Financial Director Meine van der Bijl

Technical Editor Huibert-Jan Lekkerkerk

Contributing Editors Dr Rohan Bennett, Frédérique Coumans, Lars Langhorst

Head of Content Wim van Wegen

Copy Editor Lynn Radford, Englishproof.nl

Marketing Advisors Myrthe van der Schuit, Peter Tapken

Circulation Manager Adrian Holland

Design Persmanager, The Hague

GIM International, one of the worldwide leading magazines in the geospatial industry, is published five times per year by Geomares. The magazine and related website and newsletter provide topical overviews and reports on the latest news, trends and developments in geomatics all around the world. GIM International is orientated towards a professional and managerial readership, those leading decision making, and has a worldwide circulation.

Subscriptions

GIM International is available five times per year on a subscription basis. Geospatial professionals can subscribe at any time via https://www.gim-international.com/subscribe/print. Subscriptions will be automatically renewed upon expiry, unless Geomares receives written notification of cancellation at least 60 days before the expiry date.

Advertisements

Information about advertising and deadlines is available in the media planner. For more information, please contact our marketing advisor: myrthe.van.der.schuit@geomares.nl.

Editorial Contributions

All material submitted to Geomares and relating to GIM International will be treated as unconditionally assigned for publication under copyright subject to the editor’s unrestricted right to edit and offer editorial comment. Geomares assumes no responsibility for unsolicited material or for the accuracy of information thus received. Geomares assumes, in addition, no obligation to return material if not explicitly requested. Contributions must be sent for the attention of the head of content: wim.van.wegen@geomares.nl.

Milan’s digital twin project

City digital twins are shaping the urban environment, creating a dynamic and holistic model. The Italian city of Milan, a global hub of commerce, fashion, finance, research and tourism, has embraced this vision, resulting in a comprehensive digital twin of the metropolitan area.

Launch of the GeoAI Research Center

Ghent University has launched a research centre aimed at addressing geospatial problems by enhancing the spatial reasoning and analysis capabilities of AI models. It is an interdisciplinary community focused on balancing benefits with ethical considerations and security measures.

Monitoring Dutch government centre renovation

In one of Europe’s most prestigious renovation projects, Fides Expertise monitors the impact of demolition, construction and renovation, measuring noise, vibrations, deformations and groundwater levels with traditional surveying technology, including total stations.

Flood risk assessment via innovative geospatial data usage

This article explores how Lidar and imagery data are being used in new ways to strengthen urban resilience and flood preparedness. It also looks at the role of crowdsourced geospatial data, a growing resource that complements traditional professional datasets.

Monitoring the Baltic shoreline using ALB

Measuring in the Baltic Sea’s coastal zone is challenging. Over the past decade, topographic laser scanners and GNSS RTK receivers have been used to monitor Poland’s southern coast. This article explores the feasibility and accuracy of using airborne Lidar bathymetry (ALB) for both seabed and land surveys.

How geospatial data enables the rollout of fast internet

The digital economy is growing rapidly, with data consumption set to surge. Fibre-optic networks are vital but challenging to roll out quickly. This article highlights how the geospatial industry, especially mobile mapping and Lidar, helps unlock high-speed fibre-optic internet.

Automatically generated 3D models of 10 million buildings

3DBAG has made large-scale automatic reconstruction of detailed 3D building models a reality. Developed by TU Delft’s 3D Geoinformation group, 3DBAG is an open, freely accessible data ecosystem with periodically updated models of all buildings in the Netherlands.

Integrating connectivity and UAVs

In this interview, Thomas Eder discusses Nokia’s strategic move into high-end UAV solutions for sectors like mining and oil & gas. Eder emphasizes Nokia’s collaborations with geospatial industry leaders to create integrated, turnkey solutions that push the boundaries of innovation.

Geomares

P.O. Box 112, 8530 AC Lemmer, The Netherlands

T: +31 (0) 514-56 18 54

F: +31 (0) 514-56 38 98
gim-international@geomares.nl
www.gim-international.com

No material may be reproduced in whole or in part without written permission of Geomares. Copyright © 2024, Geomares, The Netherlands. All rights reserved. ISSN 1566-9076

Cover story

This bumper-packed Intergeo edition contains a wide variety of articles from several different perspectives, but with one thing in common: they all illustrate the power of geospatial data as a pillar for so many end products – such as digital twins, 3D city models and fast internet – that each drive society forward in their own way. The geospatial industry covers such a broad spectrum of solutions and applications that, if it were a painting, it would be a very colourful one. This is depicted by the front cover of this issue. (Image courtesy: Adobe Stock)

Anniversary!

Dwelling on the past isn’t a wise thing to do in the geospatial business. The business is very much forward-looking and about new technology. Still, I want to reminisce a little. Looking back, I suddenly realized it was 20 years ago that I travelled to Stuttgart to visit my very first Intergeo, in 2004. At the time, I was in the last days of my old job as a radio reporter and was about to start a new job as assistant publisher of GIM International. At GIM International, they thought it was a good idea for me to visit the largest trade fair in the industry, to get familiar with the business and its people. So there I was, I didn’t know anything or anyone, and just started to walk the floor…

From the very first moment, I was impressed by the impact of geotechnology on modern society (without ever having been aware of it beforehand) and by the friendliness of the people working in the industry. Not much has changed since then. In 2004, the theme of Intergeo was ‘Für mobile Menschen’ (in English: ‘For mobile people’). Hagen Graeff was still president of the DVW, and a lot of the exhibitors at that edition are still household names today.

Fast-forward 20 years, and we are now talking about artificial intelligence, cloud solutions and enhanced graphics processors that are advancing the visualization and modelling of complex geospatial data – and about their impact on the sector. Obviously, the agenda of the trade fair and conference is being adapted to the evolving needs of the geospatial industry, says Christiane Salbach, managing director of DVW, who works closely together with the current president, Rudolf Staiger, and organizer Hinte. Just as 20 years ago, new technology has a big impact on the sector itself, but even more importantly it is now helping to tackle global challenges such as the climate crisis, sustainable planning and construction, and urbanization. The societal impact of ‘geo’ has grown over the years, which is a great feature of the friendly business we’re all working in.

To tie in with Intergeo, we have put together this very diverse and extra-thick issue of GIM International for our readers, including interviews, feature articles and news from the business and our partners. We also have an additional issue for you as a bonus: the Geo-matching 10-year anniversary edition! Ten years ago, we started the product sourcing and news platform that has grown into the present Geo-matching. Via this platform, hundreds and hundreds of companies are showcasing their products to many thousands of interested professionals and buyers all over the globe. Over the course of ten years, Geo-matching has evolved into the online marketplace for the worldwide geobusiness. Reason enough for us to celebrate!

As mentioned above, for me personally this edition of Intergeo is a bit of an anniversary edition. Just as during all the editions I’ve visited in the past 20 years, I am already looking forward to meeting old and new friends, discovering new technology in action, and discussing opportunities for customers to grow their business with GIM International and Geo-matching. Come and find my colleagues and me on stand number A1.053, or just say hello as you pass by. See you soon!

Durk Haarsma

EAASI and MAPPS join forces to elevate the geospatial industry

UK government awards Bluesky exclusive aerial photography contract

Bluesky has secured a multi-million-pound contract to provide aerial photography and geospatial data across Great Britain.

The USA’s Management Association for Private Photogrammetric Surveyors (MAPPS) has announced a partnership with the European Association of Aerial Surveying Industries (EAASI) to jointly promote and advance the geospatial industry within their respective communities. This collaboration will enhance the missions of both organizations through the exchange and transfer of relevant knowledge and expertise. Key initiatives include promoting workforce development within the geospatial industry, and serving as thought leaders to shape and influence the field’s strategies and business practices. MAPPS and EAASI will work together on event promotion, workforce development efforts, webinars and increasing the general awareness of each association’s activities. “This partnership marks a significant milestone for both our organizations and the geospatial industry at large. Since their formations, MAPPS and EAASI have been at the forefront of advancing the geospatial industry in the USA and Europe, respectively,” says MAPPS President Kelly Francis. “By joining forces, we aim to promote the activities of our organizations, enhance the awareness and utilization of private-sector companies, and uphold the highest standards of industry best practices.”

Bluesky International has secured a multi-million-pound contract from the British government’s Geospatial Commission to provide aerial photography services to public-sector organizations across Great Britain. This contract, known as Aerial Photography for Great Britain (APGB), also includes the supply of 3D height models and colour infrared data. Valued at over £15 million, the contract spans three years with an option to extend for an additional two years. Under this agreement, Bluesky, based in Leicestershire, will be the exclusive provider of aerial data on a biennial cycle. This ensures that over 4,500 local authorities, emergency services, environmental agencies and central government departments will have access to the latest aerial imagery and height data at no cost to the end user. This marks the first time the contract has been awarded to a single supplier, positioning the innovative Ashby-based aerial survey company at the forefront of aerial data provision in Great Britain. “The award of this contract is a great achievement and is testament to the high standards of work we produce and the professionalism of our team,” says Bluesky’s CEO Rachel Tidmarsh. “Bluesky celebrated its 20th year just a few months ago, and this contract highlights the importance of the investment and innovation we have carried out over this time.”

Seoul Metropolitan Government collaborates on hyper-realistic digital twin project

Techtree Innovation has announced the signing of a business agreement with the Seoul Metropolitan Government for the production of a hyper-realistic 3D map and the identification and joint use of national services. The agreement signing ceremony was attended by approximately ten people, including Seoul City’s Digital Policy Officer Park Jin-young and Techtree Innovation’s CEO Choi Seung-yup. Under the business agreement, Techtree Innovation and Seoul City will cooperate on the development of national services based on a hyper-realistic digital twin built on a game engine that can reflect various physical information and variables. In particular, during an initial demonstration period, Techtree Innovation and Seoul City will construct a hyper-realistic digital twin covering a total of 4km2 in the Yeouido area of the South Korean capital city. They will implement services for disaster safety, disaster prevention, roads and traffic simulation, and unveil them at the Seoul Smart Life Week held at COEX in Seoul in October 2024.

EAASI and MAPPS are teaming up to take the geospatial industry to new heights.
Seoul City’s Digital Policy Officer Park Jin-young (left) and Techtree Innovation’s CEO Choi Seung-yup (right) after signing the business agreement.

Mosaic and Esri partnership enhances integration of terrestrial imagery

Mosaic, a leading provider of high-end terrestrial imagery hardware, has formed a strategic partnership with Esri, the global leader in geographic information system (GIS) software, aimed at enhancing Esri’s ArcGIS platform with Mosaic’s advanced 360° imagery technology. Mosaic’s partnership with Esri brings industry-leading 360° imagery to the forefront of geospatial data visualization within the ArcGIS platform, specifically enhancing the Oriented Imagery visualization tool. This collaboration allows Mosaic’s camera imagery to support Esri’s powerful GIS software, benefiting customers of both companies. “By utilizing Oriented Imagery within ArcGIS, users can manage, visualize and explore diverse image collections in a map context, providing unparalleled detail and immersion. This significantly advances spatial analysis, situational awareness and decision-making support. Mosaic customers will be able to visualize their imagery using the best GIS software solution available, while Esri customers will benefit from the superior visualization capabilities offered by Mosaic’s high-resolution imagery,” states Dylan Faraone, regional director at Mosaic.

Mobile mapping vehicle equipped with the Mosaic Meridian 360° system. (Image courtesy: Mosaic)

In memory of remote sensing visionary Professor Gottfried Konecny

Professor Gottfried Konecny, ISPRS Honorary Member and former head of the Institute of Photogrammetry and GeoInformation in Hannover, Germany, passed away peacefully on Thursday 25 July 2024 at the age of 94. Regarded as the European father of remote sensing, Prof Konecny played a key role in shaping his field worldwide in the second half of the 20th century. From the beginning of his work in Hanover in 1971 until his retirement in 1998, he inspired generations of students with his passion for photogrammetry, remote sensing and GIS, and continued to make his mark on the industry until shortly before his death. Prof Konecny was the principal investigator in the Metric Camera Experiment, involving the world’s first photogrammetric space camera, which was flown on the NASA Space Shuttle in 1983. Further highlights of his career included the 1980 ISPRS Congress, organized under his leadership in Hamburg, and his ISPRS presidency from 1984 to 1988. In recognition of his many achievements, Prof Konecny was awarded the Federal Cross of Merit, First Class in 1990. He also received honorary doctorates from the National University of Tucaman (Argentina), the University of New Brunswick (Canada), Anna University (India) and the Moscow State University of Geodesy and Cartography (Russia). With his passing, the geospatial community has lost a dedicated scientist and a brilliant mind.

Scalable Accuracy and Mapping Solutions from Juniper Systems

The Geode GNS3 GNSS Receiver from Juniper Systems offers simple, one-touch operation to get connected and start collecting location data. With a one-time RTK activation, users can achieve accuracy down to 2cm. Because the Geode is compatible with Android, Windows and iOS operating systems, users can map with it using their own device.

As part of a total solution, the Geode can be paired with Juniper Systems’ rugged hardware, from our Mesa Rugged Tablet Family (with both Android and Windows options) to our most recent Archer 4 Rugged Handheld running Android 14. Our Uinta Mapping and Data Collection software is the final piece for optimal field data collection. Uinta is user-friendly and fully customizable, and allows users to map data, attach metadata including photos to a location, and save data locally as well as back it up to the cloud.

Professor Gottfried Konecny passed away at the age of 94.

RIEGL begins production of new UAV Lidar subsystem

RIEGL has officially moved into series production of the RiLOC-E, a fully integrated subsystem for localization and orientation, designed for its miniVUX-series UAV Lidar sensors. The first units, equipped with this innovative subsystem, were dispatched from the RIEGL facility in Horn, Austria, at the end of June. This milestone follows an intensive development and testing phase. The RiLOC-E was first introduced at Intergeo 2023, where it immediately attracted significant market interest. As a result, RIEGL intensified its efforts to bring the product to market, leading to this successful production launch. “The business is off to a good start,” confirms Michael Mayer, managing director of RiCOPTER UAV GmbH, who oversees sales and support in this segment. “Many inquiries in the UAV-based laser scanning sector come from beginners who demand high data quality but lack extensive experience in drone-based data acquisition. With the RIEGL miniVUX-3UAV equipped with RiLOC-E, we now offer a survey-grade, yet easy-to-use solution that allows users to gain their first experience in UAV-based laser scanning – all at an excellent cost-benefit ratio.” RiLOC-E is a sophisticated localization and orientation subsystem integrated into the RIEGL miniVUX series of UAV Lidar sensors. This special version of the miniVUX-SYS features a micro-electromechanical systems (MEMS) inertial measurement unit (IMU), a GNSS unit and tailored software, all housed in a compact, lightweight casing directly attached to the miniVUX-1UAV or miniVUX-3UAV laser scanner.

Peter Rabley takes the helm as OGC CEO

The Open Geospatial Consortium (OGC) has announced the appointment of Peter Rabley as CEO. He will build on OGC’s 30-year history, while responding to the pressing need for the consortium to be agile at a time of rapid technological and societal change. Rabley brings to OGC a wealth of experience from the private, governmental and not-for-profit sectors, including in venture financing, and in developing and implementing scaleup strategies for international not-for-profits. “I am excited and honoured to be appointed as OGC’s CEO,” he says. “OGC is well positioned to build on its incredible 30-year legacy by responding to the ever-increasing rate of change seen in technology and society alike. Opportunities and challenges have never been more apparent, and I see tremendous potential for growth in new markets around the globe. This is the time of geospatial.” Prashant Shukle, who serves as chair of the OGC Board of Directors, comments: “I am particularly excited to have Peter Rabley leading OGC. Peter has a proven track record in the public, private and not-for-profit areas of the geospatial industry, and closely aligns with what our key stakeholders and partners were telling us they wanted in a CEO. The OGC Board took the time to get this selection right, and we are very excited about how he fits with our plans for a reinvigorated and repositioned OGC.”


Planet Labs renews collaboration with Taylor Geospatial Institute

A graphic rendering of a Pelican satellite from Planet’s new imagery satellite line. (Image courtesy: Planet Labs)

Planet Labs, a prominent provider of daily Earth data and insights, has extended its partnership with the Taylor Geospatial Institute (TGI) until the end of 2026. This contract marks Planet’s largest direct collaboration with a university consortium, highlighting TGI’s growing influence in geospatial research. The multi-year agreement enables TGI to harness a range of Planet’s industry-leading products, including the flagship PlanetScope data, a comprehensive data archive, high-resolution SkySat imagery, and advanced Planetary Variable analytics. Established in 2022, the Taylor Geospatial Institute is a pioneering force in the geospatial field, dedicated to driving research, fostering collaboration and creating impactful solutions. TGI unites eight leading universities and research institutions in a consortium that seeks to amplify cooperation, connect with partners across the geospatial ecosystem, and strategically align expertise, resources and unique strengths to generate cutting-edge research and innovation. The access to Planet’s data equips TGI researchers to tackle critical challenges such as national security and global food security, while also advancing key initiatives in geospatial science, computational methods and artificial intelligence. Moreover, this partnership supports TGI’s commitment to educating and preparing the next generation of professionals in the geospatial field.

RIEGL miniVUX-3UAV with RiLOC-E fully integrated with the DJI M350 RTK UAV platform. (Image courtesy: RIEGL)
Peter Rabley, the new CEO of OGC.

First images from Maxar’s WorldView Legion satellites revealed

Maxar Intelligence has unveiled the first images from its next-generation WorldView Legion satellites, with the initial pair launched from Vandenberg Space Force Base, California, USA, on 2 May 2024. On 16 July 2024, one of these WorldView Legion satellites captured high-resolution, 30cm-class images, showcasing the exceptional capabilities of this advanced technology. The images feature detailed views of key urban areas, transportation routes and logistics hubs in San Francisco and Sacramento, California. These high-resolution satellite images are invaluable for precision mapping, site monitoring, geospatial analytics and a variety of other critical applications. “In today’s increasingly complex world, our customers need access to faster, more timely geospatial insights to support their most critical missions – from precision mapping to site monitoring to space domain awareness,” says Dan Smoot, CEO of Maxar Intelligence. “Soon, our WorldView Legion satellites will be collecting vast amounts of imagery, extending our collection capacity advantage for high-resolution imagery and enhancing the revisit rate of our industry-leading constellation. This added capacity will also strengthen our geospatial foundation, helping us build more sophisticated products that unlock the full potential of geospatial data and generate more actionable insights in 2D and 3D.” The two satellites are part of a set of six WorldView Legion satellites. Once in orbit, these six satellites will triple Maxar’s capacity to collect 30cm-class imagery, enable dawn-to-dusk collection, and allow Maxar to image the most rapidly changing areas on Earth as frequently as every 20 to 30 minutes.

Trimble unveils top-tier GNSS system

The new Trimble R980 GNSS system introduces several new elements, such as enhanced communication capabilities for uninterrupted field operations, building on the advanced features of Trimble’s latest receiver models, including the Trimble ProPoint positioning engine. The Trimble R980 integrates top-tier Trimble global navigation satellite system (GNSS) technologies, making it a highly effective tool for land surveying, transportation infrastructure, construction, energy, oil and gas, utilities and mining projects. Notable features include Trimble’s exceptional ProPoint GNSS positioning engine and inertial measurement unit-based tilt compensation using Trimble TIP technology. These capabilities allow for efficient work in dense urban settings and under tree canopies, eliminating the need to level the pole when capturing data points. Highlighting the importance of hardware connectivity, the R980’s communication capabilities include a dual-band UHF radio and an integrated global LTE modem for receiving corrections from a local base station or VRS network. Thanks to operation on 450MHz, 900MHz or LTE bands, users have flexibility in how they receive and transmit RTK corrections, adapting to site conditions. This ensures work can be completed in diverse environments, regardless of GNSS infrastructure, while simplifying radio licensing requirements. Additionally, the LTE modem, replacing the 3G version in previous models, prevents work disruptions as 2G and 3G networks are phased out.

San Francisco City Hall, the seat of government for the City and County of San Francisco, captured from above by one of the next-generation WorldView Legion satellites. (Image courtesy: Maxar)
Trimble’s R980 GNSS system. (Image courtesy: Trimble)

Pythagoras launches Pointorama for streamlined point cloud management

Pythagoras, a provider of innovative software solutions for land surveyors and contractors, has released Pointorama, a groundbreaking cloud-based platform designed to simplify point cloud management. The official launch took place at GeoBusiness 2024, which was held on 5-6 June in London, UK. In today’s rapidly evolving field of land surveying, the quest for efficiency is paramount. However, progress has long been hindered by a persistent obstacle: the cumbersome task of point cloud cleanup. Pointorama emerges as a potential game changer, aimed at addressing this challenge head-on and reshaping the approach to handling point cloud data. Surveyors invest considerable time in data collection using precision scanners, only to confront a significant bottleneck during post-processing. These scanners often capture unwanted elements such as reflections, noise or irrelevant objects, leading to extensive manual cleanup efforts. This not only extends project timelines but also escalates costs, highlighting the pressing need for innovative solutions like Pointorama to streamline workflows in the industry.

3Dsurvey 3.0 boosts field efficiency and accuracy for surveyors

Helping surveyors to get the most out of geospatial data, and allowing them to spend much less time in the field. Those are the two main aims of the latest release of 3Dsurvey, known as the all-in-one photogrammetric software solution. In addition, Version 3.0 offers a wide range of new possibilities and improvements – including through a number of paid, add-on modules – in line with the company’s mission to offer a powerful spatial data processing tool in a simple-to-use and all-in-one platform. The software developers working on Version 3.0 began with several key questions: How much could a simplified alignment process improve mapping and surveying professionals’ productivity and reduce labour costs? How would better accuracy in 3D model alignments eliminate errors and reduce rework costs? And what cost savings and project efficiency gains could be achieved by integrating various data sources into a single project? “With this software, we enter a new chapter in optimizing workflows for geospatial professionals. Our solution is made by surveyors, for surveyors; it is designed by professionals who have encountered real-life field conditions and comprehend the exact requirements of modern digital surveyors,” says Luka Pušnik, sales director of 3Dsurvey.

A screenshot showcasing Pointorama, the cloud-based platform designed to streamline point cloud management. (Image source: YouTube/Pythagoras)
This image shows return data visualization in the 3Dsurvey 3.0 software suite. Multiple returns from a single laser pulse indicate reflections from different surfaces, with the first return usually representing the highest object (e.g. treetop, building roof).
(Image courtesy: 3Dsurvey)

Pioneering a new era of urban management

Milan’s digital twin project

Imagine a city that can be seen, measured and managed from the office rather than through burdensome and time-consuming on-site visits. A city that can respond to its inhabitants’ needs while optimizing its resources. A city that can anticipate and solve problems while creating new opportunities for innovation and collaboration. This is not a fantasy, but a reality. City digital twins are shaping the urban environment, creating a dynamic and holistic model. The Italian city of Milan, a global hub of commerce, fashion, finance, research and tourism, has embraced this vision, resulting in a comprehensive digital twin of the metropolitan area.

The project to create a digital twin of Milan involved the Municipality of Milan and four leading geoinformation companies: Compagnia Generale Ripreseaeree (CGR SpA) for aerial survey, CycloMedia for mobile mapping, Esri Italia SpA for data management, and Servizi Informazione Territoriale (SIT Srl) for topographic survey. The Laboratory of Geomatics at the University of Pavia also participated in the data quality analysis and assessment. The resulting digital twin offers a rich and diverse dataset, capturing various aspects of the urban environment from different perspectives. It combines nadir and oblique aerial images, Lidar points and terrestrial mobile mapping data to create a dynamic and comprehensive city model. The project also delivers innovative products, such as true orthophotos (RGB and colour infrared [CIR] imagery), classified Lidar point clouds, DTM and DSM models, and a database of 22 urban objects such as traffic lights, outdoor seating, gates and road signs.

The project covered an area of approximately 1,575km2, including 133 municipalities and the city of Milan. The aerial survey used the Leica CityMapper-2 hybrid sensor, which captured both nadir and oblique images and collected Lidar point clouds. The acquisition requirements specified that the aerial images must have a 5cm resolution and the point cloud density must be at least 20 points/m2. The survey produced more than 434,000 images and over 220 billion points, requiring more than 50TB of storage space. The mobile mapping system (MMS) focused on Milan, where CycloMedia’s sensor covered almost 2,600km of roads. The MMS data had to have an image resolution of 8mm per pixel or less, and a point cloud density of at least 1,500 points/m2 (both thresholds refer to data acquired at 10m from the vehicle path). The MMS data occupied more than 10TB of memory, including the imagery, point cloud and a database of about 1.2 million elements.

Figure 1: The aerial survey covered the entire metropolitan area, while mobile mapping was limited to the Municipality of Milan.

High-quality ground control network

One key component of the Milan digital twin project is the high-quality ground control network that ensures the accuracy and reliability of the data. The project team set up 200 control areas across the metropolitan area, each consisting of two markers about 100m apart. One marker is a stable topographic nail that serves as a geodetic reference, while the other is a photogrammetric marker painted on the ground. The flat area around the second marker allows for evaluating the Lidar vertical quality and density.

The Milan project partners opted for conventional static measurements to establish a trustworthy ground network for data quality assessment rather than using NRTK GNSS survey technology. This choice resulted in an average root mean square error (RMSE) of 8mm for the East, 9mm for the North, and 13mm for the Up direction. These results are well-suited for assessing the quality of the photogrammetric processing, given that the images have a ground sample distance of 5cm. Moreover, a terrestrial laser scanner was used to survey 50 out of 200 control areas to analyse the horizontal and vertical components of the Lidar data. Lastly, the project team used photogrammetry to collect 2,000 data points along the city’s main roads. These coordinates were compared with those obtained from the MMS survey to assess the accuracy and consistency of the MMS with the aerial dataset.
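For readers who want to reproduce this kind of check, the sketch below shows how per-component RMSE values like those quoted above can be computed from pairs of reference and measured coordinates. It is a minimal illustration rather than the project's actual QC pipeline, and the coordinate values are hypothetical.

```python
# Minimal sketch of a per-component RMSE check: compare measured coordinates
# against reference coordinates for a set of control points. Illustrative only;
# the values below are hypothetical (metres).
import numpy as np

# columns: East, North, Up
reference = np.array([[517432.012, 5035218.441, 121.503],
                      [517890.347, 5035605.128, 119.872],
                      [518251.902, 5034987.660, 122.014]])
measured = np.array([[517432.006, 5035218.452, 121.489],
                     [517890.355, 5035605.120, 119.886],
                     [518251.895, 5034987.668, 122.001]])

residuals = measured - reference
rmse = np.sqrt((residuals ** 2).mean(axis=0))  # per-component RMSE
print(f"RMSE East: {rmse[0]*1000:.1f}mm, North: {rmse[1]*1000:.1f}mm, Up: {rmse[2]*1000:.1f}mm")
```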

Comprehensive aerial and terrestrial surveys

Aerial data is another vital project component, with photogrammetric and Lidar data being captured to cover the whole metropolitan area. The quality of these datasets and of the main aerial products was carefully assessed using various parameters and criteria. One of the innovative aspects of the project was the optimal management of overlaps. Modern cameras are distinguished by a wide variety of focal lengths and sensor sizes, which influence the field of view (FOV) and thus challenge the assumption that fixed overlap percentages are universally applicable. Photogrammetric blocks designed with fixed overlaps may therefore be more or less vulnerable to perspective obstructions related to the FOV, and the phenomenon of ‘building leaning’ (or ‘apparent inclination of buildings’) must be considered.

Figure 3: The aerial products are the classified Lidar point cloud, the DSM and DTM, and the RGB and CIR orthophotos.
Figure 2: An example of a control area in which a topographic benchmark and a photogrammetric marker are present.

Another interesting project result was the higher-than-expected point density of the Lidar data: around 55 points/m2 instead of the required 20. This is a consequence of the hybrid system acquiring imagery and points simultaneously. The higher value derives from the sensor’s characteristics (a spiral-shaped scan pattern with a tilted axis) and from the block design: because photogrammetric planning takes precedence over Lidar planning, the larger image overlap results in a correspondingly denser point acquisition.

The quality of the imagery, Lidar data and main aerial products was then assessed. The results show a high level of performance: the bundle block adjustment achieves an accuracy of 1.8cm and 3.7cm for the horizontal and vertical components, respectively. This corresponds to almost 0.5 and 0.75 of the ground sample distance (GSD). The vertical precision and accuracy of the Lidar data are 2.0cm and 4.8cm, respectively, while the horizontal precision and accuracy are 8.7cm and 12.2cm, respectively. Based on the quality of the acquired data, the products demonstrate an equivalent level of excellence. The orthophotos, both RGB and CIR, present a 2D error of 2.5cm, while the DSM and DTM have vertical accuracy of between 4 and 5cm. Lastly, the Lidar point cloud classification was thoroughly examined to ensure correct attribution to the 11 categories specified in the tender. These categories include underground, ground, electric lines, bridges, water, buildings, high vegetation, medium vegetation, low vegetation, outlier, and overground (meaning artificial objects not belonging to other categories).
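As an illustration of how such a classification check might be scripted, the short sketch below counts points per class in a classified tile and flags codes outside the expected legend. It is only a sketch: the file name and the class-code mapping are hypothetical, and it assumes the laspy library (with a LAZ backend such as lazrs) is available.

```python
# Minimal QC sketch: count points per class in a classified Lidar tile and flag
# class codes outside the expected project legend. File name and code-to-label
# mapping are hypothetical examples, not the project's real legend.
import numpy as np
import laspy

EXPECTED_CLASSES = {  # hypothetical mapping of codes to the 11 categories
    1: "underground", 2: "ground", 3: "electric lines", 4: "bridges",
    5: "water", 6: "buildings", 7: "high vegetation", 8: "medium vegetation",
    9: "low vegetation", 10: "outlier", 11: "overground",
}

las = laspy.read("milan_tile_example.laz")
codes, counts = np.unique(np.asarray(las.classification), return_counts=True)

total = counts.sum()
for code, count in zip(codes, counts):
    label = EXPECTED_CLASSES.get(int(code), "UNEXPECTED")
    print(f"class {int(code):3d} ({label}): {count} pts ({100 * count / total:.1f}%)")

unexpected = set(map(int, codes)) - set(EXPECTED_CLASSES)
if unexpected:
    print("Warning: class codes outside the project legend:", sorted(unexpected))
```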

Mobile mapping data similarly constitutes an essential element of Milan’s digital twin project. It facilitates the examination of the city in areas that pose challenges for traditional aerial surveys, particularly those characterized by narrow streets or dense vegetation, as commonly found in Italian urban areas. Moreover, mobile mapping is the primary data source for creating the urban object database. Therefore, a thorough evaluation is required concerning the resolution of the panoramic images and the density of the generated point cloud. Both requirements met the specified thresholds of 8mm per pixel for image resolution and 1,500 points/m2 for point density (at 10m from the vehicle path). One of the project’s core principles is to use both aerial and terrestrial data to gain complementary perspectives and improve the accuracy of the analyses. It is essential to ensure alignment between these datasets, and this alignment must be thoroughly assessed.

About the authors

Marica Franzini is a research technician and adjunct professor at the University of Pavia, Italy. She specializes in geomatics, focusing on topography, photogrammetry and remote sensing. She is also involved in various interdisciplinary research projects and educational activities.

Vittorio Casella is an associate professor of geomatics at the University of Pavia, Italy. Throughout his career, he has primarily focused on innovative technologies and quality data assessment in aerial surveying, particularly Lidar, GNSS/IMU sensors and digital aerial cameras.

Bruno Monti is a professional at the Municipality of Milan, working in the Technological and Digital Innovation Directorate. He specializes in geographic information systems, contributing to the development of the Territorial Information System, and has provided consultancy for the Polytechnic of Milan.

Figure 4: An example of data integration within the CycloMedia web app.

A total of 2,000 points were measured using photogrammetry and compared with the MMS data. The results show a satisfactory RMSE of approximately 7cm horizontally and 14cm vertically, indicating strong compatibility between the two data sources.
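For illustration, the sketch below shows one way such a comparison could be scripted: each photogrammetric check point is matched to its nearest MMS point and horizontal and vertical RMSE values are computed. This is not the project's actual procedure; it assumes both datasets are already in the same coordinate system, and the randomly generated coordinates merely stand in for real data.

```python
# Illustrative aerial-vs-MMS alignment check: nearest-neighbour matching with a
# KD-tree, then horizontal and vertical RMSE. All data below is simulated.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
mms_points = rng.uniform(0, 100, size=(50_000, 3))            # stand-in MMS point cloud
check_points = mms_points[rng.choice(50_000, 2_000)] + \
    rng.normal(0, [0.05, 0.05, 0.14], size=(2_000, 3))        # simulated photogrammetric points

tree = cKDTree(mms_points)
_, idx = tree.query(check_points, k=1)                        # nearest MMS point per check point
diff = check_points - mms_points[idx]

rmse_h = np.sqrt((diff[:, 0] ** 2 + diff[:, 1] ** 2).mean())  # horizontal (2D) RMSE
rmse_v = np.sqrt((diff[:, 2] ** 2).mean())                    # vertical RMSE
print(f"Horizontal RMSE: {rmse_h * 100:.1f}cm, vertical RMSE: {rmse_v * 100:.1f}cm")
```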

Software tools for accessing complex datasets

One of the key aspects of the tender’s technical specifications was the provision of advanced software tools for easy access to complex datasets. This is a significant improvement in optimizing urban data management and facilitating collaboration among local entities. Two main tools were proposed to achieve this goal. The first tool is a web app developed by CycloMedia that allows users to view, analyse and share geospatial information in a user-friendly way, which is helpful for urban planning and resource management. The second tool is a plugin for the Esri ArcGIS environment that enables the integration of complex datasets into existing workflows, thus improving the efficiency and accuracy of geospatial analysis. These innovative solutions will allow the Municipality of Milan to make well-informed and timely decisions based on accurate and up-to-date data. In short, implementing these software tools marks a crucial step in urban data management, fostering more collaboration and information sharing among various local departments. By adopting these technologies, Milan will be better prepared to face future challenges and improve the quality of life for its citizens.

A catalyst for social change

Milan’s digital twin project represents a state-of-the-art example of how geospatial data can create a realistic and dynamic representation of a city, enabling better governance, planning and innovation. By integrating different types of data from aerial and terrestrial surveys, the project provides a comprehensive and accurate view of the urban environment, its features and its changes over time. The project also demonstrates how advanced software tools can facilitate access to and analysis and sharing of complex datasets, enhancing collaboration and decision-making among various stakeholders. The project’s outcomes will benefit the Municipality of Milan and the citizens, businesses, researchers and visitors who live, work or interact with the city. The project is a technical achievement and a model for the future. It shows how a digital city can offer new possibilities for improving the quality of life, the sustainability and the resilience of urban areas. It also shows how a digital city can foster creativity, experimentation and participation, stimulating new ideas and solutions for the common good. The project invites us to imagine a world where every city has a digital twin, where data becomes a valuable asset for enhancing urban development, and where technology is a catalyst for social change. The project challenges us to embrace this vision and make it a reality.

Move point clouds to the next level

The engineering expertise behind safeguarding a national treasure

Monitoring Dutch government centre renovation with tenth-of-a-millimetre accuracy

It is one of the most appealing and prestigious renovation projects in Europe: that of the historic government centre of the Netherlands, the ‘Binnenhof’, in The Hague. Fides Expertise is responsible for monitoring the impact of the demolition, construction and renovation work, and is measuring noise, vibrations, deformations and groundwater levels using traditional surveying technology, including total stations. Fred Pannekoek, director of Fides Expertise, took GIM International on a crawl-through, sneak-through tour of this immense project.

The Binnenhof in The Hague is a monumental building complex that has its origins in the Middle Ages and has been expanded over the centuries. Today, the Senate and House of Representatives, the Council of State and the Ministry of General Affairs reside in the complex, which totals some 4,000 rooms, spaces, chambers, halls, corridors, attics and cellars. The total floor area is almost 90,000m2. Renovation of the complex was necessary to ensure healthy and safe workplaces for everyone who works there, but also to make the building – as the heart of Dutch democracy – more accessible to citizens. The renovation is mainly about necessary repairs of defects in fire safety, wood rot, leaking roofs, technical installations and security. There are also a number of functional improvements. With a new public entrance, the Lower House will soon be ready for a flow of 500,000 visitors a year.

The renovation is a major operation carried out by consortia of well-known contracting companies in the Netherlands, supplemented by smaller specialist construction and demolition companies. The Rijksvastgoedbedrijf (State Property Agency or ‘RVB’) is responsible for the renovation and hired Fides Expertise to take care of continuously monitoring the impact of all the demolition, construction and renovation work. Fides Expertise has been active in all three sub-complexes at the Binnenhof (the Senate, the House of Representatives and General Affairs) since 2021 – at first only with foundation inspections, monitoring wells, and measuring bolts and prisms to register displacements, but gradually this grew into an advanced and complex monitoring system.

Founded in 2009 by director Fred Pannekoek together with Iris Remmé and Daniel Sijtsma, Fides Expertise is a structural damage expertise and consultancy firm. Currently with 16 employees and owning a total of 17 total stations, the firm has years of experience in risk inspection, monitoring and reporting. For instance, the roof construction of the plenary hall of the Dutch Senate had been monitored periodically by Fides Expertise for years, when the State Property Agency knocked on the door in 2018 asking if the firm would also do foundation inspections for the benefit of the renovation of the Binnenhof.

Everything started with a preliminary report in 2019. This was based on a comprehensive survey of the foundations of the old complex in the city centre of The Hague. Additionally, interviews were conducted with stakeholders to determine what measurements needed to be made: what did people want to monitor, and which factors were important? The outcome of the interviews, merged with Fides’ expertise, resulted in a complex-transcending basic monitoring plan.

Monitoring plan

The basic monitoring plan has been developed across complexes and is comprehensible to all parties involved in the renovation. With the chosen set-up, the risk of damage to the monumental buildings – due to miscommunication, for example – can be reduced to zero. With a joint plan supported by the four design teams (structural engineers ABT, Arcadis and Arup, in combination with the monitoring party Fides Expertise), the objective of preventing damage has a greater chance of success. For each section of the complex, the monitoring requirements for all subprojects are worked out in detail. Limit values are set in consultation with the structural engineers, and the plan specifies the high-quality measuring equipment to be deployed (feeding into the Trimble T4D monitoring software, including advanced analysis and alarm capabilities) and the frequency with which the monitoring measurements will be carried out.

Surveying the waterside façades across the ‘Hofvijver’ lake as part of the renovation of the Binnenhof complex.

This is supplemented by a clear decision protocol, ensuring that immediate action is taken in the event of signal and/or intervention values being reached. The measured values are compared with the alarm and limit values set out in the monitoring plan, both before and during implementation. If signal and/or intervention values are approached during implementation, technical consultation follows. After analysing the measurements, it is decided whether additional measures are needed during implementation to minimize the potentially harmful impact of changes such as altered groundwater flow, vibrations or deformations.

Consequences of exceedances

Exceedances of the intervention values fall into two categories. Category 1 covers exceedances that cause damage (groundwater level; settlements in x, y, z; SBR-A and SBR-C vibrations) and Category 2 covers exceedances that only cause nuisance (noise and SBR-B vibrations). If a Category 1 value is exceeded, the work is stopped immediately. In the case of Category 2 violations, Fides Expertise first contacts the project supervisor to determine whether the work must be stopped immediately or whether it can be continued, possibly with the implementation of restrictive measures. In the event of a relevant exceedance, the first contact person is notified. If they cannot be reached, the responsibility regarding the consultation and resulting decisions is immediately and without loss of time transferred to the second contact person. These back-up contacts are all defined per discipline.
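The decision logic described above lends itself to a simple illustration. The sketch below is a simplified, hypothetical rendering of such a protocol, not Fides Expertise's actual system; all names, thresholds and values are invented.

```python
# Illustrative sketch of a two-category exceedance protocol: Category 1
# (potential damage) stops the work immediately, Category 2 (nuisance) triggers
# consultation with the project supervisor. Hypothetical names and values.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    DAMAGE = 1    # e.g. groundwater level, x/y/z settlements, SBR-A and SBR-C vibrations
    NUISANCE = 2  # e.g. noise, SBR-B vibrations


@dataclass
class Reading:
    sensor_id: str
    quantity: str
    value: float
    signal_value: float        # early-warning threshold
    intervention_value: float  # hard limit from the monitoring plan
    category: Category


def decide(reading: Reading) -> str:
    """Return the action prescribed by the (simplified) decision protocol."""
    if reading.value >= reading.intervention_value:
        if reading.category is Category.DAMAGE:
            return "STOP WORK immediately and notify the first contact person"
        return "Contact project supervisor: continue, restrict or stop the work"
    if reading.value >= reading.signal_value:
        return "Signal value reached: start technical consultation"
    return "Within limits: continue monitoring"


# Example: a vibration reading that exceeds its intervention value
print(decide(Reading("V-03", "SBR-A vibration", 6.2, 3.0, 5.0, Category.DAMAGE)))
```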

Separately from the short line of communication between the project supervisor, the contact person at Fides Expertise, the structural engineer and the contact person at the contractor, the project supervisor also informs the building control surveyor of the Municipality of The Hague. The manager of the complex section in question organizes an expert meeting between the structural engineers, Fides Expertise, the building control surveyor and other relevant parties as soon as possible to determine the next steps. Naturally, the competent authority has its own, independent role in this.

Fred Pannekoek explains the link between external and internal measurement points: “It’s remarkable that the external values are also incorporated into the software system. This gives us an extra dataset and greater confidence in the monitoring process.” (Image courtesy: Fides Expertise)

Noise monitoring instrument installed at the Stadhouderspoort, at the southwestern entrance of the Binnenhof.

Exceeding the set limit values has only happened once so far, says Pannekoek: “When the concrete pedestal of the fountain at the Binnenhof had to be removed, a mechanical vibrating drill was used. When we received a report that several measurements had a value outside the permitted values, the work was stopped immediately. Later, the work was resumed with other equipment that caused less vibration. The communication lines and warning system worked perfectly. It was a good test.”

Measuring set-up

The monitoring of the renovation of the Binnenhof, aimed at keeping an eye on vibrations, noise, deformations and groundwater levels, is done by an ingenious system of total stations and prisms, crack meters, measuring bolts, noise, vibration and moisture meters, and groundwater-level meters. It is mainly the set-up of total stations in combination with prisms that meticulously monitors the big picture to check for subsidence or shifts. Eight total stations have been set up: six at the Binnenhof, one in the plenary hall of the Senate and one on the Mauritstoren tower to link outdoor reference points in the software to all measurements at the Binnenhof.

A visualization of how the total stations at the Binnenhof are interconnected within a unified coordinate system.

The total stations measure a total of 327 prisms affixed throughout the Binnenhof complex (and some in the Senate Chamber and outside on buildings at the Buitenhof). Needless to say, the prisms have been fitted so that they do not leave marks when they are removed. At each measurement point, at least two prisms have been attached pointing in different directions. This means they can be approached by two total stations, so that any blockages do not create a gap in the 24/7 measurement series. The Trimble S9 total stations are deliberately complemented with Trimble prisms. Pannekoek believes this benefits the measurement results, which are all extremely precise: “We leave nothing to chance. We measure changes down to tenths of a millimetre.”

A total station forms the link to the outside, with a number of checkpoints on buildings on the Buitenhof bringing all measurements inside, into one coordinate system. All the data comes together in Trimble’s T4D software. The Fides Expertise team set up the T4D software so that all measurements are merged into one coordinate system, with the individual total stations monitoring each other. The Trimble dealer Geometius and a specialist from Trimble helped with some fine-tuning and further optimizations. The monitoring platform, in which the measurement results of the groundwater-level monitoring, crack meters, tilt meters and the weather station are presented, can be accessed with one login, and everything can be compared within the system. “It’s a special achievement to set up and use this software in this way. We are very happy with the help of Geometius and the contacts they have, which enabled a specialist to come over from Germany to help set everything up properly. It was a laborious job for him too, but it worked out,” comments Pannekoek.

Two of the eight total stations currently monitoring the monumental buildings at the Binnenhof around the clock.
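To illustrate the general idea behind merging observations from several instruments into one coordinate system, the sketch below estimates a best-fit rigid transform (rotation plus translation) from prisms whose coordinates are known in both a station's local frame and the project frame. This is a generic least-squares approach shown for illustration only, not Trimble T4D's actual implementation, and all coordinates are made up.

```python
# Illustrative Kabsch/SVD estimation of a rigid transform linking a total
# station's local frame to a common project frame, using shared reference
# prisms. All values are hypothetical.
import numpy as np


def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform: returns R, t such that dst ≈ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t


# Hypothetical reference prisms observed in a station's local frame (metres)
station_xyz = np.array([[12.401, 3.220, 1.551],
                        [25.118, -7.942, 2.010],
                        [8.730, 15.006, 0.982],
                        [30.554, 4.118, 3.447]])

# Simulate the same prisms in the project frame: rotate about the vertical
# axis, translate, and add a little measurement noise (all values invented).
angle = np.deg2rad(37.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([81250.0, 455030.0, 2.35])
rng = np.random.default_rng(1)
project_xyz = station_xyz @ R_true.T + t_true + rng.normal(0, 0.0005, station_xyz.shape)

R, t = rigid_transform(station_xyz, project_xyz)
residuals = project_xyz - (station_xyz @ R.T + t)
print("RMS residual per axis (mm):", np.sqrt((residuals ** 2).mean(axis=0)) * 1000)
```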

Incidentally, the measurements from the moisture, vibration and noise meters are handled in separate software. These measurements are local in nature, measuring one component at one point: moisture, vibration, noise or groundwater level. Pannekoek points out that this is the first time that things like vibrations and changes in a span of the plenary hall of the Senate, for instance, are being tracked: “A building is always moving, and now that we are tracking it so accurately, we record everything. That includes changes due to wind, weather, seasonality and associated dryness or humidity. You have to know how to separate that kind of ‘natural’ change from changes resulting from the interventions now being done in the complex.”

From outside to inside and vice versa

In the vicinity of the Binnenhof building complex, there are other monumental buildings, local residents and invaluable masterpieces. In fact, paintings by Dutch masters from the 17th century, including Johannes Vermeer’s Girl with a Pearl Earring, hang in the Mauritshuis, less than a hundred metres as the crow flies from some of the hard demolition, construction and renovation work. Monitoring by Fides also keeps track of any nuisance and damage to the surrounding area. For this purpose, noise and vibration meters have been installed on and in nearby buildings, for use as baseline reference points to verify the points collected inside the Binnenhof. The connection between outside and inside is thus constantly monitored. The RVB is in constant consultation, not only with local residents but also with the Mauritshuis management, among other things to pick up any complaints and try to remedy them. Very little noise from machinery can be heard on the construction site, as electrical equipment is used. Even Pannekoek himself continues to be amazed by the fact that the construction site is so quiet.

About the author

Durk Haarsma, co-founder and director of strategy & business development at Geomares, the publisher of GIM International, is a seasoned editor and journalist with extensive experience in the geospatial industry. He regularly contributes insightful articles and conducts interviews, leveraging his long-standing involvement in the field.


Maintenance

On a project this prestigious, performance and quality are crucial. Therefore, to avoid the risk of technical issues, the total stations are returned to Geometius every three to five months, depending on the operating circumstances. “Some equipment stands measuring 24 hours a day in all weather conditions. Due to the wind, sand and dust particles get into the axles, making the total stations run less smoothly and increasing the likelihood of wear and tear,” explains the director of Fides Expertise.

Continuation

The Binnenhof renovation project will take several more years, and it is possible that unexpected issues may crop up along the way. In any case, it means that the Binnenhof will remain the workplace of Fred Pannekoek and his crew for the time being. They are not always on site, by the way; much of the data can be read remotely. But whenever there are changes in the set-up of the equipment or developments in the demolition or construction work, a few Fides employees are certainly present in The Hague. The Fides employees – who are not geodesists and surveyors, but monitoring experts – are proud of their work. Pannekoek: “I have zero knowledge of land surveying, but the deployment of total stations traditionally used for that profession is a golden opportunity that is working out very well, partly due to the cooperation with Geometius and Trimble. That we get to do this is great. Because let’s face it, it’s a once-in-a-lifetime experience.”

Meticulous inspections and studies have been conducted in preparation for the necessary foundation reinforcements.

TRIMBLE R980 GNSS SYSTEM

Elevate survey productivity with unmatched performance and Trimble trusted workflows. With a full suite of enhanced connectivity features, the Trimble® R980 GNSS system sets a new benchmark in versatility and enables you to tackle the most demanding GNSS environments to get the job done.

geospatial.trimble.com/r980

© 2024, Trimble Inc. All rights reserved. GEO-235

Leveraging advanced technologies for coastal zone management

Monitoring the Baltic shoreline using airborne Lidar bathymetry

Due to the specifics of the Baltic Sea, performing accurate measurements in the coastal zone is not an easy task. For the past decade, topographic laser scanners have been used for the periodic monitoring of the sea’s southern coast in Poland, in addition to profiles using GNSS RTK receivers. This article outlines the work to verify the feasibility and accuracy of using airborne Lidar bathymetry (ALB), both on the seabed and on land.

The southern coast of the Baltic Sea is characterized by great variability. Due to storms, sandy beaches are becoming wider and wider in some places, while in other places they are simply disappearing. The desire to prevent these effects has driven the need to permanently monitor changes and respond accordingly. Annual monitoring of the state of the sea shore is a statutory obligation, and is the responsibility of the directors of Poland’s Maritime Offices. For almost a decade, the Maritime Office in Szczecin and the Maritime Office in Gdynia have commissioned flights using topographic laser scanners. Topographic scanners are hard to beat when it comes to tracking land changes; they enable the entire coastline to be mapped with high accuracy. Unfortunately, however, the laser pulse in the near-infrared range is not capable of penetrating water. This results in a lack of complete information about the coastal zone. Such information is not provided by height profiles either. Theoretically, a solution to this problem could be the use of an airborne bathymetric scanner. But can the results achieved with this type of measuring equipment meet the expected accuracy in such a difficult body of water as the Baltic Sea?

The challenges of the Baltic Sea

Due to the specifics of the Baltic Sea, performing measurements in the Baltic coastal zone is not a simple task. Each measurement method has its own advantages and limitations, but these become particularly significant in the case of this body of water. For example, because the Baltic Sea is a shelf sea, it is particularly difficult to map its bottom in the coastal zone using multibeam echosounders. Hydrographic survey boats generally have too great a draught, so it is impossible to use them there. On the other hand, the use of a shallow-draught survey boat requires measurements to be made in near-ideal weather conditions; otherwise, the undulations of the Baltic Sea could cause the boat to capsize.

Therefore, the most common method of measuring the coastal zone of the southern Baltic Sea has been to make profiles using a GNSS receiver. The land surveyor’s task is then to make cross-sectional profiles, within which measurements are taken on the dunes, on the beach and in the water – as far out as possible. This technology is undoubtedly sufficient for measurements over small distances, because it is extremely mobile and largely independent of weather conditions. However, the potentially high waves of the Baltic Sea pose a problem.

Low water clarity

Aerial bathymetric scanners are known to be ideal for mapping mountain lakes or clear oceans and seas. In fact, many suppliers boast about their scanner’s ability to reach depths of several tens of metres in clear water. But the Baltic Sea certainly does not fall into this category; it is a body of water with relatively low water clarity.

Figure 1: Comparison of the mapping performance of a multibeam echosounder and ALB. (Source: A guide to bathymetry, UK Hydrographic Office)

An additional factor affecting bathymetric measurements from an aircraft is wave action. The Baltic Sea can be turbulent, and the frequency of waves can be much higher than in oceanic waters. All of this makes taking measurements difficult, but not impossible. The key is to have a good understanding of the environment and constantly keep track of changing conditions in the air and on the water.

Testing the feasibility of ALB

Another factor affecting the popularity of a particular measurement method is its reliability. Technologies that have been known and well established for decades are approached differently than niche methods that are relatively new. Having said that, it should be noted that some first attempts to map the southern Baltic coast using ALB were made as early as 2007. Unfortunately, their results were not commensurate with the cost of acquiring the data at that time. That trend began to reverse in recent years, when public contracts for monitoring selected sections of the Polish coast began to appear. GISPRO SA carried out one of the pilot projects in this regard under an order from the Maritime Office in Szczecin. The purpose of the undertaken work was to verify the feasibility of using ALB on the Baltic coast and, more importantly, to verify the accuracy of measurements using ALB both on the seabed and on land.

About the authors

Grzegorz Szalast is a historian and archaeologist. He is currently the director of sales and business development at GISPRO SA and the owner of ArchService. Over the years, he has been responsible for utilizing TLS, MMS/MLS, UAV and GPR technologies. He is passionate about Lidar technology and close-range photogrammetry. In his spare time, he creates digital twins of monuments.

Marta Sieczkiewicz is currently the president of the Research and Development Center at Gispro Technologies Sp. z o.o. and the director of the Department of Aerial Photogrammetry at GISPRO SA. Her professional activities particularly focus on aerial survey, as well as environmental remote sensing analysis.

Technical conditions and equipment

Profiles were to be taken at least every 200m, from the dune/cliff crest, across the beach and into the water, up to a distance of 500m into the sea (from the shoreline). Measurements in each profile were to be taken every 5-10m and at characteristic locations. The measurement process was to be continuous, using a GNSS RTK receiver on land and in shallow water, and a singlebeam echosounder in deeper water. Aerial scanning was to be performed for the entire 10.4km stretch of the coastline, both for the land surface and the bottom, with a density of no worse than 6pts/m2 for the last return.

A RIEGL VQ-880-G II was chosen as the airborne laser scanner. It emits laser pulses in the green (532nm) and near-infrared (1,064nm) ranges, has integrated PhaseOne XM-100 cameras, and is also equipped with an advanced Trimble Applanix AV-610 GNSS positioning system with a high-end IMU-57 inertial unit. The SonarMite BTX singlebeam echosounder (SBES), integrated with a Trimble R8 GNSS RTK receiver, was installed on a lightweight survey boat. A Trimble R12 GNSS RTK receiver was used to measure profiles on land and in the shallow water.

Measurement campaign and products

The measurements were taken on 30 October 2022 under good weather conditions. The sky was overcast, but there was no precipitation. The wind blew at a speed of only 1m/s. Bathymetric laser scanning was performed from an altitude of 530m AGL at a speed of 100kn. The result was a point cloud with a minimum density of 6.42pts/m2.

Measurements using the SBES were made after calibration using a calibration bar. After sound velocity profiling (SVP), the average velocity in water was determined to be 1,465m/s. After data processing, photo sketches were prepared showing the profiles taken with the GNSS RTK receiver and SBES, along with RGB and CIR orthophotos and a georeferenced point cloud covering the coastal area and the underwater part.
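As a brief aside, the echosounder converts two-way acoustic travel time to depth using the average sound velocity from the SVP. The short sketch below illustrates the arithmetic with a hypothetical travel time; only the 1,465m/s value comes from the survey described here.

    # Python sketch: converting a singlebeam echosounder two-way travel time
    # to depth using the average sound velocity obtained from the SVP.
    v_water = 1465.0      # m/s, average sound velocity reported above
    t_two_way = 0.004     # s, hypothetical two-way travel time (example only)

    depth = v_water * t_two_way / 2
    print(f"depth = {depth:.2f} m")   # about 2.93 m

    # A 10 m/s error in the assumed velocity would shift this depth by only
    # about 2 cm, illustrating why a single averaged velocity can suffice
    # in shallow water.
    print(f"velocity sensitivity = {10.0 * t_two_way / 2:.3f} m")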

Figure 2: Average annual Secchi disc depths based on data from 1990-2005. (Image courtesy: Lindgren and Håkanson, 2007)
Figure 3: ALB point cloud with GNSS RTK-SBES elevation profiles. (Image courtesy: GISPRO SA)

Evaluating ALB accuracy

As part of the work carried out, the accuracy of measurements taken by the various techniques was verified. To this end, after classifying the point cloud from the bathymetric scanner, a digital terrain model and a digital bottom model were made. Points measured with the GNSS receiver on land, and with the GNSS receiver working together with the singlebeam echosounder in the water, were then projected onto these surfaces. In this way, the height coordinates of the same points were obtained using different measurement technologies, making it possible to compare the values obtained (see Table 1).

1. Singlebeam echosounder vs. bathymetric laser scanning: number of samples 1,811; mean ΔH 0.059m; standard deviation ΔH 0.048m

2. GNSS RTK on land vs. bathymetric laser scanning: number of samples 127; mean ΔH 0.072m; standard deviation ΔH 0.063m

3. GNSS RTK in water vs. bathymetric laser scanning: number of samples 128; mean ΔH 0.064m; standard deviation ΔH 0.022m

Table 1: Comparison of the values obtained using different measurement technologies.
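For readers who want to reproduce this kind of comparison with their own data, the minimal Python sketch below computes the mean and standard deviation of ΔH between surface heights interpolated from an ALB point cloud and independent check measurements. The numbers in the example are invented and are not the project data from Table 1.

    import numpy as np

    def delta_h_stats(h_alb, h_check):
        """Mean and standard deviation of the height differences between an
        ALB-derived surface (sampled at the check points) and independent
        check measurements (GNSS RTK or SBES)."""
        dh = np.asarray(h_alb) - np.asarray(h_check)
        return float(np.mean(dh)), float(np.std(dh, ddof=1))

    # Invented example values, heights in metres.
    h_alb = [1.02, 0.55, -1.87, -2.40, 0.12]
    h_check = [0.95, 0.50, -1.93, -2.46, 0.05]

    mean_dh, std_dh = delta_h_stats(h_alb, h_check)
    print(f"mean ΔH = {mean_dh:.3f} m, standard deviation ΔH = {std_dh:.3f} m")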

Conclusions

It is particularly challenging to take bathymetric measurements in shallow water because the use of traditional surveying techniques poses many problems. The shallow depth essentially eliminates the point of using a multibeam echosounder: the scanning swath would be small, and the echosounder itself could be damaged by entering a shallow area or by striking an object lying on the bottom. Therefore, the only remaining alternative is to make profiles using a singlebeam echosounder and a GNSS RTK receiver. Unfortunately, this method delivers very poor productivity, and its results are limited to depth information within the profiles. In the test area, the land-water profiles required around 40 person-hours of work, whereas acquiring the ALB data took about one hour of flight time.

Figure 4: The wreck of the concrete ship Karl Finsterwalder, captured in the ALB data. (Image courtesy: GISPRO SA)

Further reading

Average annual Secchi depths: https://rb.gy/6pbf7z

The major drawback of ALB technology is its dependence on suitable atmospheric and environmental conditions: cloud cover, wave action, wind direction and strength, and water clarity are all important. It is crucial to conduct the recording at the moment of least contamination of the water with organic and inorganic matter above the bottom. Nevertheless, bathymetric laser scanning was demonstrated to obtain the most complete image of the shallow-water area in a relatively short time. This made it possible to create a reliable model of the bottom containing complete information. As a result, it is easier not only to identify objects lying in the water, but also to track changes resulting from the activity of the sea.

The positive results obtained from the pilot project undoubtedly contributed to the tenders for monitoring the entire Baltic coast with airborne Lidar bathymetry. As a result, GISPRO SA has already had the opportunity to map the entire coastline for both the Maritime Office in Szczecin (in 2023) and the Maritime Office in Gdynia (in 2024). During the work carried out, in addition to satisfactory mapping of coastal geometry and the seabed, it was even possible to scan some wrecks. One example is the wreck of the concrete ship Karl Finsterwalder off the coast of Wolin Island. The continuation of the work related to the monitoring of the Polish coast in the coming years may bring many interesting conclusions and significantly improve the quality of protection of the Baltic coastline.

Acknowledgements

These works were carried out using equipment and knowledge obtained during the ‘Research and development works on the creation of a complete, multimodal mapping system for the needs of inland and sea waterways and exploitation areas’ project, number POIR.01.01.01-00-1372/19, funded by The National Centre for Research and Development, Poland.

From proof of concept to embedding in the NSDI

3DBAG: automatically generated 3D models of 10 million buildings

For a long time, it had been an outstanding research problem, but 3DBAG has turned the large-scale automatic reconstruction of detailed 3D building models into reality. Developed within the 3D Geoinformation research group at TU Delft, 3DBAG is an open and freely accessible data ecosystem – not just a geospatial dataset, but also a web viewer and dissemination platform offering various data formats and web services to maximize accessibility and usability – containing periodically updated models of the buildings in the whole of the Netherlands (approximately 10 million in total) at multiple levels of detail. As a valuable cornerstone for urban planning and engineering, 3DBAG is already widely used by government organizations, businesses and academia for multiple applications. Therefore, work is also underway to explore how to embed 3DBAG in the national spatial data infrastructure (NSDI).

The development of the ‘Detailed 3D Building models Automatically Generated for very large areas’ (3DBAG) stemmed from a research question within the 3D Geoinformation research group at the Delft University of Technology (TU Delft): Was it even possible to generate three-dimensional building models at a high level of detail (LoD) – i.e. at LoD2.2, including roof shapes – for the whole of the Netherlands, in a fully automatic way?

This question was closely aligned with the research group’s aim: to stimulate the use of 3D geoinformation in practice. “By offering large-scale, low-cost, highly detailed and up-to-date three-dimensional building models we can significantly reduce the need for manual modelling and interpretation of spatial data. This helps such data to become mainstream so that it can be used in simulations and applications to analyse the environment, maintain urban areas and improve the efficiency of urban processes,” states Professor Dr Jantien Stoter, chair of the 3D Geoinformation Research Group, head of the Section Urban Data Science at TU Delft, and also a researcher at Kadaster and working for Geonovum. “Automatically reconstructing LoD2 models for large areas has long been an outstanding research problem, but the availability of high-quality elevation data for the whole country, and the technical know-how in our research group, put us in a unique position to tackle this problem.”

Quality issues with available datasets

Around 2018, the research group started out by building a national dataset of simple block models. “Although simple 3D block models of buildings in the Netherlands were not difficult to generate, and several datasets were already available at that time, the country lacked a good standard source for them. And standardization is important, because no two block models are alike. For example, what does ‘height’ represent – the highest point, or the average height?” says Dr Ravi Peters, who worked alongside Balázs Dukai as one of the lead developers on the research project.

“Moreover, it was not always clear how the available datasets had been generated. This raised quality issues; it’s not ideal to be using data of which the quality or reconstruction decisions are unknown, especially if it’s for an important government project,” continues Peters. “So we started out by focusing on the automatic generation of simple (LoD1) block models with several well-explained representative heights, from which users could choose the best option for their application.”

Increasing level of detail

Since that first version, the dataset itself, the associated website viewer and the underlying software have all continuously been developed into the version that is available today. “In parallel to the first basic version of the 3DBAG, we started working on a project with noise experts from the National Institute for Public Health and the Environment (RIVM). The aim was to explore whether we could automatically generate detailed building models as required for noise propagation modelling in the Netherlands,” recalls Stoter.

Peters adds: “Several ideas were investigated, and eventually we developed a method that could automatically generate LoD1.3 building models. These were still block models, but now each building could be split into several parts, each with its own height level; think of a church with a tower, or a house with a shed attached. We then discovered that, conceptually, our method made it a relatively small step from LoD1.3 to the even more detailed LoD2.2. This work on the 3D Noise dataset eventually led to a completely new version of 3DBAG, with LoD1.3 and LoD2.2 models.”

Figure 1: 3DBAG is an open and freely accessible data ecosystem containing periodically updated models of all the buildings in the Netherlands at multiple levels of detail. (Image courtesy: Ravi Peters)

“This was the first time that such highly detailed LoD2.2 building models were made available on such a large scale. Datasets with detailed building models were available at that time – around 2020 – but on a much smaller scale; a single city was pretty much the extent of it. Not to mention that building those datasets involved a lot of manual work, so it was both time-consuming and cost-intensive,” he continues. According to him, the process of releasing the LoD2.2 version of the 3DBAG was particularly labour-intensive, involving a team of four people working for several months in the run-up to the first release of 3DBAG with LoD2.2 in March 2021.

Figure 2: After initially focusing on the automatic generation of simple (LoD1) block models, the team subsequently developed automatically generated LoD1.3 block models. They then discovered that, conceptually, it was a relatively small step from LoD1.3 to LoD2.2. (Image courtesy: Balázs Dukai)

Data sources

For automated reconstruction, 3DBAG needs high-quality classified Lidar data at a density of at least 8pts/m2, along with good data consistency and matching building polygons. The two main sources are the Dutch national register of addresses and buildings (Basisregistratie Adressen en Gebouwen/BAG), which includes building polygons, and the height data from the official Current Dutch Elevation map (Actueel Hoogtebestand Nederland/AHN) that is based on aerial Lidar point-cloud data. “The Lidar data is currently only updated once every three to five years, so we are also experimenting with photogrammetric point clouds generated by dense image matching. The results are not yet as good as Lidar-based models. Therefore, we are further investigating how to improve these reconstruction results, such as by extracting the rooflines from the generated true orthophotos instead of from the generated point clouds, and combining them with the point clouds generated from the images in the 3D reconstruction process,” states Stoter.

“It’s important to recognize that 3DBAG in its current form would not have been possible without the high-quality geoinformation that is collected and maintained for the whole country by the Dutch government and made available as open data. I believe that the Netherlands is unique in providing such easy access to so many geospatial datasets without any restrictions, and it was tremendously useful during the development of our automatic building reconstruction algorithm. The fact that the data sources have an open licence also allows us to offer the 3DBAG as open data, ensuring that our solution is freely available and easy to use,” she continues.

Aimed at practitioners

Right from the start, the developers have aimed to tailor 3DBAG to users’ needs.

“Besides providing different levels of detail, we’ve added attributes that appeared to be useful for users, such as building volume, party-wall areas, and roof, wall and floor areas. We developed these energy-relevant attributes for buildings in a project together with the Netherlands Enterprise Agency (RVO),” comments Stoter. One of the added attributes relates to the data quality. “Even though we work with high-quality source data and strive to generate valid models, around 0.5% of the buildings in 3DBAG still have issues – for example, due to problems with the input data, which is out of our control. Therefore, we run our 3D models through val3dity, a tool also developed in the 3D Geoinformation research group for the validation of 3D primitives according to the international standard ISO 19107, so that users don’t need to validate the 3D data themselves. We then include the validation outcome as an attribute. That way, users are able to see at a glance whether they can use the data about that particular building with confidence or not,” Peters adds.

The team also realized it was important to make the data easily accessible, in order to encourage practitioners to try using it for their applications.

Examples of how 3DBAG data is being used in practice

3DBAG data has been integrated by government organizations, businesses and academia as input and a building block for all kinds of real-life applications. Some of the known applications include:

• Estimating energy demand and consumption in buildings

• Calculating renovation and retrofitting costs, and checking insurance or subsidy claims

• Finding suitable roofs for solar panels

• Simulating the wind flow and pollutant dispersion in urban areas based on computational fluid dynamics (CFD)

• Calculating heat stress and noise pollution (e.g. 3D Noise) in urban areas

• Identifying development potential for rooftops

• Predicting and assessing the impact on buildings in case of subsidence or earthquakes

• Analysing the urban structure and evaluating new housing developments

• Digital twins and national 3D maps, e.g. 3dkaartvannederland.nl, netherlands3d.eu and 3dtilesnederland.nl

For more examples, see docs.3dbag.nl/en/overview/media

According to Peters: “We developed a 3D web viewer largely from scratch. We also make the data available in formats that people are comfortable with, such as GeoPackage, Wavefront OBJ, WMS and WFS. There is now also an online API that serves CityJSON, which is our primary distribution format.” CityJSON is an OGC community standard that was also developed within the 3D Geoinformation research group. “With 3DBAG, we became heavy users of it ourselves, which now helps us to make further improvements to CityJSON,” he explains.
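As an illustration of how accessible the format is, the sketch below inspects a locally downloaded 3DBAG tile in CityJSON using only Python’s standard library. The filename and the printed attributes are placeholders; the exact attribute names vary per 3DBAG release.

    import json

    # Hypothetical local copy of a 3DBAG tile in CityJSON format;
    # the filename is a placeholder for whatever tile you download.
    with open("3dbag_tile.city.json", encoding="utf-8") as f:
        cj = json.load(f)

    print("CityJSON version:", cj.get("version"))

    # CityJSON stores features in a dictionary of city objects.
    buildings = {
        obj_id: obj
        for obj_id, obj in cj["CityObjects"].items()
        if obj["type"] in ("Building", "BuildingPart")
    }
    print("Building objects in this tile:", len(buildings))

    # Print the attributes of a few buildings, e.g. representative heights or
    # the validation flag mentioned in the article (names differ per release).
    for obj_id, obj in list(buildings.items())[:3]:
        print(obj_id, obj.get("attributes", {}))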

Dealing with exceptions

Needless to say, the development project was not without other challenges. “It definitely wasn’t easy to scale up our pipeline to work for the whole of the Netherlands. When you’re processing 10 million buildings in total, you are guaranteed to find some exceptional cases that break your algorithms,” states Peters. For example, the team quickly discovered that the large number of commercial greenhouses in the Netherlands created the need for extra attention and modifications to the building reconstruction algorithm, he recalls: “Greenhouses have a very large footprint and a simple shape, but it is difficult to capture the height consistently with Lidar because the laser often penetrates their glass roofs, resulting in large areas without points on the roof in the AHN. This not only led to poor reconstruction results but also turned out to slow the algorithm down significantly. To avoid these issues, we fall back to a simplified 3D model for such greenhouses in order to generate usable models.” In the case of other exceptions, such as underground structures that are included in the building polygons or one building that is ‘floating’ above another, the team ensured that these cases are handled during the reconstruction process (see Figures 3 and 4).

Figure 3: The large number of commercial greenhouses in the Netherlands created the need for extra attention and modifications to the building reconstruction algorithm. Top: AHN3 ground and building class. Middle: Heightfield. Bottom: Reconstruction result. (Image courtesy: Ravi Peters)

Updates to source data

Another challenge related to the impact of updates to the source data. “3DBAG was initially based on AHN3. When AHN4 was released, we thought it made sense to use that dataset because it would provide the most up-to-date information,” says Peters. “However, we discovered that the new version of AHN had some different properties, and they were not always better for modelling buildings. For a small yet significant number of buildings, AHN4 turned out to have poor data consistency.” This resulted in data gaps which had a negative impact on the reconstruction result (see Figure 6). The team subsequently spent a few months investigating how to get the best out of both AHN versions, including in projects with the City of Amsterdam and RIVM. “One idea was to use deep learning to improve the AHN4 point cloud, and another was to use point cloud fusion. Eventually, it was decided to adapt the algorithm to automatically check for data gaps in the new dataset and fall back on the older version if an issue is detected, on the condition that the building had not been changed since the older version. This was considered the most maintainable and efficient solution in terms of user relevance and feasible reconstruction times,” continues Peters.
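To make that fallback rule tangible, here is a minimal sketch of the kind of per-building decision described above. The field names, coverage metric and threshold are hypothetical; the real 3DBAG pipeline differs in detail.

    from dataclasses import dataclass

    @dataclass
    class BuildingPointStats:
        """Per-building point-cloud statistics; all field names are hypothetical."""
        ahn4_roof_coverage: float   # fraction of the roof covered by AHN4 points
        ahn3_roof_coverage: float   # same measure for AHN3
        changed_since_ahn3: bool    # e.g. derived from BAG mutation dates

    def select_point_cloud(b: BuildingPointStats, min_coverage: float = 0.7) -> str:
        """Prefer AHN4, but fall back to AHN3 when AHN4 shows data gaps and the
        building has not changed since the AHN3 acquisition."""
        if b.ahn4_roof_coverage >= min_coverage:
            return "AHN4"
        if not b.changed_since_ahn3 and b.ahn3_roof_coverage >= min_coverage:
            return "AHN3"
        return "AHN4"  # no better option; reconstruct from AHN4 and flag the quality

    print(select_point_cloud(BuildingPointStats(0.4, 0.9, changed_since_ahn3=False)))  # AHN3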

User relevance has always been a strong factor in the 3DBAG project, and the interplay between the data experts in the research group and practitioners in the field has been key to understanding and aligning with their needs. “For example, when developing the 3D noise data, our focus was initially on LoD2. However, after starting to work with noise professionals, we realized that their simulation software wasn’t actually suitable to handle the highest level of detail; a lower level of detail, namely LoD1.3 containing noise-specific details, was the right solution in this case. That’s the kind of thing that you only learn through intensive collaboration with domain experts,” explains Stoter.

Spinoff to support accessibility

After their contracts as researchers ended, the two lead developers of 3DBAG founded the company 3DGI as a spinoff. This enabled them to fully devote their time to ensuring the accessibility of 3DBAG beyond the scope of academic research projects. This includes offering consultancy and software development for government organizations and commercial companies who need a little extra help with utilizing or adapting the data from 3DBAG.

Figure 4: Exceptions, such as underground structures that are included in the building polygons, are handled during the reconstruction process. Top: AHN3 ground and building class. Middle: BAG footprint. Bottom: Reconstruction result with ground part removed from output. (Image courtesy: Ravi Peters)

“We still collaborate closely with the 3D Geoinformation research group on 3DBAG,” says Peters. “Having done most of the development work, we are happy to share our ideas for new functionalities or improvements, and we enjoy giving something back to the community.”

Stoter confirms this: “3DBAG is still actively used, investigated, and further developed in research projects within our research group, driven by users’ and organizations’ needs. For example, we have explored adding new features to the reconstruction models, such as textures, doors and windows. Many of our MSc students also investigate innovations for the 3DBAG. If those results are good, they are included in the next version of the 3DBAG, such as the estimation of the number of floors inside each building based on machine learning.”

Open data

Besides taking account of practitioners’ needs, another reason for the wide uptake of the data is the availability of 3DBAG as open data. “For this data to truly make a difference in practice, such as by contributing to the energy transition, the climate adaptation strategy or accelerating the planning, design and construction process of new housing, the availability of 3DBAG as open data is crucial,” she continues. “This encourages and accelerates the adoption of 3D data in practice. Open 3D data covering a whole country stimulates further innovations, including in new domains. And that is where the strength lies: in providing reliable, fit-for-purpose, automatically generated data on the whole of the Netherlands, accessible without any restrictions, for anyone who wants it.”

“3DBAG ensures that essential information is available to all, driving real-world impact without sole reliance on a specific commercial solution. The fact that it is already being used for so many different applications (see box, Ed.) and we’re receiving so many positive reactions shows that there is demand for this. At the same time, we are happy to see companies embedding 3DBAG in their enriched data products, leveraging the use of 3DBAG’s free data even further,” says Stoter.

Platform for future innovation

However, free data does not mean that there are no costs associated with 3DBAG. With this in mind, one of the focus areas is now also on ensuring the necessary funding to safeguard the continuity of 3DBAG. “Initially, it was funded by research projects that required proof of scientific concepts, such as a significant grant awarded by the European Research Council on urban modelling in higher dimensions. But 3DBAG has now grown beyond this status into a frequently used data source for practitioners. More structural funding is required to keep 3DBAG available as a solid, reliable data source and to support its maintenance and further development, independent of individual research projects. As part of this, we are also working to achieve the level of robustness that is required for long-term use and further development and maintenance,” comments Peters. These software improvements, carried out in a collaboration between the 3D Geoinformation research group and 3DGI, are financed by Kadaster via the ‘Working on Implementation’ (WaU) government funding programme. “This one-time investment will take the software from research software to a status in which it can be maintained and reused. Besides this, we are in the process of setting up the 3DBAG Innovation Platform to ensure the maintenance and further innovation of 3DBAG. The platform is coordinated by Geonovum, and is an initiative with six other partners: RVO, RIVM, the City of Amsterdam, the City of Utrecht, Kadaster and Waterschapshuis. This collaboration has provided some seed funding for setting up the platform. Explorations are still ongoing for more structural funding within a national context,” comments Stoter. “One ambition is to include the stable versions of 3DBAG in the 3D data products of Kadaster and to give it a clear role in the Zicht op Nederland Data Fundament (data foundation for the Netherlands). 3DBAG is automatically generated and therefore different from other datasets considered as base data. However, being a solid and reliable dataset, used by many organizations, it is useful to further investigate how such datasets can become part of the NSDI,” she states.

Figure 6: For a small yet significant number of buildings, AHN4 was not better for modelling buildings due to data gaps in the roof structure, possibly caused by occlusion effects during acquisition. (Image courtesy: Ravi Peters)

Figure 5: 3DBAG data has been integrated by government organizations, businesses and academia as input and a building block for all kinds of real-life applications. (Image courtesy: Ravi Peters)

About the author

Lynn Radford is contributing editor and copy-editor at GIM International. After working in international sales and marketing for around 15 years, including at publisher Reed Business, she became a freelance translator, editor and copywriter in 2009. She now writes for print and online media in various sectors, including technology, supply chain, the food industry and horticulture.

Meanwhile, Peters is continuing to work on the software. “This solution was developed with the Dutch source data in mind, which meant that some of the underlying data properties became assumptions for our software to work with. We are slowly eliminating those assumptions to allow the underlying 3DBAG data pipeline to be used by other developers internationally, including the possibility of using other data sources. Several public and private organizations from abroad have shown interest in the 3DBAG software, and some are testing it with their own data. We hope that our work will encourage and inspire further innovation for many more applications around the world in the future,” he concludes.

5 questions to...

Moritz Lauwiner

The geospatial industry and the land surveying profession are in the midst of significant technological transformations. We asked Moritz Lauwiner, executive vice president and president, surveying solutions/geomatics, at Hexagon’s Geosystems division, about the trends and innovations shaping the geospatial future. Here, he provides his perspectives on technologies including mobile mapping, autonomous surveying systems and artificial intelligence (AI).

In your opinion, which technological advancements currently have the most profound impact on our sector?

I am excited about every innovation that makes surveyors’ jobs safer, easier and more efficient. Take modern mobile mapping systems, for example. They now record data autonomously, process it automatically and create an accurate estimate in real time – all while the surveyor is kept off the roadside. I am even more excited if surveyors – and others – use these technological advances to overhaul their workflows and innovate their services based on a deep understanding of their clients’ needs. I believe that’s the magic triangle that drives transformation in the industry: people, processes and technology. We also see combinations of technologies drive progress, e.g. imaging in GNSS, tilt in GNSS and for total station poles, and, of course, autonomy. The increasing impact of autonomous solutions means that the day-to-day role of the surveyor is becoming more data-focused. Surveying businesses that more readily adapt to this change are the ones that excel and grow their business. Importantly, surveying businesses that adopt the newest technologies are often the ones that find it easiest to recruit and retain talent, because people like using the best tools.

Which AI-driven innovations are poised to have the most significant impact on the field of surveying?

Creative AI applications are streamlining complex tasks like processing data from ground-penetrating radar (GPR) with IDS GeoRadar’s AiMaps. Interpretation of such data used to require highly specialized experts. Now, automated analysis and visualization enable a broader range of professionals to understand and utilize the information. The Leica Pegasus TRK includes an AI-enhanced camera system that provides automatic camera calibration, which saves setup time, reduces the skill level required for operation, and enables real-time blurring of people and vehicles to ensure privacy. Thanks to artificial intelligence, vast amounts of geospatial data can be analysed with speed and precision unattainable through other methods. Each AI advancement opens up new insights and applications across various industries. These advancements continue to enhance the accuracy and efficiency of geospatial data processing and pave the way for real-time data analysis.

As the geospatial industry shifts toward autonomy, what challenges does Hexagon face in developing autonomous surveying systems, and how are you addressing them?

One of the biggest challenges is data integration. Autonomous systems – like handheld scanners – need to integrate various data sources, including Lidar, GNSS and photogrammetry. That’s why we constantly develop advanced algorithms and invest in AI to enable real-time data integration. Another challenge is ensuring that autonomous surveying tools work seamlessly in all environments. So, we’ve had to look at innovative ways to develop traditional surveying equipment to reduce the need for manual adjustments. For example, taking the survey pole and allowing for tilt and height compensation has enabled more efficient and error-free data collection. Ultimately, many of the challenges associated with developing autonomous solutions require more than one company to solve them. That’s why we partner with other industry titans. One example is our Leica BLK2GO PULSE, which uses Sony’s advanced time-of-flight (ToF) image sensors and Leica Geosystems’ GrandSLAM technology. This gives users colourized 3D point clouds on their smartphone screen in real time as they walk around with the scanner.

The autonomously flying laser scanner Leica BLK2FLY makes scanning exteriors easier for Danish company R3DA.

In an era where real-time, high-precision geospatial data is crucial, especially in dynamic settings such as smart cities, what role does Hexagon foresee for mobile mapping in shaping this future?

Mobile mapping technologies are essential for developing smart cities, offering efficient and cost-effective geospatial data collection for above- and below-ground infrastructure. Modern mobile mapping sensors now offer accuracy comparable to traditional surveying tools, eliminating the need for trade-offs. Mobile mapping is also non-intrusive, capturing data from a distance while safeguarding residents’ privacy, with AI instantly recognizing and blurring people and vehicles. AI can also transform post-processing by automatically categorizing the point cloud into distinct classes, such as roads, buildings, vegetation and more. These advancements make smart city projects more accessible to smaller surveying businesses, delivering quick return on investment. One UK-based geospatial company, Severn Partnership, told us that investing in advanced mobile mapping technologies elevated its market standing, diversified its clientele and expanded project opportunities beyond traditional surveying.

Moritz Lauwiner is executive vice president and president, surveying solutions/geomatics, at Hexagon’s Geosystems division. In this role, he oversees research and development, business strategy, and overall management of the market-leading offering of total stations, GNSS, monitoring solutions and related surveying software. Lauwiner joined the company in 1994.

How does Hexagon envision the surveyor’s role evolving with the increased automation and digitization in the geospatial industry?

As automation and digitization advance, the surveyor’s role is changing to that of a data manager. For example, mobile mapping requires not only a deep understanding of above-ground, ground-level and underground mapping, but also the ability to integrate data from various sources. In my experience, the surveying businesses that thrive are the ones that constantly evolve. These businesses unlock new opportunities by diversifying their services, streamlining workflows and embracing emerging technologies. This opens up more contract opportunities and provides surveyors with a broader range of methods to tackle each one, ultimately making their work safer and more efficient. One example is R3DA, a Danish 3D architecture and BIM specialist. To scan the exterior of buildings, they used to attach a terrestrial laser scanner to a pole and hang it out the window. It did the job, but it wasn’t the fastest option. After recently investing in the autonomous flying Lidar scanner Leica BLK2FLY, they can now complete two jobs in one day. I’ll be the first to praise the traditional skills of any surveyor using tried-and-tested equipment, and I respect the resourcefulness of generations of geospatial specialists using existing technology to solve challenging tasks. But today, given the increasing demand for geospatial services and the talent shortage, everyone needs to work smarter, not harder. Embracing new technology, continuously developing your skillset and diversifying your services are essential in today’s fast-evolving market.

The Leica BLK2GO PULSE is a first-person handheld laser scanner that gives users colourized 3D point clouds on their smartphone screens as they walk.

5 questions to...

Christiane Salbach

As the geospatial industry undergoes rapid transformation, this year’s Intergeo, taking place from 24-26 September in Stuttgart, is set to spotlight the most critical trends driving the field forward. Christiane Salbach, managing director of DVW GmbH, offers her perspective on how Intergeo is meeting the evolving needs of the industry, encouraging impactful collaborations and equipping the next generation of professionals to tackle future challenges.

Which key geospatial trends and technological innovations are expected to take centre stage at this year’s event?

This year’s Intergeo will highlight several groundbreaking trends. Artificial intelligence (AI) is revolutionizing geospatial data analysis by identifying patterns and anomalies faster than ever before. Cloud solutions provide flexible access to extensive databases and computing power, simplifying the processing of large datasets. Additionally, enhanced graphics processors are advancing the visualization and modelling of complex geospatial data. These technologies are essential for tackling global challenges such as the climate crisis, sustainable planning and construction, and urbanization. Particularly exciting is the integration of new sensors and technologies, which enable even more detailed environmental data collection.

To what extent is Intergeo contributing to addressing the planet’s most pressing challenges?

Intergeo traditionally positions itself as a global platform for all geospatial applications. The collection, analysis and visualization of Earth observation data are increasingly crucial for monitoring climate change and securing resources. By bringing together experts from various disciplines and regions, the event promotes knowledge exchange and the development of innovative solutions. Workshops, panel discussions and specialized forums enable participants to initiate groundbreaking practical geospatial collaborations and build networks that extend beyond the event. The focus this year is on projects that advance the use of geospatial data for disaster preparedness and environmental protection. Additionally, knowledge transfer within Germany’s federal surveying administrations empowers experts to develop sustainable solutions and forge forward-looking partnerships.

Christiane Salbach serves as managing director of DVW GmbH. In this role, she is responsible for the organization and management of Intergeo with a focus on the implementation of the congress. Previously, she was responsible for the organization and management of the association’s general activities, including in coordination with the 13 German national associations.

What are you doing to ensure that Intergeo best supports the evolving needs of the industry?

The Intergeo agenda is continuously adapted to the evolving needs of the geospatial industry by focusing on current topics and technologies. This includes giving significant attention to new trends like AI, cloud computing and the latest visualization technologies, along with their applications, both in the conference programme and on the Expo stages. Participant and exhibitor feedback is systematically evaluated to continually improve the offerings. The event also employs flexible formats, allowing attendees to tailor their learning and networking experiences. Through this dynamic approach, Intergeo remains a relevant hub for the industry.

In what way will the event address the educational needs of the next generation of geospatial professionals?

At Intergeo, we place strong emphasis on fostering the next generation of geospatial professionals. New technologies and their applications are showcased to prepare young professionals for future challenges. The integration of educational partners and universities into the event also supports the exchange between academia and industry. Through targeted initiatives like the Intergeo School Day, when groups are taken on guided tours of the event, we provide students with easy access to the world of geoinformation. Young talents receive age-appropriate demonstrations and can gain their first hands-on experiences through interactive demos.

How do you envisage Intergeo having an even bigger impact in the future?

A key wish for the future of Intergeo is to contribute to greater recognition of geospatial data in politics and society. Intergeo focuses on the collection and diverse applications of geospatial data, which is crucial for property security, internal security and protecting critical infrastructure. Beyond these areas, Earth observation data, especially when combined with AI, can be a game-changer for agriculture, global food security and addressing climate-induced changes in urban and rural areas. Geospatial data forms the foundation for understanding and efficiently managing our environment and resources. With stronger collaboration between politics, business and society, Intergeo could play an even greater role in developing and implementing innovative solutions to the most pressing challenges of our time.

Advancing sustainable geospatial artificial intelligence

Launch of the GeoAI Research Center

Ghent University in Belgium has launched an international research centre with the aim of addressing geospatial problems by improving the spatial reasoning and analysis capabilities of artificial intelligence (AI) models. The GeoAI Research Center – part of the Faculty of Sciences, Department of Geography – is an interdisciplinary community with a sharp eye on balancing the benefits with the need for ethical considerations and security measures.

In a nutshell, the GeoAI Research Center in Ghent is about enabling machines to process and reason about geospatial data with capabilities that surpass human limitations. Nico Van de Weghe, a Professor in GIScience & GeoAI and director of the centre, explains: “AI still lacks the intuitive and contextual understanding inherent to human reasoning. For example, although computers currently perform many tasks faster than humans, they do not always do so as efficiently or intelligently as the human brain, which uses minimal energy for tasks like object recognition. The task ahead involves leveraging knowledge-based and data-driven AI approaches to tackle geospatial problems, allowing humans to focus on informed decision-making.”

Hybrid methodological approach

The GeoAI Research Center differentiates itself through several aspects. First and foremost, unlike many existing geoAI initiatives that focus solely on either knowledge-based or data-driven approaches, the centre emphasizes the development and application of hybrid models. Data-driven AI relies on analysing large volumes of data to identify patterns and make predictions using machine learning and statistical models. For example, it uses deep learning to classify satellite images based on pixel data. In contrast, knowledge-based AI uses structured information, rules and logical reasoning derived from domain knowledge, employing ontologies, knowledge graphs and expert systems. An example of this is applying geographical rules to infer land use changes based on predefined criteria. “We chose a hybrid approach because combining the strengths of both achieves more robust, accurate and contextually aware insights,” says Tim Van de Voorde, co-director and a Professor in Remote Sensing. For example, a geoAI system that uses machine learning to analyse spatial patterns from satellite images (data-driven) and applies predefined rules to classify land use based on those patterns (knowledge-based) would be considered a hybrid AI system. “The hybrid approach not only enhances the system’s accuracy, but also its explainability and reliability in decision-making. The integration improves model performance and transferability, and also training efficiency by reducing the required amount of training data,” he continues.

An example of a research project pipeline at the GeoAI Research Center. In this case, ‘Depth Estimation’, ‘3D Object Models Extraction and Evaluation’ and ‘Interaction in VR Environments’ are involved.
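To make the hybrid idea concrete, the sketch below pairs a stand-in for a data-driven classifier with one knowledge-based rule. The class names, the flood-zone rule and the thresholds are illustrative assumptions, not the centre’s actual models.

    import random

    # Stand-in for a trained data-driven model: returns a land-cover label and a
    # confidence score for one parcel (a real system would use deep learning).
    def ml_predict(parcel_features):
        label = random.choice(["water", "vegetation", "built-up"])
        return label, random.uniform(0.5, 1.0)

    # Knowledge-based step: a domain rule refines or overrides the ML prediction.
    def apply_rules(label, confidence, parcel):
        # Illustrative rule: low-lying parcels inside a registered flood zone are
        # relabelled as 'water' whenever the ML confidence is low.
        if parcel["in_flood_zone"] and parcel["elevation_m"] < 0.5 and confidence < 0.7:
            return "water"
        return label

    parcel = {"in_flood_zone": True, "elevation_m": 0.2}
    label, conf = ml_predict(parcel_features=None)
    print("data-driven:", label, f"({conf:.2f})", "-> hybrid:", apply_rules(label, conf, parcel))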

Scope for cooperation

The centre’s scope for international cooperation is very broad. “A diverse collaboration is essential to advance geoAI technologies and their applications effectively,” emphasizes Haosheng Huang, who is also co-director of the GeoAI Research Center, in addition to being a Professor in GIScience & Cartography as well as vice-president of the International Cartographic Association (ICA). “At the international level, we plan to bring together researchers, industries, NGOs and governmental agencies. We have also garnered interest from several companies. By working with industry leaders, we aim to address real-world challenges, develop practical solutions, and promote the widespread adoption of geoAI.”

GeoAI analysis can also support the sports world, improving both individual skills and team coordination. Above B, the representation of the subsequent three moments in a tennis rally leads to a prediction (with 62% certainty) of the next interaction on the tennis court.

The collaboration scope for the private sector extends from drone and satellite builders, cloud processing specialists and GIS developers, to big AI companies. Professor Huang adds: “We’re exploring collaborations with other sectors, such as environmental organizations, urban development agencies and the agricultural sector. These partnerships will help tailor geoAI research and applications to meet the diverse needs of stakeholders, ensuring broader societal benefits.” Recognizing the skills gap in geoAI, the GeoAI Research Center will offer specialized training programmes and workshops for people with different skill levels and professional backgrounds, including geospatial professionals, data scientists, policymakers and students.

Making tasks easier

By 2027, the group leaders expect the integration of artificial intelligence into the geospatial sector to have significantly enhanced efficiency and effectiveness across various functionalities and tasks. This will allow humans to focus on defining problems, interpreting results and making actionable decisions. The projection for automated land surveying is that AI-driven drones and autonomous vehicles equipped with advanced sensors will automate data collection, while AI algorithms will process and analyse this data in real time. ‘Natural Language GIS (NLG) assistants’ are another promising development. These assistants will allow users to query GIS data and perform complex geospatial analysis using natural language. This advancement is expected to spark competition among both existing and new kinds of GIS players – with the new ones focusing exclusively on natural language geospatial interfaces – making GIS more accessible and user-friendly.

In the field of environmental monitoring and automated feature extraction, the 2027 projection is that AI algorithms will do all the work and will not only speed up the mapping process, but will also improve the accuracy and detail of geospatial databases. Consequently, quicker responses to environmental threats will be possible. The same goes for tasks such as predictive urban planning, where AI models use vast amounts of data to forecast urban growth, infrastructure needs and environmental impacts. Planners will be able to simulate various scenarios, making urban planning more proactive rather than reactive.

The integration of AI with location-based technology will include the use of computer vision for object and location identification, leading to commercial competitive advantages. In fact, real-time geospatial intelligence for business strategy will become the new normal, as the proliferation of Internet of Things (IoT) devices and the use of generative AI enables organizations to analyse vast quantities of geospatial data generated from smart devices in real time. The adoption of cloud technology will facilitate the delivery of GIS services on a pay-as-you-go basis. This will lower costs for companies who want to differentiate their products with location services. Transformation due to hybrid geoAI will occur in the sports world too. AI-supported analysis of video data will provide easy insights into team dynamics, improving both individual skills and team coordination. And of course, augmented reality (AR) technology will grow in combining real-world geographic data with computer-generated overlays, enhancing spatial understanding and decision-making and allowing users to interact with their surroundings in innovative ways.

Potential risks

While offering significant advancements, open geoAI also presents potential risks and avenues for misuse. That underscores the need for careful consideration of ethical guidelines, robust security measures and regulatory frameworks. A specific threat is the speed and accuracy with which geospatial data can be analysed to identify vulnerabilities in critical infrastructure such as power grids or transportation networks, facilitating sabotage or terrorist attacks. Cyber attackers could also use geospatial analysis to coordinate distributed denial-of-service (DDoS) attacks effectively. Additionally, there are concerns about misuse for aggressive military actions or unauthorized surveillance. Professor Huang warns: “Even during periods without active conflicts, geoAI’s ability to generate realistic geospatial data like images or maps raises concerns about potential misuse for spreading disinformation. This could involve creating fake satellite images to support false claims about properties, land use, disasters, migrants’ movements or other significant events, potentially leading to public panic or influencing political outcomes. Incorrect or biased analysis of geospatial data could support inappropriate land development, mismanagement of natural resources and discriminatory practices.”

About the author

Frédérique Coumans is senior editor for GIM International. For more than 25 years, she has been covering all aspects of spatial data infrastructures as editor-in-chief of various magazines on GIS, data mining and the use of GIS in business. She lives near Brussels, Belgium.

Privacy violations

Besides all the above, another significant concern is the risk of privacy violations and excessive surveillance with the widespread adoption of geoAI in commercial and public applications. Open geoAI can be used to track individuals’ movements and activities without their consent, leading to serious privacy breaches. This could involve analysing location data from smartphones or social media to infer sensitive information about individuals, such as their routines and relationships. Selling detailed spatial profiles to third parties, to be combined with other person-related data, increases the dangers.

And of course, excessive government use of geoAI for surveillance and detection fuels concerns about violating civil liberties.

Prof Van de Weghe comments: “There should be clear distinctions between acceptable uses of geoAI for enhancing services, and invasive practices that infringe on privacy and civil liberties. At our research centre, we prioritize addressing these ethical concerns. We are committed to modelling the trade-offs between AI performance and geoprivacy, and developing best practices and case studies to ensure privacy law-compliant geoAI development. Regulations should mandate transparency in data collection, usage and sharing, with strict consent mechanisms. Developing AI techniques, such as anonymization and differential privacy, can help mitigate these risks. Public awareness and engagement are also essential to build trust and ensure the ethical use of geoAI. International cooperation might be necessary to manage cross-border challenges posed by geoAI, ensuring consistent privacy protections across jurisdictions.”

Challenges

One of the other key challenges for the research centre, besides ensuring funds and so on, is to make the vast amount of existing geospatial data ready for AI training and applications. Many of these high-quality datasets are proprietary or subject to restrictive copyrights, which raises significant legal challenges – and also ethical ones, since determining the ownership and copyright of AI-generated works is complex. Potential solutions might be found in Linked Open Data, according to Professor Huang: “Contributing to the development of Linked Open Data in the geospatial domain is essential, i.e. publishing and interlinking structured data on the web using open standards and formats to foster a more collaborative ecosystem. Of course, robust data anonymization and encryption methods will be essential to protect user privacy.”

The GeoAI Research Center’s management realizes there are still many unknowns. One identified gap is the potential impact of climate change on geospatial data reliability. Professor Van de Voorde: “As climate patterns shift, the historical data used to train geoAI models may become less accurate, necessitating new approaches to model training and validation that can adapt to rapidly changing environmental conditions.” Another challenge is the integration of real-time data streams with existing geospatial datasets. As an example, he mentions dynamic urban traffic management. While the collection of real-time traffic data, pedestrian movements and public transport usage from IoT devices is already happening, the effective integration of this data with historical traffic patterns to optimize traffic flow in real time is a field for further research.

Nico Van de Weghe highlights an additional challenge: the importance of public engagement and policy influence in geoAI development. “Public engagement can make geoAI more accessible, foster informed discourse on its benefits and risks, and guide ethical research practices. We think that engaging with policymakers is essential to develop regulations that support innovation while protecting public interests and securing funding for research. The GeoAI Research Center will actively explore this path.”

Further reading

https://geoai.ugent.be

The leadership team of the GeoAI Research Center (from left to right): Lars De Sloover (researcher in geospatial AI), Director Nico Van de Weghe (Professor in GIScience & GeoAI), Co-director Tim Van de Voorde (Professor in Remote Sensing), Co-director Haosheng Huang (Professor in GIScience & Cartography) and Jana Verdoodt (researcher in geospatial AI).

UrbanARK: a community-oriented resilience project

Flood risk assessment via innovative geospatial data usage

Flood risk evaluation models require knowledge of the local conditions. Lidar and imagery data can be considered as complementary in providing topographic and radiometric measurements necessary for creating digital elevation models (DEMs) essential for identifying flood-prone areas and supporting hydraulic and hydrologic modelling of flood scenarios. Going beyond that classic use case, this article introduces several innovative uses of Lidar and imagery data to enhance the resilience and emergency preparedness of urban centres and their communities against flooding threats. Additionally, the article explores the integration of crowdsourced geospatial data, a rapidly growing resource that augments traditional geospatial sources collected by professionals.

Both Lidar and photogrammetry technologies can be used to produce digital terrain models (DTMs), the essential topographic input for floodplain delineation, overland flow simulation and prediction of flood extents, depths and velocities. Compared to imagery data, Lidar data is often superior in terms of spatial accuracy and level of detail. Detailed surface features such as street kerbs and drainage systems captured in a Lidar dataset can be useful for fine-scale urban flood modelling, since such features influence how water flows and accumulates. While not providing as much spatial accuracy and resolution as Lidar technologies do, photogrammetry technologies can capture more complete radiometric properties of the target environment. This is a significant advantage compared to Lidar, which is most often monochromatic (i.e. operating on a single wavelength). In the context of urban flood risk assessment, spectral information is valuable for identifying different types of land cover (e.g. impervious and permeable urban surfaces, urban vegetation and water bodies). Such information is useful in assessing surface runoff and infiltration rates.

In the context of flood risk assessment, imagery data (e.g. aerial and street-level imagery) is also useful to examine the extent of flooding and the resulting damage. Lidar and imagery data are complementary to each other in providing valuable information for flood risk evaluation. Traditionally, Lidar and photogrammetry data acquisition is conducted systematically by trained personnel using survey-grade equipment to ensure data quality. Since professional surveys are expensive and time-consuming, survey-grade data may not always be available and can be outdated.

Figure 1: Terraced houses with open basement wells in Dublin. (Image courtesy: Mapillary, licensed under Creative Commons Share Alike [CC BY-SA])
Figure 2: Detection of subsurface structures from airborne Lidar point clouds; the points detected by the algorithm as subsurface structures are coloured in green.

Some of the challenges associated with limited survey-grade data can be overcome by exploiting alternative data sources, such as crowdsourced geographic data contributed voluntarily by non-professionals. While crowdsourced data is less reliable due to the limited control of data quality, it may be more widely available and less expensive than professionally acquired data. In many scenarios, crowdsourced data can compensate for the limitations of authoritative data.

Project UrbanARK

In a recent US-Ireland research and development project named UrbanARK, researchers from University College Dublin (Ireland), New York University (USA) and Queen’s University Belfast (Northern Ireland) developed innovative, computationally efficient geospatial analysis methods for mining information about urban subsurface spaces from airborne Lidar data and crowdsourced street view imagery data. The obtained information is intended to enhance knowledge of flood risks. Information about subsurface spaces – such as building basements, underground car parks and pedestrian underpasses – is crucial for accurately modelling flood scenarios at a fine scale. In addition to affecting flood prediction results, subsurface spaces pose a higher risk during floods, making timely evacuation more challenging. Despite their importance, information about subsurface spaces is scarce and/or hard to access. The overall goal of the UrbanARK project is to enhance flood risk management for urban coastal communities using geospatial science. Most of the analysis methods developed in the project were designed to run on parallel computing infrastructure to ensure they are ready for real-world, large-scale applications.

Fusing Lidar and DTMs to detect open basements

Portions of Dublin, Belfast and Brooklyn (New York) form UrbanARK’s three study areas. Each location has a large number of terraced houses with open basement wells (Figure 1). These basements pose a higher flood risk to occupants and serve as unrecorded ‘sinks’ that may influence flooding but may not be captured in a standard flood model. However, the entryways to these basement wells are commonly larger than one square metre, making it possible for airborne Lidar signals to reach the bottom of the basement floor.

About the authors

Dr Anh Vu Vo is currently a research fellow at the School of Computer Science at University College Dublin, Ireland, and formerly a research scientist at New York University’s Center for Urban Science & Progress in the USA. His expertise is in high-scalability and high-performance algorithms for managing Lidar and other urban spatial datasets.

Prof Michela Bertolotto is a professor at the School of Computer Science at University College Dublin, Ireland. Her research focuses on spatiotemporal data modelling, Earth observation data analytics, mobile and web-based GIS, map personalization and open spatial data.

Dr Nhien-An Le-Khac is an associate professor at the School of Computer Science at University College Dublin. His research focuses on cybersecurity, digital forensics, AI security, high-performance computing and secure healthcare IT systems.

Prof Debra Laefer is a professor of urban informatics, and director of citizen science at New York University’s Center for Urban Science & Progress and the Department of Civil & Urban Engineering. Her background spans geotechnical and structural engineering, art history and historic preservation. Her work focuses on technical solutions for protecting architecturally significant buildings from subsurface construction.

Dr Ulrich Ofterdinger is a reader at the School of Natural and Built Environment at Queen’s University Belfast, Northern Ireland. His research focuses on characterizing complex geohydrological and environmental systems to understand the governing processes and their impacts on the environment and human health.

At a sampling density of over 100 points/m² (a density level that is increasingly common in airborne Lidar mapping), each basement well may be captured by hundreds to thousands of sampling points. Those point samples returned from the basement wells can be reliably extracted from airborne Lidar point clouds by referencing the ground elevation present in the existing DTMs available for most cities.

While the method is straightforward, its implementation is non-trivial because of the high computational demand. Comparing elevation values in a point cloud with their corresponding values in a terrain model is essentially a complex spatial join involving two potentially very large datasets. To address this computational load, UrbanARK researchers developed a powerful algorithm which partitions and decouples the input datasets so that different data portions can be analysed in parallel by different computing nodes (i.e. autonomous computers). This strategy effectively makes use of the aggregate computing power of multiple computers to speed up the analysis. The computing strategy, often known as ‘data parallelization’, is widely used for big data analytics.
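To make the idea concrete, the following minimal Python sketch flags Lidar points that lie well below the ground surface of a DTM and spreads the work over several CPU cores. It illustrates the general elevation-comparison and data-parallelization approach described above, not the UrbanARK implementation; the depth threshold and the dummy tiles are invented for the example, while the 0.5m cell size matches the Dublin DTM mentioned below.

```python
# Minimal sketch: flag Lidar points well below the DTM ground surface,
# processing the point cloud in independent tiles across CPU cores.
from multiprocessing import Pool
import numpy as np

DEPTH_THRESHOLD = 0.5   # metres below ground; illustrative value
CELL_SIZE = 0.5         # DTM resolution in metres

def dtm_elevation(dtm, origin_xy, xy):
    """Look up the ground elevation of the DTM cell containing each point."""
    cols = ((xy[:, 0] - origin_xy[0]) / CELL_SIZE).astype(int)
    rows = ((xy[:, 1] - origin_xy[1]) / CELL_SIZE).astype(int)
    rows = np.clip(rows, 0, dtm.shape[0] - 1)
    cols = np.clip(cols, 0, dtm.shape[1] - 1)
    return dtm[rows, cols]

def classify_tile(args):
    """Return a boolean mask of points that are subsurface candidates."""
    points, dtm, origin_xy = args                  # points: (N, 3) array of x, y, z
    ground_z = dtm_elevation(dtm, origin_xy, points[:, :2])
    return points[:, 2] < ground_z - DEPTH_THRESHOLD

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dtm = np.zeros((200, 200))                     # flat dummy terrain at 0 m
    origin = (0.0, 0.0)
    # Dummy tiles standing in for spatially partitioned chunks of the point cloud.
    tiles = [np.column_stack([rng.uniform(0, 100, 10000),
                              rng.uniform(0, 100, 10000),
                              rng.normal(0.0, 1.0, 10000)]) for _ in range(8)]
    with Pool() as pool:
        masks = pool.map(classify_tile, [(t, dtm, origin) for t in tiles])
    print(sum(int(m.sum()) for m in masks), "candidate subsurface points")
```

Because each tile is processed independently, adding more computing nodes simply means handing out more tiles, which is what makes this kind of workflow scale to city-wide datasets.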

Using 16 computing nodes, each with 8 CPU cores, UrbanARK researchers completed the fusion of 1.4 billion Lidar data points and a 0.5m-resolution DTM covering an area of 2.5km² of Dublin city centre within five minutes. That speed, and the option to connect additional computers to share the workload, makes the approach suitable for analysis at the city scale and beyond. Figure 2 shows the results obtained from the algorithm. In addition to open basement wells, the algorithm detects lowered backyards and outdoor staircases to underground floors, among other types of structures lying below ground level. While airborne Lidar data was used in the study, the algorithm can take any kind of Lidar dataset as input, such as data from terrestrial Lidar or GeoSLAM scanners.

Extracting indicators from imagery using deep learning

Another opportunity to detect the presence of subsurface structures, as recognized by the UrbanARK team, is from photos captured at street level, i.e. street view imagery (SVI). Due to the favourable range and viewpoint, SVI often contains features such as basement skylights, ventilation windows and basement wells, which are valuable indicators of building basements (see Figure 3). Fortunately, there are many large-scale SVI databases that offer extensive visual documentation of the built environment in many cities and towns worldwide. Prominent SVI data providers include Google Street View, Mapillary, Apple Look Around and KartaView. Images in those databases often come with rich metadata that can be used to geolocate the images and the features of interest in them. SVI data collected systematically by trained personnel using specialized cameras, such as Google Street View imagery, has high quality but comes at a cost and with restricted terms of use. Crowdsourced data sources (e.g. Mapillary imagery), on the other hand, are more heterogeneous in terms of data quality but are also more accessible and may have fewer usage restrictions.

Using SVI data from the Mapillary crowdsourced database, UrbanARK researchers trained a deep learning model to automatically identify basement indicators from images and videos taken from street level. The selected model was You Only Look Once (YOLO), a popular artificial neural network architecture capable of performing object detection and image segmentation. The model was originally trained using over 330,000 images in the Common Objects in Context (COCO) database and was fine-tuned on just 153 SVIs annotated by UrbanARK researchers. Through the original training, the model learned to recognize generic features from images such as corners, edges, textures and colour patterns. The fine-tuning process allowed the model to adapt to the specific task of detecting basement indicators. Figure 4 depicts the basement railings detected by the deep learning model trained by the UrbanARK team. While such a model is not completely accurate and is unlikely to capture every basement, the largely correct information it provides can be useful, particularly when combined with other datasets such as aerial images, Lidar data, ground penetrating radar data and building registries. Furthermore, SVI databases such as Mapillary are freely available, and valuable information can be mined from them to enhance knowledge of the urban environment.
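As an illustration of this workflow, the sketch below shows how such fine-tuning is commonly set up with an off-the-shelf YOLO implementation (the open-source ultralytics package). The model variant, dataset file, class names and image file are hypothetical placeholders; the UrbanARK team's exact configuration is not reproduced here.

```python
# Minimal sketch of fine-tuning a COCO-pretrained YOLO model on a small set of
# annotated street-view images (illustrative configuration, not UrbanARK's).
from ultralytics import YOLO

# Start from weights pre-trained on COCO, so generic features are already learned.
model = YOLO("yolov8n.pt")

# Fine-tune on a small annotated SVI dataset; 'basement_indicators.yaml' is a
# hypothetical dataset file listing images and labels for classes such as
# 'basement_railing'.
model.train(data="basement_indicators.yaml", epochs=50, imgsz=640)

# Run detection on a new street-view image; each box comes with a confidence score.
results = model.predict("street_view_sample.jpg", conf=0.25)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
```

The confidence score attached to each detection is the value shown on the polygons in Figure 4, and it lets downstream users filter out weak detections before combining the results with other datasets.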

Communicating flood risk information

In addition to using geospatial data for information extraction, the UrbanARK team employed Lidar and imagery data to develop a web-based tool for flood risk communication. By integrating multiple sources of Lidar and imagery data, they created a 3D textured point cloud representing the built environment for use as the basis of an immersive virtual reality (VR) environment. Terrestrial Lidar data, including GeoSLAM data, is preferred over airborne alternatives for this particular application, where high data density is important to construct a photorealistic model. Imagery data can be sourced from devices such as DSLR cameras, mobile phone cameras or inexpensive cameras mounted on uncrewed aerial vehicles (UAVs or ‘drones’). The point clouds are transformed into a VR environment using the open-source Potree and Three.js software libraries. Within this VR environment, the flood level, represented by a polygonal surface, can be adjusted to reflect different flood scenarios.

The UrbanARK tool involves the user wearing an inexpensive or easily constructed VR viewer compatible with a smartphone (see Figure 5). The smartphone renders the 3D environment, which is viewed through a set of lenses to create the 3D effect. To function, the smartphone must be equipped with standard sensors: an accelerometer, a gyroscope and a magnetometer.

Figure 3: Visual features indicating the potential presence of building basements. (Image courtesy: Mapillary, licensed under Creative Commons Share Alike [CC BY-SA])
Figure 4: Automatic detection of basement railings; the blue polygons indicate areas detected as basement railing. Each polygon is labelled with a number that signifies the model’s confidence level in the detection. (Image courtesy: Mapillary, licensed under Creative Commons Share Alike [CC BY-SA])

Acknowledgements

Funding for the UrbanARK project was provided by the National Science Foundation as part of the project ‘UrbanARK: Assessment, Risk Management, Knowledge for Coastal Flood Risk Management in Urban Areas’ NSF Award 1826134, jointly funded with Science Foundation Ireland (SFI - 17/US/3450) and a research grant (USI 137) from the Department for the Economy Northern Ireland under the US-Ireland R&D Partnership Programme. For the purpose of open access, the author has applied a Creative Commons (CC BY) public copyright licence to any author accepted manuscript version arising from this submission.

These sensors track the user’s location and head orientation, simulating head movements within the 3D virtual environment. The system also allows participants to explore the environment remotely on a laptop or personal computer and to feed back information on the results. Such a VR tool can enhance flood risk awareness and increase the perception of vulnerability to flood risk among users.

Conclusion

While Lidar and imagery data have provided – and will continue to provide – flood prediction models with essential terrain information, their applications should not stop at that classic use case. As demonstrated in the UrbanARK project, both Lidar and imagery data can be analysed to extract information that is valuable for enhancing flood prediction models, such as information about subsurface spaces in the urban environment. In addition, such data can provide a means of communicating flood risk information effectively.

Further reading

www.urbanark-project.org

Open basement wells: https://artsandculture.google.com/story/RQUx8l4S5n_gKA

Figure 5: A VR tool can enhance flood risk awareness and increase people’s perception of vulnerability to flood risk.

From maps to Gbps: leveraging reality capture technology for fibre-to-the-home

How geospatial data enables the rollout of fast internet

The digital economy is growing at an unprecedented rate, and global data consumption will increase exponentially over the next decade. Fibre-optic networks form the backbone of the digital infrastructure necessary to support this pace of growth, but the pressure is on to roll them out fast enough. This article explores the scale of the challenge, and shows how the geospatial industry is a crucial part of the solution. Thanks to mobile mapping, and particularly Lidar technology, everyone will be able to tap into the enormous opportunities offered by high-speed fibre-optic internet.

A modern fibre-to-the-home (FTTH) or fibre-to-the-premises (FTTP) infrastructure is essential to support the transformation to a full-scale digitized economy. This broadband technology utilizes optical fibre to provide reliable, high-speed internet directly to individual buildings, including homes, individual apartments and businesses. FTTH received a huge boost during the COVID-19 pandemic, when millions of employees ended up working from home. This created a massive need for bandwidth and reliable fast internet, coming straight to their door. And although laser-based and quantum internet solutions are currently being developed, until they actually materialize FTTH is the fastest solution around.

Closing the connectivity gaps

According to the broadband statistics published by the Organisation for Economic Cooperation and Development (OECD), internet connectivity and speeds vary widely from country to country, and there is still a lot of work to do to close the gaps. In the Netherlands, telecom provider KPN offers consumers and small businesses upload and download speeds of up to 4 gigabits per second (Gbit/s or Gbps), facilitated by a special modem. This speed is exceptionally fast by today’s standards, but it may not be necessary for everyone. For households or workplaces with a maximum of two people, a lower speed of 100 megabits per second (Mbit/s or Mbps) usually suffices. The European Commission has set ambitious targets for the European Union by 2030; as part of the Digital Decade agenda, it wishes to see a Gbps connection to every household and business (‘Gigabit for everyone’), and all populated areas should be covered by next-generation high-speed wireless networks (at least equivalent to 5G performance).

Similarly, in the USA, the government is spearheading a programme designed to connect every citizen to affordable and reliable high-speed internet. Through the Broadband Equity Access and Deployment (BEAD) programme, US$65 billion has been allocated to support this initiative. This funding bolsters and enhances existing programmes aimed at expanding internet access and usage across the country. Meanwhile, China’s 13th Five-Year Plan (2016-2020) and the current ‘Broadband China’ strategy require operators to offer higher speeds of 100Mbps or more to home broadband subscribers in large and medium-sized cities. In addition, in rural areas, operators are expected to expand FTTH deployment by connecting 98% of villages to fibre-optic networks. This is a challenge of astounding scale. According to the Chinese Ministry of Industry and Information Technology, nearly five million kilometres of optical fibre cable were installed in the country last year alone, bringing the national total to 64.32 million kilometres.

Will 5G make fibre optics obsolete?

It is a common misconception that 5G (and in the future 6G) technology can completely replace wired connections. While the future of communications is indeed moving towards wireless solutions, the success of these technologies depends on a strong foundation of fixed infrastructure. 5G is set to be a transformative force, but without the support of fibre optics, its performance and reach will be severely limited. In particular, fibre optics will be needed for high-capacity backhaul to manage the immense amounts of data that 5G will generate. Additionally, there are significant economic and political reasons for supporting 5G with fibre-optic infrastructure.

Geoinformation-assisted building of FTTH networks

So how are FTTH networks built? The careful planning process includes selecting the appropriate architecture, designing efficient fibre distribution paths, choosing splitter locations and managing installation logistics. Subscriber density, geographic challenges, and future scalability are important factors. In the execution phase, using pre-terminated cabling and effective network management enhances reliability and cost efficiency.

It is in this context that survey companies are well positioned to tap into this potential market. After all, geospatial data runs like a common thread through an FTTH ecosystem. Although some background mapping data essential for overlaying network plans can be sourced from various platforms such as Google Maps, Microsoft Bing, OpenStreetMap and national mapping agencies, in its FTTH Handbook (currently version 9) the FTTH Council Europe actually makes a strong case against using these sources as they are often not up to date. Instead, it strongly recommends an up-to-date mobile mapping survey as the most productive method to collect the necessary data. In addition, at various stages of a fibre-optic network project, it is necessary to record and analyse other data from the environment, including ground cover information, cables, electricity poles, trees, traffic signs, road infrastructure and facades. In particular, knowing the surface type before laying FTTH cables in the ground can produce a significant cost advantage, since there is a big difference between opening up concrete or asphalt and opening up sand or grass, for example.

Benefits of mobile mapping for FTTH

Given the need for time-saving solutions, it is very logical to use mobile mapping systems for this purpose, since the emergence of 360° mobile mapping cameras and advanced Lidar solutions has transformed the traditional manual surveying methods for infrastructure networks. However, fibre-optic network installers do not always have a strong background in geospatial technology, which can limit their awareness of how accurate data collection can significantly improve project outcomes. It may therefore be necessary for survey companies to highlight and explain the benefits of mobile mapping in the context of FTTH.

Mobile mapping has the power to transform data collection and application in FTTH by delivering unmatched efficiency and flexibility. Firstly, advanced mobile mapping systems enable high-speed data collection with a stunning level of accuracy. Capable of capturing 3D scans at rates of three to four million points per second or more, these systems are designed for versatility and ease of use. They can be effortlessly mounted on vehicles or vessels, which makes them ideal for mapping extensive areas. For example, a vehicle equipped with modern mobile mapping technology, driven at standard traffic speeds by a skilled two-person team, can survey around 60-70km per day in an urban setting and over 100km in rural areas. In comparison, traditional methods such as a GNSS receiver would cover just a few kilometres per day. As a result, mobile mapping boosts the speed and efficiency of data acquisition, which is also beneficial given the current labour shortages, plus it eliminates the safety risks associated with surveyors working on open roads.

About the author

Wim van Wegen is head of content at GIM International and Hydro International. In his role, he is responsible for the print and online publications of one of the world’s leading geomatics and hydrography trade media brands. He is also a contributor of columns and feature articles, and often interviews renowned experts in the geospatial industry.

An example of the components of an advanced mobile mapping system designed for geospatial data acquisition tailored to fibre-to-the-premises (FTTP) projects. (Image courtesy: Trimble)

The fibre-optic cable is laid into a house, completing the last few metres to the customer in the metropolitan region of Stuttgart, Germany. (Image courtesy: Deutsche Telekom)


Secondly, mobile mapping systems harnessing innovative Lidar technology enable planners and designers to gain a comprehensive view of real-life field conditions from their offices, significantly reducing the need for on-site visits and therefore saving on associated costs such as travel and accommodation expenses for workers. This is because these advanced systems deliver extraordinary point density and 360° imagery, and are equipped with integrated GPS antennas and IMU systems, ensuring that every detail of the environment is precisely recorded. The resulting highly accurate georeferenced point clouds and imagery offer a comprehensive dataset for various uses, such as map creation, 360° visualizations and asset documentation. The data is provided at centimetre-level accuracy and can be integrated into GIS systems. This enables changes in the environment to be automatically identified during the entire project and the right decisions to be made every time (from the design and planning phase to implementation and delivery, including for maintenance purposes afterwards). All of this enhances the speed, efficiency and quality of the total project. Needless to say, there is no one-size-fits-all solution for capturing the environment as part of an FTTH project. In some situations, there are valid reasons for not choosing Lidar mapping. For instance, while working with Lidar technology can deliver high-end accuracy, it can also be more time-consuming, not to mention significantly more expensive than other mobile mapping methods. Another element at play is the type of information needed, which can differ greatly depending on the context. In the case of hanging FTTH cables that are suspended from posts or buildings, the primary concern is whether there is enough room to add more equipment. Sometimes it can be faster and more cost effective to use just imagery, coupled with GNSS/INS.

Mobile mapping, digital twins and AI

Interestingly for the geospatial sector, the strategic use of advanced mobile mapping tools to optimize FTTP broadband rollout strategies was highlighted at the FTTH Conference 2024 in Berlin, Germany. Industry experts explained how Lidar technology, integrated with mobile mapping systems on vehicles, plays a crucial role when creating accurate digital twins of potential deployment areas. This approach is broader than one would imagine at first glance. Firstly, it helps when assessing the feasibility of new rollouts by providing precise data on the number of premises within target zones – often surpassing the granularity of public records. Additionally, it informs critical decisions about where to invest.

Moreover, the discussion emphasized how artificial intelligence (AI) further refines this process. By analysing data collected through mobile mapping, AI can identify surface materials like asphalt, which significantly impact the cost projections for network expansion. These practical insights, shared by leading professionals during the conference, underscored the transformative potential of combining mobile mapping with digital twin technologies to drive greater efficiency and cost effectiveness in fibre-optic deployment and network operations. Clearly, the geospatial industry is well positioned to leverage cutting-edge technology and contribute to shaping the future of FTTH.

Percentage of the total fixed broadband connections that are fibre-optic connections, as of December 2023. (Image courtesy: OECD)

Point cloud of a 15km FTTH project trajectory in Quebec, captured using a mobile mapping system. (Image courtesy: Trimble)
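As a simple illustration of how such surface classifications feed into cost projections, the sketch below rolls classified segment lengths up into a rough trench-reinstatement estimate. This is a back-of-the-envelope example, not an industry tool, and the unit costs are invented placeholders.

```python
# Illustrative cost roll-up: per-metre reinstatement costs per surface type,
# applied to segment lengths extracted from classified mobile mapping data.
SURFACE_COST_PER_M = {        # EUR per metre of trench, hypothetical values
    "asphalt": 85.0,
    "concrete": 110.0,
    "paving": 60.0,
    "grass": 25.0,
    "sand": 20.0,
}

def trench_cost(segments):
    """segments: list of (surface_type, length_in_metres) along the planned route."""
    return sum(SURFACE_COST_PER_M[surface] * length for surface, length in segments)

# Example route: lengths per surface type as extracted from classified imagery.
route = [("asphalt", 420.0), ("grass", 1350.0), ("paving", 180.0)]
print(f"Estimated reinstatement cost: EUR {trench_cost(route):,.0f}")
```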

Examples from around the world

Canada: setting new efficiency standards

In Canada, where many FTTH cables are hung on poles instead of dug into the ground, Cansel has played a pivotal role in advancing FTTH projects by integrating cutting-edge mobile mapping solutions and streamlined workflows. Since 2019, with the acquisition of its first Trimble MX9 unit, Cansel has transformed the FTTH survey and data collection process. This started with pilot projects that demonstrated the company’s capability to collect extensive data rapidly. One 15km survey was completed in just 90 minutes, whereas it could conventionally take up to 75 hours. Cansel’s methodology involves three critical steps: data acquisition using the Trimble MX9, data processing with tools like POSPac and TBC for trajectory processing and point cloud creation, and data extraction using the Trimble MX Asset Manager. This sophisticated system allows for semi-automatic to automatic extraction of key pole features necessary for FTTH deployment. Notably, Cansel’s process significantly reduces manual labour, with up to 60% time savings in the overall survey process and over 35% savings in data extraction. This workflow automates the conversion of collected data into formats required for permit requests and CAD drawings, ensuring precision and efficiency. The approach has proven to be particularly beneficial in rural areas where high-speed internet demand is critical, addressing challenges like limited human resources and seasonal work constraints. Overall, Cansel’s innovative integration of technology and automation serves as a good example of setting new standards in the FTTH industry, enhancing productivity and accuracy in land surveying and telecom infrastructure projects across Canada, including in remote areas.

USA: efficient resolution of insurance claims

An engineering firm based in Colorado, USA, used the Trimble MX7 mobile imaging system in an FTTH project to capture detailed, georeferenced, time-stamped images of 30,000 sites prior to construction. This methodology provided crucial pre-construction evidence for insurance claims, minimizing disputes and ensuring accurate claim assessments. Besides enhancing project management, it proved cost effective by facilitating efficient resolution of construction-related damage claims. This innovative approach can serve as a model for future FTTH and civil engineering projects.

Germany: leveraging AI and computer vision

Deutsche Glasfaser is one of the leading market players in FTTH network expansion in Germany. The group focuses on nationwide fibre-optic supply to rural and suburban areas. The company has recently integrated the cutting-edge CityMapper platform from Dutch innovator Horus, significantly enhancing the stability and productivity of its mobile mapping fleet. The Horus modular CityMapper platform offers unparalleled flexibility and future-proof system integration, allowing customers like Deutsche Glasfaser to build upon their initial investment in mobile mapping system technology. The modular CityMapper system boasts an impressive array of options, including Teledyne FLIR’s advanced Ladybug5+ and Ladybug6 cameras, A700 thermal imagery, and Blackfly high-resolution cameras. Combined with Hesai XT32 Lidar and Applanix GNSS technology, it delivers precise collection of street-level imagery with exceptional clarity and detail at an affordable price. The MERCAIDO software suite enhances this by leveraging AI and computer vision to automate feature extraction and surface segmentation, speeding up cost calculations. This suite connects AI models with street-level imagery to detect, geolocate and measure objects, producing actionable GIS data. This ensures up-to-date insights and reliable cable routing information, minimizing uncertainties and the cost of failure, and maximizing bidding speed and project success for companies like Deutsche Glasfaser.

In an FTTH project in Colorado, the Trimble MX7 mobile imaging system was chosen for the imaging tasks. (Image courtesy: Trimble)

The Netherlands: AI-supported segmentation and extraction

In the Netherlands, the home country of Horus, BAM Infra considerably enhanced its FTTH projects by using Horus technology. Faced with time constraints during tender processes and the need to minimize financial risks, BAM integrated the advanced Horus CityMapper system. This cutting-edge mobile mapping solution, equipped with FLIR Ladybug5+ 360-degree cameras, Velodyne Lidar, Applanix GNSS/INS, and four high-resolution cameras, enables BAM to capture detailed street-level imagery with exceptional precision. The true power of this implementation lies in its AI-driven analysis, supporting automated surface segmentation and feature extraction. This technological synergy produces valuable, actionable insights that significantly streamline both the design and construction stages. The impact is substantial: project assessment time has been slashed from 18 days to just three days, providing a clear situational overview without surprises. By generating rapid, reliable calculations and GIS data, this innovative approach not only reduces financial risks, but also significantly enhances decision-making processes. As a result, BAM Infra has reinforced its position as a leader in sustainable telecommunications infrastructure development, showcasing the transformative potential of AI and geospatial technology in modernizing FTTH network deployment.

Conclusion

Installation activities for fibre-optic networks are accelerating across the globe, often supported by government programmes to develop fast internet infrastructure as a foundation for the digital future. As a result, 360° imagery has become essential for the precise planning and implementation of FTTH projects. However, many countries around the world lack the necessary highly detailed data on property locations, densities, boundaries and surrounding infrastructure. The absence of this information makes planning FTTH networks extremely difficult for internet service providers. Advanced mobile mapping technologies can contribute to ensuring that every cable is installed accurately and efficiently, supporting the rapid and reliable rollout of fibre-optic networks worldwide.

As highlighted in this article, the global push to offer citizens and businesses widespread access to fast internet presents a significant opportunity for the geospatial community. This transition relies on a high-quality, largely automated workflow that captures spatial data using mobile sensors to record high-resolution point clouds and 360° panoramic images. Laser scans provide depth information and support AI-based object classification. The technology discussed enables detailed recording and analysis of the physical environment, including roads, terrain, buildings, vegetation and other geographic features. The expanding role of geospatial technology and data in this process to drive connectivity offers exciting opportunities for surveying companies, provided that they can successfully convince potential customers of the benefits of mobile mapping.

Screenshot of the MERCAIDO software suite, developed by Horus. This solution combines AI with street-level imagery to simplify the process of locating and measuring objects with precision for FTTH projects, among others. (Image courtesy: Horus)


How advanced modules are revolutionizing accuracy and workflow efficiency

Transforming 3D data handling with 3Dsurvey

The large variety of 3D data collection methods may be hard to grasp at times, with new trends and technologies emerging every other day. Drones are revolutionizing data capture with access to inaccessible places, and automation is taking the industry by storm. Where large steps can still be made is in maximizing the value of data by combining data sources and types, and managing this in a practical manner. Combining sources may not only lead to better quality, but also to easier analysis, greater trust in the data and more value for its users. 3Dsurvey software unlocks the potential of this 3D data by enabling different types of data and information to be combined, from acquisition to delivery.

In this article, we look at two 3Dsurvey modules that aim to improve data collection, interpretation and analysis. These modules help surveyors to gather higher quality data with less effort and time spent in the field, following 3Dsurvey’s mission to offer a powerful spatial data processing tool in a simple-to-use and all-in-one platform.

3Dsurvey Scan module with FARO Orbis

In the heart of Prague, a historic city known for its architectural splendour, a unique project unfolded inside an old house poised for renovation. Tasked with creating detailed floor maps and cross-sections for construction documentation and design groundwork, the team at 3gon Positioning (https://cz.3gon.eu/) utilized 3Dsurvey software to realize a stunning result. This case study delves into the seamless integration of cutting-edge SLAM technology and the innovative 3Dsurvey Scan module, which simplified the data collection process using a handheld 3D scanner. With no need for extensive field preparations and the ability to easily process data from various scanner formats, the team achieved remarkable productivity, demonstrating the transformative potential of modern surveying methods in the realm of construction and design.

Objective

The primary objective of the project was to create detailed floor maps and cross-sections of a suite for construction documentation and as a basis for designers. The project aimed to leverage the software’s ability to generate X-ray views, which simplifies the drawing process by allowing the creation of floor maps directly on top of the point cloud data. The goal was to enhance efficiency in creating deliverables for clients while ensuring robust and precise data throughout the project.

Approach

Leveraging the capabilities of the 3Dsurvey Scan module, operators were able to initiate data collection immediately, bypassing the need for preliminary setup or calibration. The new module is compatible with a large variety of 3D laser scan formats, empowering surveyors to collect and work with data in the field with any ordinary laser scanner.

To ensure accuracy and completeness of the dataset during data collection, control points were brought into the loop. These control points, measured directly by the scanner, provided fixed reference points critical for the mobile scanner’s trajectory. This step was instrumental in refining the trajectory, resulting in a consistent and precise point cloud for the entire scope of the project.

Figure 1: Scan or photogrammetry, point cloud or mesh: the integrated CAD tool can be used to extract data.

The alternative approach would have been to use 3D CAD software to create a BIM model, based on which a specialist would create floor plans. As this is not a one-off process but repeated many times, the pay-off, which starts with the time gained in the first project, will increase rapidly.

Data processing

During the data processing phase, smooth tooling can help a project maintain its momentum. 3Dsurvey’s Scan module not only ensures that laser scanning projects stay on schedule, but also makes data processing efficient, insightful and even enjoyable. The added value of the 3Dsurvey Scan module during the data processing phase is its improved efficiency at many levels.

The module is easy to use, enabling even users without specialist 3D training to utilize the software and analyse the data. This saves time in the field by reducing dependence on specialists, as roles within the team can be divided more freely.

As the 3Dsurvey Scan module incorporates CAD with full 3D scans, there will be far fewer cases of having to return to the field, as all data is measured in the initial run and remeasuring for CAD can be done at the office. The module’s ability to provide X-ray views and draw CAD directly on the point cloud gives surveyors and data specialists all the tools they need to perform the job right and efficiently, first time round.

The Scan module is revolutionary as it offers seamless integration of 3D scanning and CAD drawing during the data processing phase of projects. The inspection site can be brought to the office, providing full control over data without being tied to specific scanner brands or types. This flexibility and universality streamlines workflows, enhances efficiency, and opens up new possibilities in data utilization.

Results

The use of 3Dsurvey software had a tremendous impact on the overall project outcomes. It allowed easy and fast drawing on the point cloud, without the need for trained 3D professionals. In terms of project efficiency, accuracy and cost, there was a significant improvement. For example, in a small suite project, the time required using a 3D scanner and 3Dsurvey was only three hours compared to ten hours using classical survey and CAD methods. When compared to alternative methods, the results using 3Dsurvey were comparable, with the same final deliverable (2D plan). However, the advantage of 3Dsurvey was its ability to quickly modify and add additional details if requested by customers. The functions of 3Dsurvey simplified and enhanced the task, allowing users to open the 3D point cloud, create X-ray views and draw details. This ease of use for surveyors is something that characterizes 3Dsurvey software.

The potential uses of 3Dsurvey photogrammetry software in future projects are vast. It is a modern tool that can be applied to a wide range of applications. Potential uses include mapping, volume calculations, digital terrain models, construction progress comparison, photovoltaic inspections and design, CAD drawing of facades, floor plans and cross-sections of buildings, point cloud presentations and editing, and data presentation on the cloud.

3Dsurvey RTK Videogrammetry module with Emlid GNSS

Utilizing the combination of a mobile phone and a precise RTK GNSS antenna, RTK videogrammetry allows for the recording of video footage with centimetre-accurate positioning. The resulting video, embedded with accurate location data, is then processed through 3Dsurvey software to create detailed 3D point clouds and meshes. This technique is particularly advantageous in areas where drone flights are not feasible, offering a quick and user-friendly alternative. With the ability to be operated by virtually anyone, RTK videogrammetry bridges the gap between traditional surveying and construction professionals, providing a cost-effective and efficient solution for a variety of applications, from volume calculations to construction documentation.

Figure 2: The X-ray view empowers users to generate floor plans with ease.
Figure 3: Use RTK videogrammetry for quick site documentation and enhance it with integrated CAD.

Objective

The primary goal of the project was to start using 3D data with simple and cost-effective tools, also on smaller projects. The aim was to document and capture the 3D reality in sufficient detail and precision using this accessible technology.

Approach

RTK videogrammetry has the benefit of requiring minimal preparation. However, when working in busy urban areas and on construction sites, surveyors should plan their routes, firstly to ensure their own safety and secondly to prevent any obstructions from blocking the frame during data capture.

The equipment used in conjunction with the 3Dsurvey software included an Emlid GNSS receiver for centimetre-level RTK positioning and an Android phone. The equipment was set up and calibrated by measuring the offset between the phone and the GNSS antenna and entering the values in the 3Dsurvey application.
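To illustrate why that measured offset (the ‘lever arm’) matters, the minimal Python sketch below applies an antenna-to-camera offset to a GNSS position. The GNSS fix refers to the antenna, so the camera position is obtained by rotating the offset into the mapping frame and adding it. This is a generic, simplified illustration (heading only, level device, invented numbers), not 3Dsurvey’s internal processing.

```python
# Minimal lever-arm sketch: derive the camera position from the GNSS antenna
# position and the measured phone/antenna offset.
import numpy as np

def camera_position(antenna_enu, lever_arm_body, heading_deg):
    """antenna_enu: antenna position in a local east-north-up frame (metres).
    lever_arm_body: measured offset from antenna to camera in the device frame (metres).
    heading_deg: device heading about the vertical axis (simplified: level device)."""
    h = np.radians(heading_deg)
    # Rotation about the up-axis only; a full solution would use roll, pitch and
    # yaw from the device's IMU.
    rot = np.array([[np.cos(h), -np.sin(h), 0.0],
                    [np.sin(h),  np.cos(h), 0.0],
                    [0.0,        0.0,       1.0]])
    return antenna_enu + rot @ lever_arm_body

antenna = np.array([1200.40, 350.12, 48.95])   # hypothetical local coordinates
offset = np.array([0.03, -0.12, -0.25])        # measured antenna-to-camera offset
print(camera_position(antenna, offset, heading_deg=35.0))
```

Even a 25cm vertical offset, if ignored, would swamp the centimetre-level accuracy the RTK receiver provides, which is why the calibration step is done before data capture.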

During data collection, the RTK videogrammetry functions were utilized by capturing video footage of the site. Ground control points were placed on the site to ensure accuracy and completeness of the data. Smooth and steady movement during data capture was emphasized to achieve the best results: the trick is to think of yourself as a drone!

One of the challenges when using RTK videogrammetry is loss of RTK position. Solutions can be as simple as planning routes with few overhead obstructions, or placing ground points beforehand as references in locations where the signal is known to drop.

Data processing

3Dsurvey’s features make the processing of RTK videogrammetry similar to processing drone data, and intuitive for 3Dsurvey users. Easy and efficient data processing for videogrammetry is crucial when working on construction sites, as data collection can be a risky process and results need to be fast and reliable. The use of the 3Dsurvey RTK Videogrammetry module improved efficiency and accuracy compared to other methods, by offering a new way to capture 3D reality. It eliminates the need for a surveyor with expensive tools for every measurement. This provides more freedom and reduces dependency on surveyors, without impacting the reliability of the data.

As for any data processing software and workflow, the quality of the result relies on the quality of the incoming data. It is therefore crucial to ensure smooth and sharp video material, as well as ground points to cover potential loss of RTK position. 3Dsurvey software with the help of the RTK Videogrammetry module will then take care of the rest.

Results

The quality and accuracy of the data obtained through RTK videogrammetry functions are ideal for mapping and documentation purposes, and during the projects provided a precision of up to 5cm. The level of detail captured allowed for the selection of points directly on the images, even if they were not visible in the point cloud.

The use of 3Dsurvey software significantly impacted the project outcomes by saving time, enabling data capture by construction/utility personnel, reducing delays on construction sites, and saving money for contractors. The results achieved using 3Dsurvey were comparable to alternative methods, with the software’s functions simplifying and enhancing the overall task.

During this project, 3Dsurvey software contributed to the success of the project due to the simplicity of the mobile application, GNSS device and data processing in the module. The ability to pick points from images and the point cloud provided flexibility and ensured that necessary details were captured.

Conclusion

The modern-day surveyor is bombarded with new technologies, tools and software. 3Dsurvey aims to be the guiding light in this market, helping surveyors to get the most out of their time and data. The Scan and RTK Videogrammetry modules are two examples of developments made by surveyors, for surveyors. Our team brings field experience to the software and helps the modern-day surveyor utilize the latest surveying technologies in a practical and easy manner.

Figure 4: With the right approach, even larger buildings can be scanned and imported into 3Dsurvey.

Managing vital ecosystems from above

Crewed aerial surveying: a key tool in modern forest monitoring

With the planet’s forests facing numerous threats, advanced monitoring techniques are essential to preserve these crucial ecosystems. Crewed aerial platforms offer significant benefits in terms of flexibility, efficiency and coverage, as illustrated by the five real-world cases outlined in this article.

Covering 31% of the Earth’s land surface, forests are crucial ecosystems supporting over 80% of terrestrial biodiversity. In addition, they provide livelihoods for 1.6 billion people, offering both tangible resources – such as food and fuel – and intangible benefits like spiritual and cultural significance. However, climate change threatens these ecosystems, increasing the frequency of extreme weather events and making trees more vulnerable to pests and diseases. In Europe, species like spruce, beech and pine face significant habitat loss. Forests are also impacted by land use changes, urbanization and pollution.

Technology has emerged as a powerful ally in forest monitoring and management in response to these challenges. The Food and Agriculture Organization of the United Nations (FAO), which has been monitoring the world’s forests since 1946, emphasizes the critical role of rapid, rigorous and scalable forest monitoring tools in supporting data-driven policies. Remote sensing, in particular, has become an indispensable technology in forestry, offering valuable insights into forest dynamics, health and changes over time.

Remote sensing technologies in forest mapping

Remote sensing technologies have transformed forest mapping across various platforms. In forest monitoring, applying photogrammetric techniques and data captured by different types of equipment is essential for generating detailed maps, measuring tree heights and crown diameters, assessing forest damage, and monitoring changes in forest structure over time. Multispectral and hyperspectral cameras capture the invisible signatures of vegetation health and species composition. At the same time, thermal sensors detect subtle heat variations that can indicate stress or disease, and also offer a significant advantage in wildfire management by providing real-time, accurate information about fire behaviour and conditions. Lidar systems, the pillars of forest structure mapping, create intricate 3D canopy architecture and biomass distribution models. Complementing these, microwave sensors like synthetic aperture radar (SAR) penetrate cloud cover and dense canopies, offering insights even in challenging weather conditions. The resolution of SAR is low compared to other technologies, but it can be helpful in areas with high cloud cover.

While satellites offer broad coverage, and uncrewed aerial vehicles (UAVs or ‘drones’) provide high-resolution data for smaller areas, crewed aerial platforms strike a balance. For example, they offer flexibility in flight altitude, and a substantial payload capacity that allows them to carry large-format sensors or multiple smaller advanced sensors simultaneously. This facilitates tailored, multi-layered data collection crucial for comprehensive forest monitoring. Their capacity for rapid response, extended flight duration, and consistent data collection in various conditions makes them invaluable for global forest management and conservation efforts, as illustrated by the five cases below.

Figure 1: Data acquisition at a fixed resolution of 2cm GSD was achieved at a rate of approximately 40km² per hour of flight. Phase One is an observer of the European Association of Aerial Surveying Industries (EAASI). (Image courtesy: Phase One/Alter Eye)

Case 1: High-resolution aerial photogrammetry for parasite detection in Poland

In Poland, the fir mistletoe (Viscum album) infestation poses a serious challenge to forest health and timber production.

Traditional methods of assessing this issue were slow and labour-intensive, which prompted the Polish company AlterGeo to adopt innovative solutions. It employed high-resolution aerial photogrammetry to efficiently map forest areas at a rate of about 4,000 hectares per hour. This approach is significantly more productive, being 50 to 100 times faster than comparable drone-based methods.

A key factor in this efficiency is the use of the Phase One 280 aerial system, installed on AlterGeo’s ultra-light Alter-Eye aircraft. This setup combines a large-format camera with a stabilizing mount, offering an unprecedented blend of mobility and imaging capability. The system captures images with a ground sample distance (GSD) of 2cm, delivering exceptional detail (Figure 1).

About the author

Ada Perello is communications manager at the European Association of Aerial Surveying Industries (EAASI), which was established in 2019 to unite companies generating geographic data from crewed aerial platforms and has experienced rapid growth ever since. Prior to joining EAASI, Perello worked in external communications for organizations like IMO, FAO and the private sector. She holds a master’s degree in Journalism and International Business Administration.

The aircraft operates at low altitudes and speeds, allowing for 60% longitudinal photo coverage, and uses special imaging techniques to differentiate trees based on health. This method allows AlterGeo to identify tree species, sizes and parasitic infestations with precision. The high-resolution images reveal tree health conditions down to individual branches, enabling accurate identification of infested trees and their stages of decline (Figure 2).

Figure 3: The Finnish Forest Centre collects and shares forest data. Finland’s first inventory round took 10 years, from 2010 to 2020. The second round began in 2020 and will take six years, ending in 2025. (Image courtesy: Metsakeskus)

Figure 2: The high-resolution photos allowed for clear identification of infected and dead trees without additional techniques. Selected processing methods emphasized the differences between various plant species and the living and non-living parts of the trees.

This innovative technique has the potential to become a standard in forestry, providing more precise mapping and measurements compared to traditional field inventories.

Case 2: Lidar for forest inventory and assessment in Finland

Airborne Lidar has revolutionized forest monitoring, offering extensive coverage and detailed insights into both canopy and sub-canopy structures. Its ability to penetrate forest gaps allows for precise mapping of tree trunks, understorey and topography, enabling the derivation of crucial forest parameters. This technology enhances biodiversity assessments, biomass estimations and forest management strategies across diverse landscapes. Finland exemplifies the successful adoption of aerial Lidar for nationwide forest inventory, complemented by extensive field data collection. Since 2010, the Finnish Forest Centre (FFC) has conducted comprehensive Lidar surveys, coordinated by the national land survey organization Maanmittauslaitos (MML). This programme combines laser scanning and aerial photography on a six-year cycle, with more frequent photography in most areas. Additionally, field crews measure approximately 800 to 1,000 sample plots within each inventory area, corresponding to a Lidar block. This extensive ground-level data collection constitutes a significant portion of the overall inventory process (Figure 3).

BSF Swissphoto, a member company of the European Association of Aerial Surveying Industries (EAASI), has been instrumental in these efforts. Between 2014 and 2015, BSF Swissphoto, together with a partner, carried out large-scale laser data acquisition over approximately 45,720km² for MML. The company was again commissioned for Lidar flights in 2018 and scanned almost 41,000km² at 1pt/m² in 12 sub-areas in southern to central Finland by the end of 2019. This required a total of around 250 flight hours. Since then, it has been commissioned to collect laser scan data for MML with a density of 5pt/m² for a total area of 109,886km². This high-density Lidar data enables precise measurement of forest parameters such as tree height, canopy density and biomass. The data is used to create detailed forest stand maps, helping forest managers make informed decisions about harvesting, conservation and sustainable forest management practices.
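As an illustration of how such parameters are derived from classified Lidar points, the minimal Python sketch below grids points into a simple canopy height model (highest return minus the local ground level). It is a generic example with invented dummy data, not the Finnish production workflow, and the cell size and class codes are only assumptions based on common LAS conventions.

```python
# Minimal canopy height model (CHM) sketch from classified airborne Lidar points.
import numpy as np

def canopy_height_model(points, classification, cell=2.0, ground_class=2):
    """points: (N, 3) x, y, z; classification: LAS-style class code per point.
    Returns a coarse CHM grid: highest return height above the local ground level."""
    xy_min = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - xy_min[0]) / cell).astype(int)
    rows = ((points[:, 1] - xy_min[1]) / cell).astype(int)
    shape = (rows.max() + 1, cols.max() + 1)
    ground = np.full(shape, np.nan)    # lowest ground return per cell
    surface = np.full(shape, np.nan)   # highest return of any class per cell
    for r, c, z, cls in zip(rows, cols, points[:, 2], classification):
        if cls == ground_class:
            ground[r, c] = np.nanmin([ground[r, c], z])
        surface[r, c] = np.nanmax([surface[r, c], z])
    return surface - ground            # NaN where a cell lacks ground or surface returns

# Dummy point cloud standing in for a small forest tile.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000),
                       rng.uniform(0, 25, 5000)])
cls = rng.choice([2, 5], size=5000)    # 2 = ground, 5 = high vegetation
chm = canopy_height_model(pts, cls)
print("Tallest cell (m):", np.nanmax(chm))
```

Grids like this, together with field plot measurements, are what allow per-stand estimates of tree height, canopy density and biomass to be produced at scale.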

Case 3: Lidar and AI for storm damage assessment in Switzerland

When a severe storm struck La Chaux-de-Fonds in Switzerland, a swift assessment of forest damage became imperative. Sixense Helimap was tasked with conducting an urgent aerial survey via helicopter just two days after the storm. Following data acquisition, the team from the AI-powered point cloud classification platform Flai applied their FlaiNet artificial intelligence (AI) models for point cloud semantic classification. This offered a detailed view of the damage in both urban and forested areas. Both companies belong to EAASI. This approach combined aerial mapping with advanced forestry AI models for thorough classification, enabling the direct identification of points associated with fallen tree trunks – even those on the ground, at sharp angles or obscured by other vegetation. Key steps in the process included the 2D rasterization of the fallen tree mask for efficient processing, and the vectorization of more than 10,000 fallen trees or tree parts in the surveyed area (Figure 4). The resulting vector files were crucial for forest damage management, helping to define the extent of damage and pinpoint areas needing urgent inspection. They also determined the orientation of fallen trunks, streamlining manual processes and aiding forestry operations in planning and executing effective recovery efforts.
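The two steps named above can be illustrated with a short generic sketch: points classified as fallen trunks are rasterized into a 2D mask, which is then vectorized into polygons. This is a simplified example using common open-source libraries (numpy, rasterio, shapely), not the pipeline used by Flai or Sixense Helimap, and the cell size, origin and dummy points are invented for the example.

```python
# Minimal sketch: rasterize 'fallen trunk' points into a binary mask, then
# vectorize the mask into polygons for damage mapping.
import numpy as np
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

def fallen_tree_polygons(xy, cell=0.25, origin=(0.0, 100.0)):
    """xy: (N, 2) coordinates of points classified as fallen trunks.
    Returns shapely polygons of connected fallen-tree areas."""
    cols = ((xy[:, 0] - origin[0]) / cell).astype(int)
    rows = ((origin[1] - xy[:, 1]) / cell).astype(int)      # row 0 = northern edge
    mask = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    mask[rows, cols] = 1                                     # 2D rasterization
    transform = from_origin(origin[0], origin[1], cell, cell)
    return [shape(geom) for geom, value in
            features.shapes(mask, transform=transform) if value == 1]

# Dummy cluster of 'fallen trunk' points for illustration.
rng = np.random.default_rng(2)
pts = rng.uniform([10, 40], [14, 41], size=(500, 2))
polys = fallen_tree_polygons(pts)
print(len(polys), "polygon(s), total area", round(sum(p.area for p in polys), 1), "m2")
```

Working on a raster mask rather than on the raw point cloud keeps the vectorization step fast even when tens of thousands of fallen trunks have to be delineated.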

Case 4: Hyperspectral data in forest monitoring

Developing accurate tree species identification methods is essential for assessing biodiversity and supporting forest management. With novel approaches, the extent and patterns of species diversity can be mapped and understood more precisely. Hyperspectral data significantly enhances tree species classification by leveraging detailed spectral information, improved algorithms, texture analysis, cross-sensor integration and versatile applications. Unlike multispectral data, hyperspectral imaging captures reflectance across hundreds of narrow spectral bands, enabling the identification of subtle differences in tree species’ spectral signatures. By combining these bands, it is possible to obtain information on the biochemical properties of vegetation and its health status. In 2022, Group AVT, a member of EAASI, undertook a project to assess tree health in Italy’s Bruneck forest department. The company covered an area of 340km² using hyperspectral data at a GSD of 2-4m, repeating the project a year later (Figure 5). This data enabled the creation of various indices such as the Normalized Difference Vegetation Index (NDVI), Photochemical Reflectance Index (PRI) and Plant Senescence Reflectance Index (PSRI), and produced maps of tree species and dead trees. This information is useful for tracking threats like the bark beetle.

Figure 4: Identified and vectorized trunks are shown by cyan vectors. (Image courtesy: Flai)
Figure 5: Hyperspectral data showing relative forest health in Bruneck, Italy. (Image courtesy: Group AVT)
Figure 6: The use of Lidar allows for the generation of highly detailed 3D models of the trees. By capturing multispectral data, the camera allows the classification of tree species and assessment of their health status. (Image courtesy: Hexagon)
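For reference, the three indices named in the Bruneck project follow standard published formulas. The minimal Python sketch below computes them from per-band reflectance arrays, with tiny invented values standing in for hyperspectral imagery; the band selection and pre-processing used by Group AVT are not reproduced here.

```python
# Vegetation index sketch using the standard formulas (illustrative data only).
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def pri(r531, r570):
    """Photochemical Reflectance Index, from the 531 nm and 570 nm bands."""
    return (r531 - r570) / (r531 + r570)

def psri(r678, r500, r750):
    """Plant Senescence Reflectance Index: (R678 - R500) / R750."""
    return (r678 - r500) / r750

# Tiny dummy 2 x 2 reflectance patches standing in for hyperspectral band images.
red = np.array([[0.04, 0.05], [0.20, 0.06]])
nir = np.array([[0.45, 0.50], [0.25, 0.48]])
print(ndvi(nir, red))   # low values flag stressed or non-vegetated pixels
```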

The integration of hyperspectral data with technologies such as Lidar improves classification by providing insights into tree structure and crown characteristics. Its versatility supports applications ranging from field-based identification to large-scale forest monitoring, enabling tailored management and conservation strategies. For instance, tree crown segmentation can also be combined with tree species data for enhanced analysis. While satellites and UAVs can also acquire hyperspectral images, aerial platforms can cover a complete forest in a reasonable timeframe (at most a few days), minimizing time differences between images and offering a spatial resolution that facilitates tree identification and detailed analysis.

Figure 7: The Leica CountryMapper’s ability to simultaneously capture imaging and Lidar data results in foundational geospatial products that are ideal for understanding forest structure. (Image courtesy: Hexagon)

Case 5: A digital twin of the rainforest in La Gamba, Costa Rica

Green Cubes, an initiative by Hexagon’s green-tech subsidiary R-evolution, is pioneering the creation of a digital twin of the rainforest in La Gamba, Costa Rica, to support ongoing monitoring and protection efforts. Green Cubes are sponsorable assets that corporate entities can purchase as a way of supporting biodiversity conservation and fulfilling their environmental, social and governance (ESG) commitments. In partnership with La Gamba Tropenstation, an Austrian research station associated with the University of Vienna, R-evolution is mapping 500 hectares to establish the first 125 million Green Cubes.

By employing cutting-edge airborne technology from Leica Geosystems alongside terrestrial Lidar scanners, the project accurately measures and virtually visualizes the rainforest, ensuring continuous conservation efforts. This initiative not only encourages collaboration between scientists and the local community, but also offers forest owners sustainable income opportunities. The project utilizes the Leica CountryMapper hybrid airborne system, which uniquely combines Lidar and large-format imagery within a single sensor to generate a comprehensive 3D digital landscape of the rainforest (Figures 6 and 7). This system facilitates the quantification of forest volume and the monitoring of vegetation changes over time. By capturing image data across multiple spectral bands, the system registers these with Lidar data to create detailed representations of the rainforest canopy, constructing an index of various species. The data is further refined through integration with high-resolution ground-truthing data from the Leica BLK2GO terrestrial Lidar scanner, setting new benchmarks for analysing tree biomass volume and diameter.

Further reading

Lidar technology for scalable forest inventory, GIM International Vol. 37, issue 3: gim-international.com/content/article/lidar-technology-for-scalable-forest-inventory

Lidar surveys in Finland: metsakeskus.fi/en/open-forest-and-nature-information/collection-of-forest-resource-information

National Land Survey of Finland: https://www.maanmittauslaitos.fi/en

Conclusion

As climate change increasingly threatens forests, advanced monitoring techniques are essential. Crewed aerial platforms offer significant benefits in data collection, efficiently covering large areas in a single flight. Their flexibility allows multiple sensors to be deployed simultaneously, collecting comprehensive data across various spectral bands. This capability is advantageous for generating diverse datasets and enhancing the depth and breadth of analysis. Crewed platforms are also valuable in emergencies, offering rapid response and timely data collection crucial for effective disaster assessment and management. The partners of EAASI lead this technological advancement, helping to preserve and sustainably manage forests for future generations.


Coverage so much greater than GNSS

A complete suite of precise positioning solutions that exceed your farthest goals

Centimeter-Level Positioning Accuracy, Anywhere

With local RTK base stations, VRS networks and Trimble’s worldwide CenterPoint® RTX correction service via satellite or cellular/IP

Longer Product Life Cycles

Durable, completely supported future-proof solutions that last as you grow

Precise, Consistent Performance

Trimble ProPoint® GNSS technology leverages all constellations and frequencies with tight INS integration and sensor fusion to deliver unparalleled performance in challenging environments

Global Support Get expert help through the entire lifecycle, wherever you are

Flexible Integration

Scalable technology platforms for the full autonomy stack and faster go-to-market implementation

oemgnss.trimble.com | sales.gnssoem@trimble.com OEM GNSS Division, 10368 Westmoor Drive, Westminster, CO, USA

Collaboration, Innovation and Resilience: Championing a Digital Generation. Brisbane, Australia, 6-10 April

• Learn and share globally

• Network across related professions

• Make an impact on your career

• Enjoy undisturbed face-to-face time

Join us in Australia for FIG Working Week 2025, where you'll gain unparalleled access to the international surveying and geospatial community. This event offers a platform to exchange experiences and stay at the forefront of the industry, covering topics such as innovation, resilience and sustainability. Why attend? Find out more at www.fig.net/fig2025/

Geodata insights with no expertise and no code, in no time

High-frequency mapping for everybody

Nowadays the news is all about innovations on the AI front, and geoinformation is no exception. Today, geodata users can get insights about which objects are present, where they are and what has changed – with no geoAI knowledge or code required. And the data can be processed as fast as it can be collected. The credit for tearing down some of the last remaining walls in geospatial complexity goes to the fast-paced company Blackshark.ai.

Blackshark.ai is a geospatial metaverse company building a photorealistic 3D digital twin of the planet by applying artificial intelligence/machine learning (AI/ML) to satellite and aerial imagery. In the annual Products That Count Awards, the company’s new no-code object detection platform, ORCA HUNTR, was chosen as a ‘2024 Best Product’ in the AI & Data category – despite only having been launched in December 2023. “In no time, it has been adopted by multiple customers across defence and intelligence, civilian government, and sectors such as insurance, mining, energy and public safety,” says Gastao De Figueiredo. He is senior vice-president responsible for product management at Blackshark.ai, based in Seattle, USA. “The ORCA HUNTR platform enables truly anyone to develop their own models for object detection and classification. If you can play with crayons or paintbrushes, you can build your own object detection model. This empowers non-technical users and AI/ML specialists alike to build new models very quickly and with sparse data. We have also built and maintain our own global models focused on buildings, vegetation and roads,” he states.

Speed

Four years ago, mapping every object on Earth every 72 hours was a huge achievement. In 2020, Blackshark.ai did just this for Microsoft, which wanted to be able to extract features (buildings, vegetation) over the whole planet. In 2024, that is not so extraordinary any more. “We can do it faster now thanks to the availability of more powerful graphics processing units (GPUs). We refer to it as high-frequency mapping: we give customers the ability to process very large quantities of geospatial data and detect objects, extract features or monitor change. They can process the data as fast as they can collect it,” continues De Figueiredo.

About the author

Frédérique Coumans is senior editor for GIM International. For more than 25 years, she has been covering all aspects of spatial data infrastructures as editor-in-chief of various magazines on GIS, data mining and the use of GIS in business. She lives near Brussels, Belgium.

For example, when earthquakes destroyed parts of Syria and Turkey in 2023, a map of impacted buildings was made and delivered to humanitarian aid organizations within less than one hour of obtaining the new satellite imagery. Other examples include customers that want to process imagery daily to keep track of assets around a large construction site, a mine or an oilfield. De Figueiredo explains: “One of our customers is concerned about detecting objects that, when in proximity to buildings, represent an insurance risk – such as a trampoline near a house, or wooden pallets piled up near a commercial or industrial building. They have developed their own trampoline or wooden pallet detection models and run them over all the areas where they may have existing customers or prospective customers. Another customer is concerned about monitoring encroachment on their gas pipelines. There are no-code model examples in all kinds of dynamic situations.”

Training

Several AI companies in the market use AI to extract data from geospatial and other sources, transform it and sell the transformed data to customers. But Blackshark.ai is not the owner or provider of data. Instead, it licenses its software to customers, who then apply it to imagery they already have or have the rights to access, train a self-made model with it and obtain the insights they need. The models can be built in a matter of days or even hours (see Figures 1 and 2).

Accuracy

AI model accuracy is dependent on the quality of the input and the training. It is measured using specific statistical indicators such as Intersection over Union (IoU). This indicates, for example, what percentage of the pixels identified by the model match the actual correct pixels in the image. Accuracy is of the essence for Blackshark.ai, since the company has many customers that rely on its products for mission-critical planning and operational support, such as in the defence and intelligence space. “For our no-code object detection platform, these figures are going to vary from customer to customer depending on the imagery and training they did. A recent test conducted by a top global insurer indicated that a single user was able to reach an IoU score of over 70% in under two days of training. We test our own global building detection model against control datasets all over the world, as the accuracy will vary with specific details such as the off-nadir collection angle, for example. The average global accuracy is better than 94% without any human correction,” states the senior vice-president.

Figure 1: A screenshot of the platform. The customer works on the left-hand side, marking the images to signify ‘These are the objects I’m interested in’ or ‘These are not the objects I’m interested in’. In this example, the model is being trained to detect air-conditioning units on a rooftop. The image on the right-hand side is the real-time response from the AI model. The user sees exactly what the model is learning, as it is learning. This helps customers build reliable, trusted and explainable AI models.
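For readers unfamiliar with the metric, Intersection over Union for a binary segmentation mask can be computed as in the sketch below. This is a generic illustration of the indicator described above, not Blackshark.ai's evaluation code.

import numpy as np

def iou(pred, truth):
    # Intersection over Union for two boolean masks:
    # pixels predicted as the class vs. pixels that truly belong to it.
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0   # both masks empty: perfect agreement by convention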

Ethics

At Blackshark.ai they are sensitive to the possibility of misuse. “We’ve created an Ethical Use of AI Policy in our company and also have an Ethics Advisory Board, which includes a leading researcher and lecturer on ethics and technology as a consultant to the board. Any of our team members can raise issues to the Ethics Advisory Board if they have questions about potential misuse cases or applications of our technology,” comments Gastao De Figueiredo. He adds: “Our End User License Agreements contain restrictions that prohibit the use of our technology in contravention of our Ethical Use of AI Policy.”

The average global accuracy is better than 94% without any human correction

Gastao De Figueiredo MSc leads the geospatial intelligence product line for Blackshark.ai. He previously held leadership positions at Microsoft, Apple and Sun Microsystems.

Figure 2: A screenshot of the platform. In this case, the user is creating a model to detect and classify surfaces as either ‘Vegetation’ in red, ‘Water’ in yellow, ‘Roads’ in green or ‘Everything else’ in magenta. Using the same approach as in Figure 1, the user applies brush strokes or polygons on the left-hand side, and the right-hand side shows the real-time answer from the AI model.


Another issue is the AI-related energy consumption of data centres. He doesn’t deny this concern, but estimates that the overall balance is positive: “There is no putting this genie back into the bottle. AI and computing in general have a considerable impact on creating efficiencies for us as a species, so I believe that while the growth in consumption is happening in the short term, this will in turn be offset by the broader application of AI and computing in general to help us become more efficient in other energy consumption and generation areas. Consider the effects of more efficient public transportation and supply chains by using AI to optimize routes, schedules and even to help the design of next-generation equipment that is more efficient.”

Companion intelligence

When talking about possible downsides of AI, Gastao De Figueiredo stays nuanced. “I prefer to think in positive terms. I frame this based on a quote from Ludwig Wittgenstein: ‘The limits of my language are the limits of my world’. To the extent that AI can process information volumes and create connections across symbols and structures that we humans don’t understand, I see a possible scenario where this ‘companion intelligence’ is there to help us expand the limits of our world – from AI-powered real-time translation that lets us converse with another human being in any language no matter where they are, to AI helping us develop new mathematical ‘languages’ to unlock mysteries of the universe or understand the ‘language’ of biochemical reactions at a cellular or molecular level,” he says. “But as with every

technology throughout the history of humankind, there’s a possible downside. I hope that this ‘companion intelligence’ is not weaponized by bad actors to control or restrict human freedoms or harm human existence. It is a very sobering prospect to think that regimes that place a different value on human life and freedoms are now investing heavily to master AI as well.”

Pressure

In terms of growth perspectives, De Figueiredo sees two big trends. “First, the amount of geospatial data available is increasing exponentially. The cost of launching new constellations is lower and we are seeing an exciting increase, not only in collection capacity, but also in quality and sensor types: higher resolution, higher revisit rates and more spectral bands. At the same time, aerial collection sensors continue to improve their resolution and capability,” he states. “The second trend is unfortunately driven by the geopolitical pressures we see mounting in the world and the return to protective/competitive trade policies. Combined, these two trends create intense pressure on governments and corporations alike for more and faster insights from geospatial data, which drives the need for more AI and more advanced models. An ensemble of AI models working together to help track important changes or events can increase the resiliency of the environment, or of supply chains, or the safety of people in light of man-made or natural phenomena. A scenario where AI-powered change detection models automatically alert forest management agencies to the possibility of a wildfire based on sensing vegetation changes, degrading moisture levels and the presence of, say, electrical equipment in the vicinity, is within reach today,” he concludes.


Mapping new horizons: advancing RTK capabilities with Hovermap’s versatility

In the field of 3D scanning and surveying, Emesent’s real-time kinematics (RTK) solution for Hovermap stands out. By eliminating or reducing the need to lay out and georeference ground control points (GCPs), RTK significantly accelerates the process from scanning to actionable insights, and overcomes scenarios where placing ground control targets is impractical. Emesent has now advanced this capability even further.

As every surveyor knows, manually georeferencing and aligning large scans is a time-consuming and labour-intensive process, and locating control points within a survey can be challenging. Emesent’s RTK solution for the Hovermap Lidar mapping system addresses this by automating high-accuracy georeferencing and drift correction, streamlining workflows and minimizing the potential for error. Accuracy is further enhanced by intelligently leveraging a combination of both RTK and SLAM technologies to optimize results and ensure the most reliable, robust point cloud.

Intelligent georeferencing

Emesent’s processing and visualization software, Aura, intelligently balances RTK and SLAM data based on the best position quality to deliver the highest-quality results. Aura chooses RTK when corrections are favourable and automatically switches to SLAM when they are not. This has the additional benefit of providing a more accurate result in environments that are traditionally challenging for SLAM, such as very large areas with limited features or long linear repetitive environments.
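The selection logic can be pictured along the following lines: prefer the RTK pose while the receiver reports a favourable (fixed) solution, otherwise fall back to the SLAM pose. The Python sketch below is purely illustrative pseudologic; Aura's actual fusion, thresholds and field names are not published here, and the 'RTK_FIXED' label is an assumption.

from dataclasses import dataclass

@dataclass
class PoseEstimate:
    xyz: tuple          # position estimate
    source: str         # "RTK" or "SLAM"

def select_pose(rtk_fix_type, rtk_xyz, slam_xyz):
    # Prefer the RTK solution while corrections are favourable (a fixed-ambiguity
    # solution), otherwise fall back to the SLAM-derived pose.
    if rtk_fix_type == "RTK_FIXED":     # assumed label for a fixed-ambiguity solution
        return PoseEstimate(rtk_xyz, "RTK")
    return PoseEstimate(slam_xyz, "SLAM")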

The integration of 360-degree colourization adds further context to the point cloud, uncovering richer insights and aiding stakeholder understanding. The same scan can be used to extract 360-degree images which can be viewed side by side with the point cloud. Images are automatically georeferenced, making it easier to bring the point cloud into building information modelling (BIM) software such as Revit.

Emesent’s Hovermap Backpack RTK Kit.

Designed with versatility in mind

Hovermap has been designed with versatility in mind, enabling the same device to be deployed in multiple ways to address any scanning requirement. Similar principles have been extended to Emesent’s RTK solution for Hovermap, allowing RTK to be leveraged on a drone, vehicle or backpack, giving clients flexible deployment options and enabling them to choose the best RTK platform for their environment. Clients can also monitor RTK quality in real time in the Commander app, providing confidence that a high-quality output will be achieved.

Validated results

To provide clients with the confidence to use the RTK solution on their highest-accuracy projects, Emesent partnered with Orion Spatial Solutions to independently validate the accuracy of the Vehicle RTK Kit. This seamlessly connects the Emlid RS2/2+ and the latest RS3 to the Hovermap and attaches securely to any vehicle via magnetic or suction cup feet. Data was captured along a 2km stretch of road in Brisbane, Australia, with the Vehicle RTK Kit mounted to the back of a van to give good visibility of the road surface. Six passes were completed in either direction at approximately 50km/h, with a total capture time of 50 minutes. The baseline length was a maximum of 1km, with the Emlid RS3 rover connected to a local base station. The data was reprojected to GDA2020 MGA Zone 56 and a geoid applied (AHD). No ground control was used in the processing of the dataset. The dataset was compared against 75 quality control points which Orion Spatial Solutions established on the 2km stretch of road using a Leica TS13 total station and Leica LS15 digital level, in accordance with Queensland Government Transport and Main Roads Surveying Standards. The results are shown in Figure 1.
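A validation of this kind is typically summarized by computing horizontal and vertical residuals between the georeferenced point cloud and the surveyed control points, and reporting them at the 95% level. The sketch below shows that arithmetic in Python; it is a generic illustration with assumed array names, not Orion Spatial Solutions' processing.

import numpy as np

def accuracy_summary(cloud_pts, control_pts):
    # cloud_pts[i] is the point-cloud position matched to control point control_pts[i]
    # (both Nx3 arrays in the same projected CRS, e.g. GDA2020 MGA Zone 56 with AHD heights).
    d = cloud_pts - control_pts
    horiz = np.hypot(d[:, 0], d[:, 1])    # horizontal residual per control point
    vert = np.abs(d[:, 2])                # vertical residual per control point
    return {
        "horizontal_95": np.percentile(horiz, 95),
        "vertical_95": np.percentile(vert, 95),
    }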

Real-life applications

Today, RTK is being successfully leveraged across multiple applications, including:

• Change detection and repeatability for mapping and surveying: RTK streamlines the survey process by reducing time and improving accuracy. Its georeferencing capability enables easy overlay of repeat scans, facilitating insights into environmental changes and site developments.

• Faster, easier scanning for forestry and topographic mapping: In expansive and rugged terrains where deploying numerous GCPs is challenging, RTK offers a practical solution by removing the need for such targets.

• Enhanced accuracy for large, featureless environments: RTK excels in featureless environments such as open fields, stadiums and quarries, where traditional SLAM techniques might struggle due to the lack of distinct features.

• Safer scanning in high-risk or challenging areas: RTK is particularly advantageous in hazardous locations, like coastal areas with eroded edges, where placing targets would be unsafe or impractical.

• Scanning in inaccessible areas: In urban environments or areas with drone restrictions, RTK-equipped vehicles offer a viable alternative, enabling effective data collection where drones cannot operate.

• Large project areas: For extensive projects such as roadway developments, road condition inspections, urban planning and civil engineering, RTK provides the necessary accuracy and efficiency.

95% of points are within 29mm horizontal and 18mm vertical distance of the control points.

For use cases where speed and ease of capture are more important than accuracy, the Emesent Backpack RTK Kit has been launched. This allows georeferencing in GPS-denied environments by starting a scan outdoors using RTK and then seamlessly transitioning to SLAM capture indoors during the same scan, without requiring indoor setup of control points using a total station to georeference the point cloud. This method of capture is also beneficial in forestry applications, construction progress tracking, and any project where small areas of the site have limited or no GPS coverage.

• Rapid results in case of natural disasters: When responding to events such as landslides or coastal erosion, RTK facilitates swift and safe scanning, allowing for prompt analysis and response by first responders.

Leveraging the best of RTK and SLAM

Delivering automated georeferencing and drift correction, Emesent’s RTK solution represents a step forward in geospatial mapping technology, providing a robust, efficient and highly accurate surveying solution for diverse scanning needs.

The Vehicle RTK solution achieved the following global accuracy:
Figure 1: The results from the independent validation of the accuracy of the Vehicle RTK Kit.
Emesent’s Vehicle RTK Kit.

Nokia’s vision for industrial-grade drones

Integrating connectivity and UAVs

Nokia hopes to leverage its communication technology expertise to make a significant impact in the market for uncrewed aerial vehicles (UAVs or drones). Thomas Eder, head of Nokia’s Embedded Wireless Solutions, is leading the introduction of Nokia Drone Networks along with industrial and mission-critical private 5G networks. In this interview, he discusses Nokia’s strategic move into high-end UAV solutions for sectors like mining and oil & gas. Eder emphasizes Nokia’s collaborations with geospatial industry leaders, such as YellowScan, to create integrated, turnkey solutions that push the boundaries of innovation.

At what stage of its journey to become a drone manufacturer does Nokia currently find itself?

We specialize in areas where precise mapping is essential, using high-quality sensors and technologies to produce detailed maps, particularly in situations that require linear measurements or scanning. We have already taken some general steps in the sector, such as with Citymesh in Belgium and Swisscom Broadcast in Switzerland (both nationwide drone networks for which Nokia’s turnkey 5G-connected drone

platform was selected, Ed.). But as a drone manufacturer, we are specifically targeting the verticals of mining and oil & gas.

In view of Nokia’s role as a telecommunications infrastructure provider, how is the company positioning itself strategically in the UAV market?

There tend to be ‘bubbles’ of companies, such as groups of drone manufacturers and groups of telecommunications and IT providers. We are one of the few organizations that can bring all those

aspects together in a single company, and that gives our solution such strength. On top of that, we are used to working for service providers. As Nokia, we’ve always worked for telecom service providers such as Deutsche Telekom, Proximus and Orange.

The major change is that we’re now working for drone service providers. So we’ve built our solution with their needs in mind: it’s a platform for drone service providers that want to satisfy a multitude of use cases. From day one, we have been thinking about not only BVLOS operations but also superior connectivity.

A Nokia drone on a mapping mission. (Image courtesy: Nokia)

What does this look like in practice?

Well, you might have noticed that we changed our logo in 2023, and that was not just because we thought our old logo was outdated; the change also marked a strategic transition for Nokia from being a classical telco company that was very much oriented towards telco service providers, to being a B2B-oriented service provider. In practice, we actively approach relevant businesses out there. We work together with big mining enterprises, and with the big oil & gas companies. We have a chance to position our product – our drones – directly to the enterprises. That helps us to really combine the strength of our network products on the one hand, and also the strength of our applications, such as the drones on top of the network, on the other.

“It’s clear that industrial-grade drones require industrial solutions, not hobbyist kits”

Although a lot of progress has been made, the UAV industry is still in its early stages and is largely dominated by consumer products, with DJI leading the market. At Nokia, our goal is to help mature the industry by leveraging advanced communication technologies like 4G and 5G within the drone ecosystem. We want to make it clear that industrial-grade drones require industrial solutions, not hobbyist kits. We’ve developed a comprehensive, turnkey ‘drone-in-a-box’ product that includes everything from the drone, docking station and parachute, to various ecosystem payloads and software. Our solution is designed for seamless

integration into existing IT environments with cloud-based services and open APIs. For customers needing connectivity – think of large mines where our communication infrastructure is already in place – we can offer a cohesive package that combines our technology with our drones. This integrated approach not only simplifies operations, but also enhances the overall efficiency and effectiveness of the solutions we provide.

You’ve established a number of partnerships with companies like YellowScan. What is the significance of a collaborative approach?

Our key aim is to provide a data collection platform. Obviously, we are not experts at building laser scanners or any other kind of measurement equipment. We know from experience that you need an ecosystem around you in order to satisfy highly demanding customers with their specific needs. Additionally, I’ve recently observed that some of the leading companies in the photogrammetry industry are hinting that the technology might be approaching its peak, especially when it comes to creating 3D point clouds. That signals that we’re on the brink of needing something more advanced, which is logical in view of the never-ending rise in mapping our environment all over the globe.

I see Lidar as the logical progression in this field. Lidar technology stands out, particularly when we consider the stringent industrial standards and specific requirements our clients have. It allows us to confidently say that our point clouds will meet those standards because, as your readers know, Lidar is incredibly reliable and offers a level of predictability that photogrammetry sometimes lacks. Take, for example, projects involving cell towers or the installation of equipment on such wireless communication structures. Precision is everything in these cases – down to 0.5 degrees or centimetre-level accuracy. Those are areas where traditional photogrammetry might not always hit the mark. YellowScan is an exciting example because it is a very agile and extremely capable Lidar player with a product portfolio that gives it an extremely powerful standing on the drone market. We see that many of our existing customers are also interested in YellowScan products. Many people with inspection use cases in mining and oil & gas already know YellowScan, and otherwise have got to know them. All those people knew Nokia for our communication infrastructure, and now for our drones too. Another big plus is that YellowScan has a brilliant corporate culture, and for a relatively small company it has an extremely powerful worldwide distributor network.

Thomas Eder is leading the rollout of Nokia Drone Networks and mission-critical private 5G.

You mentioned your drone-in-a-box solution. How would you pitch that to industry professionals?

With our 5G-connected drone solution, we are able to significantly improve repetitive data collection and minimize the amount of effort required to collect the data. In contrast to what often happens today, we believe that it should not be necessary for drone pilots and data processing experts to actually be in the mine or in the field during a survey mission. Instead, they should be able to sit in their office – perhaps at home, perhaps even on the other side of the planet – controlling the drone and processing the data in real time. This is supported by our advanced BVLOS capabilities (beyond visual line of sight, Ed.) enabled by the 4G/LTE or 5G connectivity. This has been integrated from day one, allowing data collection to be done

remotely instead of repeatedly going on site. This significantly improves worker safety, as drones can be used to inspect areas that may be difficult or dangerous for employees to access.

Additionally, this reduces travel costs, because instead of sending a whole team of experts, the drone-in-a-box solution enables you to send just one or two people to position the drone, get it ready to fly and do any maintenance. Moreover, reduced travel means less CO2, so in that respect it supports environmental sustainability. We also reduce the huge time gap between when the mission finishes and the data processing starts, because as you are conducting the mission, the data is immediately stored in your cloud or on your PC. So all the data, no matter whether it is telemetry or payload data, is always available in real time, and that is thanks to the 4G/5G connectivity. We remove the whole hassle of USB dongles or SD cards. Therefore, our solution is certainly of interest for land surveyors and other geospatial professionals – not only for exploring terrain for applications such as mining, but also in agriculture for crop monitoring, infrastructure, and domains like land survey, land planning and land use, for example.

What about the payloads?

Land surveys are often composed of multiple data layers. You begin by mapping the area using photogrammetry with a high-resolution camera, such as a 61MP sensor. Then you use a thermal camera to capture the thermal signature of the perimeter, identifying hotspots. Next, a 3D point cloud is generated using a scanner. Finally, if needed, you gather wireless spectrum data, which is important for establishing radio coverage or additional frequencies. In our solution, all these data layers can be collected using a single drone. And a key part of our solution is that you can easily change the payload; it takes just 5-10 seconds. This simplifies data collection and allows users to gather multiple data layers. This flexibility also enables us to tailor the solution to specific customer needs.

Beyond hardware, the drone can be used with custom software, like Rohde & Schwarz for radio data collection. We’re building an ecosystem similar to the iPhone’s. In effect, the iPhone serves as a platform with basic functions like calling and texting, but becomes much more powerful with apps for things like messaging, photography and email. Similarly, our Nokia Drone Network acts as a platform. Users can enhance its capabilities by adding software, much like downloading apps from the App Store. This ‘platform thinking’ allows for a versatile and expandable solution that adapts to various use cases, which is in line with our Mission Critical Industrial Edge (MXIE) strategy.

Nokia Drone Networks is a complete 5G-connected UAV solution. Pictured here is the docking station, enabling 24/7 automated remote operations and charging. (Image courtesy: Nokia)

A YellowScan Surveyor Ultra Lidar system mounted on a Nokia drone. (Image courtesy: YellowScan)

Your company often describes Nokia drones as ‘not only your eyes in the sky, but also your grey matter’. What is meant by this, exactly?

Firstly, we think about the drone, of course. 75% of the use cases can be covered with a video camera. From our collaboration with Airbus Defence & Space, we have extensive experience in the video surveillance domain. In effect, what we’re offering is flying 4G/5G-based video surveillance. But this goes much further than being just an eye in the sky.

The grey matter we refer to is like a part of the central nervous system: the thing that enables your body, your brain, to perceive data from all the senses you have. Similarly, we want to make the drone a part of the central nervous system of an industrial site, of a mine, of an oil & gas plant, or of whatever you are surveying. This brings me back to the importance of having multiple layers of data. For example, when collecting environmental data, even seemingly routine information like barometric pressure,

temperature and wind conditions can be valuable. The drone constantly monitors these factors as part of its flight control system, because they are not only essential for its operation but potentially useful for other purposes as well. So instead of installing individual weather and wind sensors throughout a site such as a mine, oil & gas plant or rugged terrain, why not leverage the drone flights that are being performed three times a day? By collecting this data during regular flights and storing it in a centralized data lake, it can be made available to the entire company. Integrating it into the industrial environment maximizes the utility of the drone’s data. Essentially, it becomes a vital part of the corporate nervous system, connecting and enhancing the flow of information.

First we had 4G, now we have 5G, and we are gradually moving towards 6G, although that will still take a while. Looking to the future, which developments do you foresee in the next couple of years?

I would say the trend in telecommunications is towards more intelligent networks, especially with 5G positioning enabling application-aware networks. Referring again to the Apple ecosystem: just as devices like iPhones and MacBooks are aware of each other and seamlessly share features, our Nokia Drone Network solution exemplifies this interaction. When paired with a Nokia network, our drones unlock advanced, attractive capabilities, making the network

more powerful. While 6G is on the horizon, 5G Advanced serves as an important bridge to that more distant future. Drone-enabled 5G Advanced networks are central to Nokia’s strategy, and should be seen as absolutely vital for both future network capabilities and enterprise customers who can use drones for a multitude of applications. The recent Deutsche Telekom video on Nokia Drone Networks highlights how network APIs and the 5G-enabled Internet of Things (IoT) create innovative, forward-thinking use cases. This demonstrates the crucial role drones play in our evolving network landscape.

Lastly, what excites you most about Nokia’s involvement in the drone industry?

We are extremely proud that Nokia has entered the drone industry, as we think we can make a big difference with our solutions. We are eager to collaborate with the drone ecosystem and with the right industrial partners. The drone sector is still such a young industry, and that offers a lot of chances for a newcomer like us, with a different background. We are convinced we have a lot to add to this ecosystem.

A fleet of Nokia drones can be controlled from a single remote operation centre. (Image courtesy: Nokia)

Digital construction: insights into the PIX4Dcatch smartphone scanning tool

PIX4Dcatch is an easy-to-use mobile 3D scanning and augmented reality (AR) visualization tool used by architecture, engineering and construction (AEC) professionals. It combines photogrammetry and Lidar technology with RTK positioning for high accuracy. In this interview,

Robert Greenhalgh from Ingenieurbüro Daeges shares his insights and experiences with this innovative tool, including its practical applications and the impact of new AR features on projects.

Ingenieurbüro Daeges is an engineering and planning firm specializing in constructing fibre-optic broadband networks throughout southern Germany. Every year, the company measures and documents around 100km in the fibre-optic area alone. Accurate documentation with high GIS requirements is a crucial part of the firm’s projects. Meeting these requirements entails meticulous surveying of installed pipelines, connection points and associated physical infrastructure essential for network maintenance.

Daeges’ workflow includes documenting trenches before closure, an essential step to capturing the details of the installed pipe network and monitoring the construction progress. The firm’s operations span southern Germany and, with demanding project schedules, it is often impractical for a construction manager or surveyor to always be on site. The PIX4Dcatch mobile scanning tool offers a solution, ensuring thorough documentation even in

their absence. This mobile scanning device has allowed Daeges to empower on-site staff to efficiently document infrastructure on the go, according to Robert Greenhalgh from Daeges.

What is your professional background and typical projects?

I have worked in the 3D space since 2007, initially in software development and support for a leading laser scanning manufacturer. For the past five years, I have worked in more traditional surveying for road planning, utility mapping and fibre to the ‘x’ (FTTx) network documentation. At Ingenieurbüro Daeges, we plan and oversee construction, and document FTTx projects across southern Germany. We are also active in road and utility planning.

How long have you been using PIX4Dcatch?

We have been using PIX4Dcatch on projects for over a year, with testing beforehand to establish the capabilities. We currently have 21 mobile devices with PIX4Dcatch – some are used by our team directly to support our survey projects, others with the FTTx construction teams to document the built network in real time. The subsurface infrastructure can be scanned easily by on-site staff and then uploaded to PIX4Dcloud, where it’s automatically processed and then accessed by the surveyor at the office.

Figure 1: PIX4Dcatch uses AR visualization to view previously scanned trench utilities under concrete.

Can you tell us more about the new AR functionalities, such as AR Points?

I have tested all the AR functionalities with a range of survey and CAD data. I tested AR Points with property boundary points and points based on utility plans. While there is some manual work involved, which can take time, the feature offers benefits in terms of

understanding the spatial interactions at a specific location.

How about the AR Overlay feature?

The AR overlay feature enables you to project and view georeferenced DXF, IFC or 3D models in real time on the

construction site using augmented reality. I’ve used the AR overlay feature to integrate DXF and IFC models into real-world utility documentation and home construction projects. We are actively discussing improving the placement accuracy with the development team, as we believe this tool holds great potential, especially with the improvement of placement stability. Within Daeges, we’re planning to equip our teams with RTK-enhanced devices and PIX4Dcatch, linked to a project containing property boundary information and fibre-to-the-home (FTTH) planning drawings. This will allow our site teams to interact with the design and other spatial information in real time. This will be a huge advantage throughout our design and construction processes, as design revisions will be immediately available on site and in true space.

What specific challenges does the AR functionality of PIX4Dcatch address?

We see this as the solution for visualizing future infrastructure on-site without needing expensive graphic design work, or to show property boundaries without a surveyor needing to be on site. These AR tools can streamline the way we work and maximize the time on site by bringing clarity from the office desktop into the construction environment.

In your opinion, what is the impact of mobile or smartphone scanning for the AEC industry?

Mobile-based measurement has changed how we document construction of our FTTx projects, and we are testing it on a range of other survey tasks including mapping of wastewater networks. The key aspect is that it allows a wider range of users to take measurements, removing the requirement for survey personnel to be on site. The direct upload and automated processing are also important as they reduce manual steps in the process.

MORE INFORMATION

https://www.pix4d.com/product/pix4dcatch/
https://www.pix4d.com/blog/Lidar-photogrammetry-point-cloud-comparison/
https://www.pix4d.com/product/rtk/
https://www.pix4d.com/blog/usability-accuracy-PIX4Dcatch-mobile-scanning/
https://www.youtube.com/watch?v=2M8xs7tW6aI
https://www.pix4d.com/blog/mobile-3D-scanning-digital-construction/

Figure 2: Digital 3D view of the subsurface infrastructure that has been scanned with PIX4Dcatch and processed in PIX4Dcloud.
Figure 3: The AR Points feature in PIX4Dcatch shows intuitive visual 3D markers in augmented reality.

How early explorers and cartographers documented the world without satellite GPS

Charting the unknown

Ever since the Age of Exploration, surveying has played a crucial role in the mapping of our world. This article delves into the captivating history of surveying at a time when GPS did not exist. When early explorers set sail to discover new lands, which surveying techniques were used and which challenges were overcome to help them navigate unfamiliar territories and create detailed maps?

Early explorers faced numerous challenges in their expeditions – from diseases to mutinies – and they had to tackle them without modern technologies that we often take for granted today. Navigation and mapping were particularly arduous tasks, relying heavily on rudimentary instruments, celestial navigation and dead reckoning. One of the primary challenges was to determine precise locations. Without GPS, explorers had to use tools like astrolabes, sextants and compasses to determine their position using the sun and stars. This method was heavily dependent on two factors: clear skies and a highly skilled navigator.

In the 1500s, determining the time of day was one of the main challenges that came with navigation, making it difficult to effectively determine longitude. Consequently, longitudes were often broad guesses, leading to inaccuracies in charts and therefore navigational errors. This challenge persisted until the creation of the marine chronometer in the 18th century. Early cartography was an art form. Cartographers translated explorers’ accounts of their travels into stunning images, often intended for publication in books and newspapers to inform the general population of the adventures. However, the harsh conditions and challenges faced by the explorers who provided the input meant that many of these maps were inaccurate and often speculative.
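The arithmetic behind the longitude problem is simple: the Earth rotates 360 degrees in 24 hours, or 15 degrees per hour, so longitude follows from the difference between local solar time and the time at a reference meridian, which is exactly what the chronometer finally made available at sea. A small illustrative calculation in Python:

def longitude_from_time(local_solar_hours, greenwich_hours):
    # Degrees of longitude from the time difference (15 degrees per hour).
    # Positive result = east of the reference meridian.
    return (local_solar_hours - greenwich_hours) * 15.0

# Local noon (12:00) observed when the chronometer reads 15:00 Greenwich time:
# (12 - 15) * 15 = -45, i.e. 45 degrees west.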

Early surveying techniques

The Age of Exploration (from the late 15th century to the 17th century) marked a significant period in history when continents were connected and Europeans established routes to previously unknown lands. One of the main motives for these expeditions was the establishment of new trade routes, which meant that accurate mapping and surveying became essential for success. One of the first and most crucial tools used by early explorers was the compass. This provided a raw sense of direction, which proved crucial in explorative navigation. To measure angles, explorers used instruments like astrolabes and cross-staffs. Astrolabes, dating back to ancient times, were useful for determining the altitude of celestial bodies. By measuring the angle between the horizon and the sun or stars, explorers could estimate their latitude. Cross-staffs were used to measure angles between distant objects on land, aiding in the creation of basic maps and charts.

Figure 1: Most historic maps were created as artworks for the public. This world map from 1635 is a beautiful example of the craftsmanship that went into the making of such maps, as well as the inaccuracies and assumptions presented.

Figure 2: The iconic sextant was a staple of navigation in the Age of Discovery, providing a means to determine longitude and latitude.

Triangulation also played a key role in early surveying. Explorers and surveyors would establish a baseline between two points and then measure the angles to a third point from each end of the baseline. By applying trigonometric calculations, they could accurately determine the distances and positions of unknown points relative to the baseline. This process was repeated across vast areas to create a network of interconnected triangles, allowing for the accurate mapping of landscapes and coastlines. Triangulation provided a systematic and reliable method for surveyors to gather data and create detailed maps, enabling explorers to effectively communicate the raw characteristics of the terrain back to their countries (a worked example follows the timeline below).

Surveying timeline

Technological advancements throughout the centuries have transformed surveying from a manual, labour-intensive process into a highly precise and efficient field that leverages electronic and satellite technology. Today’s surveyors can achieve levels of accuracy and efficiency that were unimaginable in the past, and the industry continues to evolve with ongoing technological innovations.

12th century: First use of the magnetic compass in mining surveying in Harz, Germany.

16th century: The plane table was invented by Gemma Frisius around 1530, and became widely known through J. Praetorius in the late 16th century.

17th century: Instruments like quadrants and astrolabes, used for angle measurements, were notably improved by Snellius (in 1615) and Picard (in 1669-70). Picard also introduced telescopic sights, allowing for more accurate angle measurements.

18th and 19th centuries: After evolving from earlier instruments, significant improvements were made to theodolites, including the introduction of telescopic sights and more precise angle measurement.

1947: Development of the geodimeter, an electronic distance measurement (EDM) instrument that used light waves to measure distances accurately over long ranges.

1957: Development of the tellurometer, another EDM device that used microwaves for distance measurement, further changing surveying from triangulation to trilateration.

1970s: This decade saw a number of key developments, including:

- The electronic tacheometer, an instrument that combined the functions of a theodolite and an EDM, allowing for rapid measurements of angles and distances.

- Doppler satellite receivers, which utilized the Doppler effect from satellites to determine positions on Earth, enhancing the accuracy of location-based measurements.

- Inertial surveying systems, which used gyroscopes and accelerometers to determine position without the need for external references (particularly useful in areas without clear sightlines).

- The launch of the first global positioning system (GPS) satellite in 1978, which marked the beginning of satellite-based positioning and has revolutionized surveying by providing precise location data anywhere on Earth.
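As a worked example of the triangulation method described above, consider a baseline of known length with the angles to an unseen third point measured at each end; the law of sines then gives the distances to that point. The Python sketch and the numbers in it are illustrative only:

import math

def triangulate(baseline, angle_a_deg, angle_b_deg):
    # Distances from each end of a baseline to a third point C,
    # given the interior angles measured towards C at ends A and B.
    a, b = math.radians(angle_a_deg), math.radians(angle_b_deg)
    c = math.pi - a - b                        # third interior angle, at the unknown point
    ac = baseline * math.sin(b) / math.sin(c)  # law of sines
    bc = baseline * math.sin(a) / math.sin(c)
    return ac, bc

# A 1000m baseline with angles of 60 and 70 degrees observed at its ends:
# the angle at C is 50 degrees, so AC is roughly 1227m and BC roughly 1130m.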

Figure 3: Created in 1727, this beautiful map of Japan shows a distorted visualization of what we now know to be the true shape of the islands.

Technological advancements

During the Age of Enlightenment (in the 17th and 18th centuries), the field of surveying experienced a series of developments that enhanced the precision and efficiency of cartography and navigation. One of the most notable innovations was the enhancement of the theodolite, which evolved from simpler devices like the plane circle. Originating from the work of the well-known 16th-century English mathematician and surveyor Leonard Digges, who described it as the “instrumentum topographicum”, the theodolite was perfected during the Enlightenment to measure angles with high precision in both horizontal and vertical planes. The theodolite is equipped with a telescopic sight for targeting, and graduated circles for angle measurement, allowing for the precise determination of distances and elevations. The integration of clamps and slow-motion screws further improved the theodolite’s functionality, enabling fine adjustments and stable angle measurements. These improvements gradually made the instrument more practical and efficient for navigators and surveyors to take with them on their expeditions.

Another critical advancement was the widespread adoption of the sextant. The sextant’s design was improved with a movable arm (the index arm) and a scale (the arc) graduated to fractions of a degree. This provided navigators with a reliable means of determining their position, even in adverse weather conditions. Like today, levels were an integral part of the surveyor’s toolkit, serving to establish horizontal lines and ensure accuracy in measurements.

About the author

Lars Langhorst (MSc) has a background in geomatics, and is now focusing on digitalization in the world of asset monitoring. At Sweco, Lars is working on digital solutions for the surveying industry, aimed at using data to help engineers make every decision a data-driven decision.

Figure 4: Antarctica long remained one of the last mysteries on the world map, both due to its inaccessibility and its low profitability for explorers.

The life of a surveyor in the 1700s

Equipped with essential tools and a thirst for discovery, countless surveyors set out to map uncharted territories in the 1700s. In the days before satellites and GPS, these were the techniques and instruments they used:

The sextant

First, to establish their current position, surveyors used a sextant to measure the angle between a celestial object (like the sun or a star) and the horizon. This enabled them to calculate their latitude and longitude.

Reference points and theodolites

Next, the surveyors focused on establishing reference points. They employed a theodolite (a telescope mounted on a tripod) to measure both horizontal and vertical angles. By sighting reference points and noting their angles, they created a network of interconnected points. These reference points formed the foundation for their survey work.

Gunter’s chain

Surveyors ventured into the field armed with a metal chain. The so-called Gunter’s chain was exactly 22 yards (approx. 20m) long and divided into 100 links. This gave distance measurement an element of consistency, besides being more reliable and durable than rope. Distances between the reference points created with the theodolite were accurately measured and documented.
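The unit relations behind the chain are standard: one Gunter's chain is 22 yards (66 feet, or 20.1168m) divided into 100 links, 80 chains make a statute mile and 10 square chains make an acre. A small conversion sketch in Python (the example distance is invented):

CHAIN_M = 22 * 0.9144        # 22 yards in metres = 20.1168m
LINK_M = CHAIN_M / 100       # one link = 0.201168m

def chains_links_to_metres(chains, links=0):
    # Convert a distance recorded in chains and links to metres.
    return chains * CHAIN_M + links * LINK_M

# e.g. 12 chains 34 links = 12.34 chains, approximately 248.24m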

Levels for precision

To account for slopes and uneven terrain, surveyors used levels. The levels were then often used with other instruments to gauge height differences over longer distances. By taking readings with the level, they adjusted measurements to ensure the highest possible accuracy.

These devices consisted of a spirit level: a liquid-filled tube with an air bubble that indicates a level surface when the bubble sits in the centre of the tube. Surveyors would position the level at various points along their surveying route, carefully adjusting its orientation to account for any slopes or variations in the terrain. By using levels in conjunction with other instruments, such as theodolites and chains, surveyors could create a consistent reference line for their measurements. The surveyor would position the level at a known point and align it horizontally, from which angles and height differences could be determined.
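The height-transfer arithmetic implied here is straightforward: a backsight to a point of known elevation gives the height of the instrument's line of sight, and subtracting the foresight to the new point gives its elevation. A minimal Python sketch with invented numbers:

def level_run(benchmark_elev, backsight, foresight):
    # Differential levelling: elevation of a new point from one instrument setup.
    height_of_instrument = benchmark_elev + backsight   # elevation of the line of sight
    return height_of_instrument - foresight             # elevation of the new point

# Benchmark at 100.000m, backsight 1.520m, foresight 0.980m -> new point at 100.540m.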

The Age of Enlightenment also saw significant advancements in the methodologies of triangulation and geodesy. Geodesists engaged in determining the Earth’s exact shape and size, which was vital for the development of accurate map projections and for understanding the planet’s physical characteristics. The efforts to resolve the debate between the Newtonians and the Cassinians over the Earth’s shape, through expeditions like the ones to Lapland and Peru, exemplify the period’s dedication to empirical, geodetic research. These technological and methodological advancements during the Age of Enlightenment had a profound and lasting impact on the field of surveying. The period’s contributions to surveying laid the foundations for future developments (many of which did not occur

until the 20th century), setting the stage for modern practices. As a result, today’s surveyors can depend on detailed and accurate maps that form the basis of contemporary navigation and geographic information systems (GIS), and continue to shape our understanding of the world’s geography.

Conclusion

The exploration and mapping of the unknown world during the Age of Exploration was a remarkable feat accomplished by early explorers despite numerous challenges. The absence of modern technologies such as GPS, accurate timekeeping, reliable maps and instant communication made their journeys incredibly perilous. They relied on basic tools and instruments like compasses, astrolabes and cross-staffs to aid their surveying efforts. As the Age of Enlightenment brought technological advancements such as the refinement of the theodolite, the adoption of the sextant and progress in triangulation and geodesy, it revolutionized surveying and enhanced the accuracy of mapping. The legacy of these early explorers and their technological advancements is evident in the detailed and accurate maps we depend on today.

Further reading

Holsen, J. (2015), ‘The Development of Survey Instruments’, The International Hydrographic Review, 61(1). Available at: https://journals.lib.unb.ca/index.php/ihr/article/view/23512 (Accessed: 30 June 2024).

Smith, J., De Graeva, J. (2010), ‘History of Surveying’, https://fig.net/resources/publications/figpub/pub50/figpub50.pdf

Figure 5: A high-end theodolite, one of the ancestors of today’s widely used total station. (Image courtesy: Royal Museums Greenwich)

From hand-drawn maps to digital twins

Digital twins are not just about the technology; they are also about the transformative impact they can achieve. Nevertheless, the technological aspects offer numerous opportunities for the geospatial industry to explore. Here, GIM International’s Wim van Wegen imagines the potential of integrating reality mapping meshes, building information modelling (BIM), Lidar, thermal and hyperspectral mapping, Earth observation data, bathymetry, 3D objects and other geospatial data into a unified system.

As a child, I often drew maps of fantasy worlds, with jagged coastlines, winding roads and creatively named cities. It was my way of bringing imaginary places to life. When I was drawing those maps with felt-tip pens, probably with my tongue sticking out in concentration, little did I know that this playful creativity would not only later resonate with one of today’s most groundbreaking technologies in the geospatial sector – digital twins – but that I would also be working in that sector. Digital twins visualize our world – although it’s safe to say that they do so on a ‘slightly’ more advanced level than my maps did! Whereas my maps were static, digital twins are constantly in motion, integrating real-time data to create a dynamic, living map that allows us to simulate change and make predictions. But before we dive deeper into this topic, let’s start with the basics: what actually defines a digital twin?

The evolution of digital twins

In a nutshell, a digital twin is a dynamic virtual model that mirrors real-world objects, processes and interactions. When anchored in geographic data, it evolves into a geospatial digital twin, offering rich context and seamless integration of high-resolution data to empower more informed business decisions. As with any advancement, digital twins are not immune to hype. So to get a clearer picture of where things really stand right now, and where the technology and applications of digital twins are heading, it is worthwhile to take a closer look at the journey so far. The concept of digital twin technology can be traced back to 1991, when David Gelernter introduced the idea in his book Mirror Worlds. However, it was Michael Grieves, then at the University of Michigan, who first applied the concept to manufacturing in 2002, formally defining the digital twin framework.

The cover of Mirror Worlds, in which David Gelernter first introduced the idea of the digital twin. (Image courtesy: Oxford University Press)

NASA’s John Vickers later popularized the term ‘digital twin’ in 2010. But interestingly, the core idea of digital twins was in use long before these formalizations. NASA actually pioneered the concept during the 1960s, creating exact replicas of spacecraft on Earth that were used for study and simulation throughout space missions. This laid the groundwork for what we now recognize as digital twin technology.

Moore’s Law

The technology has come a long way since the early days, evolving rapidly as it tends to do. The rise of the Internet of Things (IoT) has connected real-world sensors attached to physical objects with the vast network of the internet. Meanwhile, the number of transistors on a chip has continued to roughly double every two years, as predicted by Moore’s Law, and computing power has grown with it. The cloud – now more powerful and widely used than ever – enables the creators of digital twins to easily scale up their models or generate multiple versions for testing, without the need for massive hardware investments. While Moore’s Law in its original form may be reaching its limits, the spirit of innovation it represents continues. The rapid increase in computing power deserves extra attention, as it continues to open new doors for large-scale digital twin creation and application by unlocking the potential of the vast amounts of geospatial data involved. The adoption of artificial intelligence (AI) in combination with the advancement of cloud technology means that digital twins can now process and analyse vast volumes of geodata in real time, providing a more accurate and dynamic representation of physical environments. This integration of high-performance computing and geospatial information enables digital twins to offer deeper insights and predictive analytics, driving innovation across sectors such as urban planning and infrastructure management.

Embracing the reality

In recent years, the metaverse has been at the forefront of discussions that have often verged on hype. In fact, one could argue that the whole concept has more style than substance, in effect branding an immersive digital experience that didn’t really need a new name. But now that the tech industry is shifting its focus away from the hype, attention is turning to the underlying technologies… such as digital twins. The main reason that digital twins haven’t sparked the same level of hype as the metaverse is that geospatial professionals have embraced the digital twin concept as a key output of the large amounts of geospatial data they gather. Thanks to their involvement in its evolution, with the technology already being implemented and further developed across various industries, users are keenly aware of both its potential and its limitations. So what do all the digital twin-related advancements mean for the geospatial sector? Rather than rendering land surveyors obsolete, these innovations are fundamentally transforming and expanding their roles. Digital twins go far beyond traditional 3D visualizations or interactive maps; they are revolutionizing the way we approach data collection. With the integration of vast networks of sensors providing real-time data, the volume, speed and accuracy of information gathering have reached previously unimaginable levels compared with traditional methods. Thanks to the insights that digital twins deliver, they are reshaping the entire landscape of data-driven decision-making.

Redefining the surveyor’s contribution

However, this shift doesn’t diminish the importance of surveyors. On the contrary, digital twins redefine their contributions in this new landscape, empowering surveyors to lead the charge in managing complex data environments. Despite everything that is going on in remote and autonomous data capture, human expertise is vital for ensuring the accuracy, reliability and effective integration of geospatial data, which forms the backbone of these advanced systems. As a result, surveyors are now evolving into geospatial data engineers, taking on crucial roles in interpreting the vast amounts of data that serve as the foundation of digital twins. As technologies like BIM and reality modelling continue to be adopted by multiple industries – from construction and infrastructure to cultural heritage, agriculture and mining – surveyors are uniquely positioned to guide digital transformations by ensuring that these tools are not just implemented, but also optimized for maximum impact to drive innovation across the geospatial sector.

Although I no longer draw maps myself, the interest has endured. I am still a great lover of maps, especially old ones, and I am fascinated by how the environment is mapped. In that respect, I consider myself lucky that I have ended up working in such an inspiring environment. The maps I drew as a child were admittedly low-tech, but the advanced technology involved in the visualization and virtual representation of reality today surpasses anything I could ever have conjured up in my imagination!

In this example of a digital twin in practice, Boston’s digital twin, enhanced by ArcGIS Urban, uses data to visualize and evaluate the impact of urban development projects on housing, zoning and parking within neighbourhoods. (Image courtesy: Esri)

Undated photo of Gordon Moore during his time at Intel. (Image courtesy: Intel)

Mapping lava flows during volcanic eruptions in Iceland

Rapid aerial photogrammetry with high-precision GNSS georeferencing enables experts to precisely monitor eruptions even under extreme conditions, helping to keep people safe.

The crater row of Sundhnjúksgígar eruption seen from the window of the aircraft on 20 March 2024.

Iceland is a hotbed of volcanic and seismic activity, so the country’s geoscientists need to be constantly aware of volcanic unrest and react quickly to new developments by mapping environmental changes and lava flows. This is one of the roles of the Icelandic Institute of Natural History (IINH), which runs a photogrammetry lab in collaboration with Iceland’s leading geohazard institutes. Since speed and safety are critical, the photogrammetry team prefers to minimize the number of people on the ground by using a small aeroplane rather than drones. Facing extreme conditions from volcanic plumes and adverse weather, their flight plans need to be flexible enough to work with what is possible in the air at each moment. Once the aircraft is safely back on the ground, the data needs to be processed quickly and sent onwards for use by other agencies and actors, including the Department of Civil Protection.

The eruptions near to Grindavík

There was a change in volcanic activity in December 2023, when a dramatic new fissure eruption began just north of the town of Grindavík, situated on the Reykjanes peninsula southwest of Reykjavík. This activity on the peninsula was preceded by three other eruptions in Fagradalsfjall, another volcano nearby, that started in 2021. Since then, the inhabitants of Grindavík have had to battle the forces of nature in ways no one wants to experience, including numerous earthquakes and eventually lava flows that reached the town. The task of mapping these lava flows is fulfilled by the photogrammetry lab of the IINH, directed by Birgir V. Óskarsson. “The area is very active now. We have had seven eruptions since 2021,” says Óskarsson. “We first had tectonic activity as precursors and then a series of eruptions as the ones above Grindavík.” During the volcanic unrest, the town subsided by more than one metre with the activation of faults, resulting in nearly 4,000 people being displaced and a lot of damage to houses and buildings from fire and subsidence. However, the people of Iceland are determined to fight against the forces of nature and are keen to protect their homes and businesses – but it is hard to predict when it will be safe for them to return.

The photogrammetry lab of the Icelandic Institute of Natural History:
• creates 3D models for photogeology
• conducts surveys of active environments such as volcanoes, landslides and glaciers
• monitors geoheritage sites
• monitors Surtsey island
• provides open access to data

Photogrammetry reveals trends and hidden dangers

High-resolution nadir imagery is essential in monitoring the hazards and allows the photogrammetry team to generate digital elevation models (DEMs), orthoimages and 3D mesh models, all to follow the progress and evolution of the lava field. By accurately mapping the flow and measuring the area and volume of the lava, the authorities can better mitigate the hazards and predict the course of the flows. They use a DEM to calculate the volumes erupted and estimate the lava effusion rate. By comparing two DEMs, the team can also gain insights into changes in the lava fields and can get a clearer idea of what is happening below the crust of the lava flows – which can inflate if the flow is obstructed, for example. “It is usually hard to see,” says Óskarsson. “But with these DEMs we can identify areas that are inflating and becoming dangerous. Because they can suddenly break out, causing a sudden surge of lava, and if a lot of people are around that could mean fatalities.”
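The DEM-differencing approach lends itself to a compact illustration. The Python sketch below subtracts a pre-survey DEM from a post-survey one to estimate the area of new lava or inflation, the added volume and a mean effusion rate. The file names, the 0.5m change threshold and the survey interval are placeholder assumptions for illustration, not details of the IINH workflow, and both DEMs are assumed to share the same grid.

```python
# Minimal sketch of DEM differencing for lava monitoring (illustrative only).
# Assumes two co-registered GeoTIFF DEMs on the same grid and CRS.
import rasterio

with rasterio.open("dem_before.tif") as before, rasterio.open("dem_after.tif") as after:
    z0 = before.read(1, masked=True).astype(float)            # elevation before (m)
    z1 = after.read(1, masked=True).astype(float)             # elevation after (m)
    cell_area = abs(before.transform.a * before.transform.e)  # pixel footprint (m²)

dz = z1 - z0                          # elevation change per cell (m)
new_lava = (dz > 0.5).filled(False)   # cells with significant uplift (assumed 0.5 m threshold)

volume_m3 = float((dz[new_lava] * cell_area).sum())  # added volume (m³)
area_m2 = float(new_lava.sum() * cell_area)          # area of new lava / inflation (m²)

survey_interval_s = 24 * 3600                         # assumed time between surveys (s)
effusion_rate = volume_m3 / survey_interval_s         # mean effusion rate (m³/s)

print(f"area: {area_m2 / 1e6:.2f} km², volume: {volume_m3 / 1e6:.2f} million m³, "
      f"mean effusion rate: {effusion_rate:.1f} m³/s")
```

In practice the interesting signal is often the spatial pattern of positive dz away from the active vents, which is what reveals inflating areas of the crust before they break out.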

Image showing the size of the lava field of Sundhnjúksgígar on 8 April 2024.

Specifying a new aerial system

As the photogrammetry lab of the IINH provides data used by Civil Protection and others working on hazard assessment, its image collection and photogrammetric processing operations need to be extremely efficient. Previously, the IINH had used standalone cameras for volcano monitoring and worked with the data manually, but without a proper aerial system the processing was tedious. “It was crucial for us to acquire a full aerial system with the camera placed in a stabilizer, with a controller, flight management system, high-precision GNSS and inertial measurement unit (IMU): one proper system that plans the flight lines, with the correct number of images and overlap. That saves a lot of time on planning and processing the data,” Óskarsson explains.

The IINH photogrammetry team uses aeroplanes with medium-format cameras because they can cover larger areas in a much shorter time than drones. Aeroplanes can also fly high, usually safely above other air traffic such as drones, helicopters and smaller planes. Moreover, drones need to be deployed from the ground, which is not always feasible in volcanic situations when drone operators would need to be located near dangerous hazards. “Our motto is ‘light and fast’; we have built our surveys around lightweight, medium-format cameras, meaning we can quickly install the system in the aircraft during an emergency and move towards the targeted area,” adds Óskarsson.

Difficult conditions require flexibility to get results

When dealing with volcanic eruptions, flight plans with overlapping corridor patterns do not always stand up to the real-world challenges of volcanic plumes and associated turbulence. The surveyors often have to wait until it becomes possible to fly a line, and the plans often need to be improvised. When it is possible, the aim is to fly high in order to capture the area with the minimum number of images. But very often the Icelandic weather, with many cloudy and windy days, only allows for low flights at limited altitude. Consequently, taking oblique imagery is also part of the team’s toolkit. Having all images georeferenced with precise coordinates is a powerful advantage for Óskarsson and his team because it significantly reduces the need for people to be on the ground placing numerous physical control points around the lava flow. Ground control points are also frequently overrun by the lava and destroyed, requiring the team to replace them.

The Phase One solution

To meet the needs of the photogrammetry team, Phase One specified a PAS system. It comprises an iXM 100MP camera mounted in a small system frame, which is then mounted in a small SOMAG stabilizer. The camera is controlled by the iX MK5 controller with integrated Applanix GNSS and inertial system. The Phase One software iX Flight Pro is also an essential element, resulting in an aerial system that is highly efficient and enables the team to cover an area in just a few minutes. “We chose Phase One because it is just on another level,” says Óskarsson. “Phase One is leading the market when it comes to aerial systems for medium-format cameras. We considered other suppliers, and some had most of the components, but none had all of them integrated nicely into one aerial system.”

Mission planning with iX Plan.
Eruption in March 2024.

He continues: “We are extremely excited about the results from our surveys so far. We have flown in all kinds of conditions – even around the solstice, when the angle of the sun is just two degrees in Iceland. Companies do not usually offer surveys when the sun is below ten degrees, due to the challenges of such conditions. It was quite cloudy too, but we still managed to get the data we needed.”

Easy planning saves time and adapts to changes

The flight planning software iX Plan has brought considerable benefits to the operations. “It is a game changer to be able to plan your flight to the extent the software helps you plan and then fly it. It is so much easier for everybody: for the pilot, and for us controlling the equipment to run the operation. It saves us a lot of stress and time,” Óskarsson says. The iX Plan software also helps them adapt their flight plans to the volatile reality in the skies around volcanoes. It enables the team to obtain the correct image overlap despite these conditions, considerably reducing their processing time later. “Because of the challenges with following the flight lines, I may need to do some improvisation, but the iX Flight Pro software allows for that. I can select a line that I want to fly by eyesight and then run the interval image capture. So I can do both: I can follow the lines and then improvise with other lines that I just make on the spot,” Óskarsson continues. Subsequently, the iX Process software helps him save time by giving an overview of the captured images that enables him to quickly select the images that he wants to process and piece together.

The Phase One solution enables the team to quickly deliver their results onwards to the many other agencies such as Civil Protection or companies planning rebuilding work who need to assess the dangers. The photogrammetry team sends data to Civil Protection within hours after a survey. The system helps to reduce processing time since it automates the image capture, and the stabilization compensates for the movements of the plane, ensuring images are all properly nadir and with good overlap.

Simplicity empowers more users

Making the surveying process easier has reduced the workload significantly. “In the past the flight lines were planned manually, but now we can plan them with iX Plan and even account for the fast topographic changes by integrating new DEMs from our surveys,” states Óskarsson. According to him, establishing a simplified process will also make it easier for other people to operate the system, such as when a long-lived eruption necessitates a larger team to rotate shifts. “It is very easy to plan low or high flights,” adds Óskarsson. “With Phase One’s system being user-friendly, we can train more people to run the operations.”

Ready for future projects

While the PAS system has become an essential tool for monitoring volcanic eruptions in Iceland, the photogrammetry team also plans to use it for landslide monitoring, glacial monitoring, glacial flood surveillance, monitoring the uninhabited volcanic island of Surtsey, and more. “There are many other places up on the highlands and in glaciers that are very difficult to access, where it is almost impossible for us to place control points. So having this system with high-precision GNSS georeferencing will save us a lot of headaches and work,” Óskarsson says. Subglacial volcanic activity can produce meltwater in vast quantities and cause dangerous floods. “We have had floods the size of the Amazon river, with 200,000m³ per second of flow when volcanoes erupt under glaciers,” says Óskarsson. “These cause a severe threat to the impact areas and therefore need to be monitored effectively,” he states. Therefore, the photogrammetry team is now planning a project to investigate surface changes in glaciers caused by geothermal anomalies, as these could act as potential precursors of such eruptions. It is hoped that the resulting insights will contribute to advance warnings and emergency management measures to help keep people safe.

Further reading

https://www.phaseone.com/inspiration/mapping-lava-flows-during-volcanic-eruptions-in-iceland/

The coordinators of the photogrammetric surveys: Joaquín Belart from the National Land Survey of Iceland (left) and Birgir V. Óskarsson from the Icelandic Institute of Natural History (right).

Data formats can erode intellectual property of surveyors

Cadastral copyright in Australia in the digital age

Australian cadastral surveyors retain the intellectual property in their cadastral survey plans through the function of copyright. However, copyright is a legal protection that is being forced to play catch-up to technological change. A move from paper-based cadastral plans to structured cadastral data is one such change that draws into sharp focus the limitations of copyright as an intellectual property protection device. This article explores whether there is a better way to ensure that surveyors can get just reward for their hard work.

Since the introduction of the Torrens Title system in Australia in the mid-1800s, it has been necessary for the public to engage a cadastral surveyor to create almost all interests in land. The product of this survey, until recently, has been a paper plan (see Figure 1). To create the land interest, these plans are lodged with a registration authority. As the spatial extents of land interests are of general use to many professionals, the registering authority provides, for a fee, a copy of the surveyor’s plan. This arrangement had been a longstanding bone of contention with surveyors who, to do a good-quality survey, were forced to pay to get cadastral data that they may have produced in the first place. The status quo was upended by the 2008 case ‘Copyright Agency Ltd. v State of New South Wales’ [2008] HCA 35 (CAL case), when the Australian High Court determined that surveyors retained intellectual property in plans as they were artistic works protected by Commonwealth legislation. The registering authorities were forced to enter into licensing agreements to continue to sell the surveyor’s intellectual property.

About the authors

Prof Glenn Campbell is an associate professor in surveying and land information at the University of Southern Queensland, Australia. He has been registered as a cadastral surveyor since 1998. After 15 years in private practice, he commenced work as an academic in 2004.

Dr Armin Agha Karimi is a lecturer in surveying and spatial science at the University of Southern Queensland. He holds a BSc in Surveying from the University of Tabriz, Iran, obtained in 2009. He received his MSc in Geodesy in 2012 and PhD in Geodetic Remote Sensing in 2019.

How does copyright work in Australia?

In Australia, intellectual property is under the control of the Federal Parliament. The Copyright Act 1968 gives the author ownership of the copyright subsisting in a “literary, dramatic, musical or artistic work”. While the judgement in the CAL case saw the surveyor’s plan as an artistic work, it could equally have received protection as a literary one. Maps, charts, plans, tables and compilations all fall under this term. In addition to the type of work, there are two other elements that are critical to the consideration of copyright: the material form, and authorship.

Copyright law protects the expression of ideas, not the idea itself, so the protection starts when the idea is fixed in some material form. While the law has been updated in response to technological change, it inevitably lags that change, as it depends on the disputes that are eventually put before the courts. Precedents have now established that digital files such as PDF and TIFF are sufficiently fixed. However, dynamic displays of maps and spatial data are most probably not, although this remains a somewhat open question. The current legislation only protects original works produced by humans, presuming that only humans are capable of originality and creativity. That notion might need to be reconsidered soon, in view of the rapid advancements in generative artificial intelligence, for example. But, for the present, works such as phone directories that are created automatically by software do not attract copyright protection. The key element for authorship appears to be the independent intellectual effort exercised by human agents in the work.

Machine-readable cadastral data

The Intergovernmental Committee on Surveying and Mapping (ICSM) released a national strategy for cadastral reform and innovation in Australia, titled ‘Cadastre 2034’. The strategy sees the possibility of reducing the time to create a land interest by using a seamless data flow from field observation to the registered interest. The need for paper and duplication is negated by surveyors moving to a digital database of survey plans in 3D. This vision, along with electronic conveyancing, has been the driving force behind the need for cadastral data in a flexible and electronic format. The purported productivity gains come from putting the cadastral data in a form that is readily machine readable. Limitations in optical character recognition (OCR) mean that, even in an image format, it is very difficult to extract the essential boundary information from a traditional plan. ICSM created a national approach, ePlan, that uses a data model based on the international standard LandXML. In ePlan, the cadastral information is contained in a cadastral information file (CIF). Rather than being shown pictorially, the boundary information is held in a structured table (see Figure 2). However, while it is now entirely machine readable, the information can only be used by humans with the aid of specialist software. To service the non-surveying users of cadastral data, the registering authority uses an automated algorithm to render a pictorial representation. This is the object that has value to most users, not the CIF.
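To make the distinction concrete, the short Python sketch below mimics what such a rendering algorithm does: it mechanically converts bearing-and-distance records of the kind a CIF holds into parcel coordinates that can be plotted. The record layout and the sample traverse are invented for illustration and do not follow the ICSM ePlan/LandXML schema; the point is simply that the step from structured data to picture is a fixed, automatic computation.

```python
# Simplified, hypothetical illustration of an automated renderer: it converts
# structured boundary records (bearing and distance per boundary line, roughly
# what a CIF holds) into parcel corner coordinates that could be drawn as a plan.
import math

# Closed traverse around a hypothetical rectangular lot: bearings in decimal
# degrees clockwise from north, distances in metres (invented sample data).
boundary_lines = [
    {"bearing": 90.0,  "distance": 40.0},
    {"bearing": 180.0, "distance": 25.0},
    {"bearing": 270.0, "distance": 40.0},
    {"bearing": 0.0,   "distance": 25.0},
]

def traverse(lines, start=(0.0, 0.0)):
    """Turn bearing/distance records into (easting, northing) vertices."""
    e, n = start
    vertices = [(e, n)]
    for line in lines:
        theta = math.radians(line["bearing"])
        e += line["distance"] * math.sin(theta)   # easting component
        n += line["distance"] * math.cos(theta)   # northing component
        vertices.append((round(e, 3), round(n, 3)))
    return vertices

corners = traverse(boundary_lines)
misclose = math.dist(corners[0], corners[-1])  # ~0 m for a properly closed parcel
print(corners)
print(f"misclose: {misclose:.3f} m")
```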

Survey plan copyright under ePlan

The CIF or any other format will almost certainly be considered worthy of copyright. It is a literary work and a product of original productive thought in the selection of marks, boundaries or other cadastral data. No matter how it is stored or transmitted to the registering authority, it will be in a fixed form. But it is not the CIF that has commercial value to the public, as it is unintelligible to them without specialist software. Having copyright of an item of limited commercial value is a Pyrrhic victory at best. The rendered plan is the item of value to less technologically sophisticated users. Although the render is created from the CIF, it is created through computer code by way of a fixed, and almost certainly automatic, algorithm. Precedents in relation to authorship suggest that it is possible, if not probable, that the output will be considered authorless and hence not worthy of copyright protection. In the unlikely event that it does receive copyright protection, the copyright will not reside with the author of the CIF. In other words, the original surveyor – whose independent intellectual effort created the value – will miss out on the benefit of that effort.

Figure 1: An excerpt from a Queensland cadastral plan.
Australia seen from space. (Image courtesy: Shutterstock)

Acknowledgements

The authors would like to thank The Surveyors Trust (thesurveyorstrust.org.au) for funding this work and its dissemination.

Further reading

The Copyright Law of Spatial Data by Kariyawasam, Palliyaarachchi & Campbell

Is copyright protection the best option?

Free-riding is a market failure caused by misalignment of the benefits from selling and the cost of producing. Copyright protection is a public policy intervention to redress this market failure. It allows the author of a work a period in which to exploit their position as the sole person with the right to reproduce the work, so as to recover the economic value they have produced. It is indeed one solution to the market failure, but not necessarily the only one. Copyright, as an intellectual property protection mechanism, has been shown to be vulnerable to the fast pace of technological change. Being able to obtain financial gain from others’ intellectual and creative work without paying compensation would commonly be considered dishonest. Competition that is contrary to honest practices in industrial and commercial matters has been defined as unfair competition by the World Intellectual Property Organization (WIPO). Unfair competition legislation exists in Australia, but the Competition and Consumer Act 2010 focuses on protecting users rather than producers. However, legislation like this – which concentrates on the actions of market players rather than on the technical description of the products – promises to be more responsive and technology-independent in protecting intellectual property.

Conclusion

Since the CAL case in 2008, cadastral surveyors in Australia have been compensated whenever their intellectual property has been reproduced and sold to the public by registering authorities. There is a risk that the state and the surveying sector, in attempting to streamline the creation of land interests for the betterment of the community, may have inadvertently destroyed the value of that intellectual property for surveyors. Legislation protecting against unfair competition, rather than the current copyright regimes, may be the easiest long-term, technical fix to ensure that surveyors get just reward for their intellectual and creative work.

Figure 2: An excerpt from a cadastral information file (CIF).

The imperative for a sustainable profession

The FIG agenda 2023-2025

The FIG agenda for 2023-2025 is driven by the imperative for a sustainable profession that delivers services sustainably, addresses the global sustainable development agenda, and acts decisively on the climate agenda. Each Working Week under the current leadership of FIG builds on this overall agenda, culminating with the FIG Congress in 2026 that will take place in Cape Town, South Africa.

FIG commissions, networks and task forces contribute to this overall agenda, especially the current four task forces for the term 2023-2026:

• FIG and the Sustainable Development Goals
• FIG Climate Compass - The Surveying Profession’s Global Response to Tackling the Climate Agenda
• The Role of FIG in International Trends and Future Geospatial Information Ecosystem
• The Surveyor’s Profession: Evolutionary Diversity and Inclusion

The FIG Working Week 2025

The FIG Working Week 2025, which is to be held from 6-10 April 2025 in Brisbane, Australia, operates with the overall theme of ‘Championing a Digital Generation: Collaboration, Innovation and Resilience’. In recent years, several transformative forces have converged, compelling our profession to redefine how we create, deliver and communicate value within the context of sustainability. Awareness of trends and megatrends fosters preparedness and is the first step towards resilience. However, the next and immediate step in this decade demands action. The digital age will propel us forward, underscoring the urgency to act now.

It is certainly important to champion what is unquestionably a digital generation. But for surveyors to remain relevant and maintain their international impact while providing services, surveyors and all related professions will need to be collaborative and innovative, and their actions must be sustainable in the face of the climate imperative.

Focus on the foundational issues

In the context of international trends, conference discussions will focus on the foundational issues across political, economic, social, technological and environmental trends. Members of the survey and geospatial profession have a significant role to play in addressing these trends, and in particular the trends in:

• New technology: This will have a significant impact on the industry by providing tools, but also creates the need for global politics and regulation
• The people factor: We must maintain our professional standards, competencies and appropriate skills, as well as recruitment and retention. Above all, to leave no one behind, we must ensure inclusivity, particularly valuing the wealth of knowledge of indigenous peoples
• Technological advancement: Work to good effect with data and the geospatial ecosystem: what, who, how and why
• Sustainable planet: The climate imperative forces society to change practices and build resilience

FIG Working Week 2025 will be held in combination with the national yearly event Locate 2025, bringing together surveyors, geospatial experts and other related professions from all around Australia and the rest of the world.

Sustainability is everybody’s business

FIG is committed to helping its members and their national members create value for sustainability. The FIG Working Week 2025 aims to reframe how to approach sustainability and resilience, placing it at the centre of how value is created for people, society and the world.

Sustainability in the context of people is about ensuring the demonstration of equality, diversity and inclusion. As a profession, we need to ensure that our skills, training and development keep us relevant, particularly in relation to transformation and technology, across all our disciplines. This will aid in recruiting and retaining young professionals.

A key part of resilience is championing collaboration with indigenous communities, valuing their knowledge and leadership in resilience. Addressing societal and planetary megatrends requires partnership and collaboration. This cannot be done alone, and within FIG there is awareness that the contribution will be more meaningful by seeking to work in partnership.

The digital generational drivers

High on the agenda is maintaining and enhancing the profession’s relevance for wider societal benefit, recognizing the pivotal role that geospatial data plays as an enabler in land administration and in the built and natural environments. This is in the context of the UN Sustainability Agenda, as well as international trends and the emerging geospatial information ecosystem. FIG’s deliberations will unpack the profession’s current and potential reach, and the impact it can make in wider geospatial areas such as big data and artificial intelligence (AI). This is a broad agenda, and the FIG commissions will each embed it into their dedicated discussion platforms.

Abstract submission

Do you want to be part of the agenda for this important Working Week? You are encouraged to submit an abstract to be included in the technical programme and become an active part of a session. The abstract submission is open for both peer-review and regular abstracts/papers. Even if you don’t submit an abstract, your presence is important and relevant: you can actively participate in the discussions, or simply learn from the sessions, gaining knowledge that you can bring back home.

More information www.fig.net/fig2025

A glimpse of Brisbane, the venue for FIG Working Week 2025.

GSW 2025: leading the future of geospatial technology

Photogrammetry and remote sensing for a better tomorrow

The biennial ISPRS Geospatial Week (GSW) event unites global experts to delve into the latest advancements in geospatial science and technology. GSW 2025, scheduled to take place from 6-11 April 2025 in Dubai, is poised to revolutionize the geospatial field with its state-of-the-art workshops and activities.

Organized by the Mohammed Bin Rashid Space Centre (MBRSC), GSW 2025 aims to foster collaboration and innovation in geospatial technologies. With workshops from over 20 ISPRS working groups, the event promises to be a melting pot of ideas, research and technological advancements.

Diverse workshops and themes

Workshops at GSW 2025 will cover an extensive range of topics, including:

• Smart forests and agriculture
• Intelligent uncrewed vehicles and mapping systems
• AI for spatial data quality and uncertainty modelling in spatial analysis
• Remote sensing monitoring for urban environment
• Vision metrology and uncertainty assessment
• Planetary remote sensing and mapping
• Digital construction
• Laser scanning
• 3D sensing for smart cities
• Digital twins and open-source empowered HD maps for smart mobility and autonomy
• Earth observation and geospatial artificial intelligence for disaster risk management
• Semantic scene analysis and 3D reconstruction from images and image sequences
• Applications of open standards, IoT, crowdsourcing & intelligent systems for smart cities governance & digital twin implementation
• Resilient high-precision positioning, navigation and guidance of autonomous vehicles
• Climate change and geospatial research: advanced geospatial research for a sustainable development through international cooperation
• Data management and data quality for remote sensing scenarios.

These workshops address current challenges and explore future possibilities in geospatial science.

The conference theme, ‘Photogrammetry and Remote Sensing for a Better Tomorrow’, resonates deeply with the collective mission of ISPRS. These fields are not merely scientific disciplines; they are transformative tools that enable us to view our world through new lenses, make data-driven decisions, and pave the way for a sustainable and inclusive future. By harnessing the precision of photogrammetry, the insights from remote sensing, and the encompassing knowledge of spatial sciences, we unlock the potential to understand our planet’s history, manage its present and shape its future.

Expert-led sessions

Prominent researchers, professors and industry leaders will lead sessions on a wide array of topics, including point cloud processing, 3D data acquisition and deep learning applications. These sessions are designed to bridge the gap between theoretical research and practical applications, offering attendees valuable insights and cutting-edge knowledge. By focusing on the latest advancements in sensor and system calibration, data fusion techniques and the integration of geospatial technologies, these sessions provide a comprehensive understanding of the field’s current state and future directions.

More information www.gsw2025.ae

ICC: bringing together map lovers from around the world

Join the international cartographic community for maps, cartography and geospatial information science

The biennial International Cartographic Conferences (ICC) bring together the global community of map lovers – to share experiences and the latest research in cartography and GIScience, but also ‘just’ to enjoy the beauty and power of maps. It is an opportunity for networking and connecting with like-minded map enthusiasts, to inspire and be inspired by maps and the potential of geospatial data. An ICC attracts up to a thousand delegates from around the world.

Look into the future

The next ICC will be hosted by the Canadian Institute of Geomatics (CIG) from 16-22 August 2025 in Vancouver, British Columbia, Canada. It will be the third time that an ICC is hosted in Canada, following ICC 1972 and ICC 1999, both in Ottawa. Vancouver is the largest city in Western Canada, where the fusion of urban sophistication, cultural richness and natural beauty provides the perfect backdrop for a conference. While this will already be the 32nd ICC since the first one was hosted in 1962, the ICC 2025 conference theme, ‘Mapping the future: innovation, inclusion and sustainability’, challenges us to look into the future.

Submission deadlines

Submissions of papers and abstracts are due on 2 and 9 December 2024, respectively. In addition to the conference topics related to ICA Commissions and Working Groups, ICC 2025 also invites submissions on themes of specific interest in a Canadian context: decolonial maps, indigenous mapmaking and mapping the Arctic. The organizers are looking forward to submissions on these topics and to lively discussions about them.

Maps and cartographic products from around the world will be showcased in the International Cartographic Exhibition at ICC 2025, along with the finalists in the Barbara Petchenik children’s map competition. Technical tours allow delegates to experience first-hand how cartography and GIScience are practised in local organizations in Vancouver.

Commission activities

The ICA has a number of commissions, each focusing on a specific topic. Commissions arrange pre-conference workshops and meet during the conference to discuss their plans for the future. Some commissions arrange fun activities related to maps, some have more serious research presentations and discussions, and others engage with local communities. Commission events are generally a great way to meet people and make friends in the cartographic community.

Register an accompanying person

As a special feature of an ICC, if you register an accompanying person for the conference, they receive complimentary entry to a technical session of their choice. This is a great opportunity to bring a friend, partner or family member to support you in your presentation, so that they can finally see what you actually do at a conference!

More information

https://icc2025.com/

Vancouver Convention Centre is the venue for ICC 2025.

SONOBOT 5

uncrewed surface vehicle

• Autonomous and remotely-controlled operation

• Exchangeable battery packs for 9+ hours of operation

• System software for planning, execution and data evaluation

• Accurate georeferenced bathymetry and cartography missions, search and survey in hard-to-reach areas

• High-precision measurements and recordings, different GNSS and sonar options available

• Data logging and real-time acquisition over redundant mesh network

• Rapid system deployment, excellent maneuverability and area coverage with powerful and efficient drives

• AI-based object recognition for side-scan sonar: objects of interest are detected and highlighted live during the mission

Meet us at INTERGEO 2024!

24 - 26 September Stuttgart, Germany

STAND E1.073

sales@evologics.com

sales-us@evologics.com
