The role of digital twins in mitigating urban heat islands
How cities are leveraging innovative technology for sustainable urban development
A decade of developments in Lidar technology
Mapping the plastic using UAV imagery
Enhancing building extraction with Lidar and aerial image data
Professional SLAM System
Exceptional Photo Quality
Remarkable Accuracy
Unmatched Flexibility
Director Strategy & Business Development
Durk Haarsma
Financial Director Meine van der Bijl
Technical Editor Huibert-Jan Lekkerkerk
Contributing Editors Dr Rohan Bennett, Frédérique Coumans, Lars Langhorst
Head of Content Wim van Wegen
Copy Editor Lynn Radford, Englishproof.nl
Marketing Advisors Myrthe van der Schuit, Peter Tapken
Circulation Manager Adrian Holland
Design Persmanager, The Hague
GIM International, one of the world's leading magazines in the geospatial industry, is published five times per year by Geomares. The magazine and its related website and newsletter provide topical overviews and reports on the latest news, trends and developments in geomatics around the world. GIM International is oriented towards a professional and managerial readership – those leading decision-making – and has a worldwide circulation.
Subscriptions
GIM International is available five times per year on a subscription basis. Geospatial professionals can subscribe at any time via https://www.gim-international.com/subscribe/print. Subscriptions will be automatically renewed upon expiry, unless Geomares receives written notification of cancellation at least 60 days before the expiry date.
Advertisements
Information about advertising and deadlines is available in the media planner. For more information, please contact our marketing advisor: myrthe.van.der.schuit@geomares.nl.
Editorial Contributions
All material submitted to Geomares and relating to GIM International will be treated as unconditionally assigned for publication under copyright subject to the editor’s unrestricted right to edit and offer editorial comment. Geomares assumes no responsibility for unsolicited material or for the accuracy of information thus received. Geomares assumes, in addition, no obligation to return material if not explicitly requested. Contributions must be sent for the attention of the head of content: wim.van.wegen@geomares.nl.
By leveraging Lidar and aerial-image digital surface models with deep learning, a Finnish study has explored different data combinations that enhance building detection accuracy in rural and urban areas.
A project in Belgrade analysed the potential for increasing the Ecological Index through various greening scenarios. Aerial photography and Lidar scanning were combined to create a high-quality 3D digital twin of the city’s infrastructure and trees.
Today’s mobile scanners are reliable tools, delivering accurate results across various applications. Yet, some industries still prefer traditional methods. As handheld scanners diversify, knowing their distinct advantages is crucial for optimal project outcomes and maximizing ROI.
Urbanization is fueling informal settlements globally, including in Fiji. To tackle related challenges, Fiji’s government has begun formalizing land leases, though spatial data gaps remain. This article suggests investing in capacity building to leverage open mapping infrastructure.
Digital twins provide effective solutions for managing multiple urban heat island (UHI) challenges. This article shows how cities use digital twins and machine learning to understand heat, increase transparency, and engage communities in sustainable planning.
Lake Michigan’s water levels can fluctuate up to two meters over decades, affecting coastal sediments and infrastructure in ways not fully understood. This article highlights findings from offshore sand assessment and beach monitoring along the greater Chicago coast.
This article looks at the emergence of new Lidar technologies, applications for new technologies, improved computing power and next-generation AI, the broad range of solutions at both the high end and the low end of the market, and Lidar’s future potential.
To combat plastic pollution in oceans and coastal areas, the FIG formed the Mapping the Plastic Working Group. Using deep learning and UAV imagery, they developed a plastic mapping solution to address pollution before it reaches the ocean.
The front cover of this edition shows strikingly colourful Lidar imagery of Nîmes, France. While every issue of GIM International inevitably contains some content related to Lidar, underlining the unmissable role it plays in modern-day land surveying and reality capture, this edition puts the laser scanner-based technology firmly in the spotlight. One of the articles even takes you on a journey to explore the evolution of Lidar over the past 10 to 15 years. (Image courtesy: Institut national de l’information géographique et forestière/IGN)
Lidar logic
Before I entered the geomatics sector more than 14 years ago, I’d never heard of Lidar, even though I was familiar with the widely known technology of radar. There is still some way to go before the term ‘Lidar’ becomes just as well established and fully accepted as ‘radar’, but I think it will definitely happen one day.
For countless land surveyors and other geospatial professionals, Lidar has become a valuable tool for geodata collection. It’s less affected by bad weather and poor visibility than cameras, which makes it a great friend in unpredictable conditions. Lidar can also see through vegetation and consistently captures objects without the lighting challenges that affect photogrammetry, making it ideal for reliable object recognition. This results in a rich, organized point cloud where buildings, roads and trees are easily classified.
Despite wide recognition of Lidar as a geospatial essential, there are still some areas of environmental measuring and mapping where the use of Lidar lags behind, namely in what is sometimes fondly referred to as the ‘wet’ branch of surveying: hydrography. However, it is rapidly catching up. There has recently been a noticeable rise in interest in the application of airborne Lidar bathymetry (ALB) in this sector, such as for mapping coastal waters. For such projects, it can be necessary to obtain accurate depth measurements down to 50m. While areas with depths of around 10-50m are often still surveyed using ‘traditional’ sonar technology, the use of ALB is becoming increasingly common when surveying areas less than 10m deep. The result is a detailed topobathymetric elevation model, for example, which can then serve as a starting point for effective coastal zone management.
On behalf of GIM International’s sister publication Hydro International, I attended Oceanology International in London earlier this year. The session on ALB drew a full room, and I was somewhat surprised at how many people seemed unaware of how Lidar can benefit the hydrography sector and adjacent fields. Having started as a complete novice in the world of mapping and surveying, it felt a bit strange for me to be the one with the knowledge advantage!
While we’re on the topic of overlaps between ‘dry’ and ‘wet’ surveying, the boundaries are becoming increasingly blurred. At this year’s edition of Intergeo, I was struck by how many exhibitors had a hydrographic background. Similarly, this month, the Hydro 2024 event is taking place in Rostock, Germany, and it includes several companies with a Lidar background – and these are originally ‘landlubbers’ who are rapidly ‘getting their sea legs’.
It’s great to see how everything is merging together. Every day, engineers, ecologists, urban planners and surveyors work with 3D data. What was once reserved for specialists is now within reach of a much wider group of professionals – including hydrographers. Airborne Lidar bathymetry is advancing rapidly, and is offering ever more depth (literally!) as the sensors become more advanced. In my view, it’s time for hydrographers to discover the logic of Lidar, and explore what it can bring to the surface.
Wim van Wegen head of content, GIM International wim.van.wegen@geomares.nl
CHCNAV’s i83 Pro GNSS receiver advances productivity and accuracy
CHC Navigation (CHCNAV), a leading provider of geospatial technologies, has introduced the i83 Pro, a versatile IMU-RTK GNSS receiver. Combining advanced GNSS capabilities with enhanced compatibility options to deliver superior performance, the i83 Pro is designed to meet the diverse needs of surveying, construction and mapping professionals. The i83 Pro represents CHCNAV’s commitment to providing surveyors with cutting-edge tools to improve productivity and accuracy, according to CHC Navigation Product Manager Zac Li. “It is equipped with a range of advanced technologies and a wide range of connectivity options. The i83 Pro offers optional Trimble RTX and OmniSTAR support and the optional Trimble MAXPro Positioning Engine to provide extended performance,” he said. The i83 Pro incorporates CHCNAV’s third-generation GNSS antenna and the latest iStar algorithm, increasing GNSS signal tracking efficiency by 30%. With 336 channels and support for the GPS, GLONASS, BeiDou, Galileo and QZSS constellations, the receiver delivers centimetre-level precision in seconds, even in challenging environments. The optional Trimble RTX and OmniSTAR support provides RTK-level accuracy without relying on a base station or VRS network.
SOMAG advances airborne data precision with GSM 5000
SOMAG AG Jena unveiled its latest innovation in gyro stabilization technology at Intergeo 2024 with the launch of the GSM 5000. As the successor to the widely used GSM 4000, the new model brings significant advancements in precision and stability for airborne data acquisition, addressing the growing demands in fields such as terrain mapping, disaster management, agriculture, forestry and urban planning. The GSM 5000 builds on the foundation of its predecessor, improving performance in challenging flight conditions by offering a broader range of motion and enhanced capabilities to counteract aircraft movements and vibrations. One of its key improvements is an expanded stabilization range, providing increased pitch, roll and yaw control, which results in more stable and accurate data collection. A notable addition is the 355° absolute drift movement, which allows nearly full rotation along the drift axis. This improvement supports a variety of operational modes, including Step and Stare, Scan, and Pointing Mode, offering flexibility for different data collection tasks. The mount also features a larger internal diameter, allowing it to accommodate more advanced and larger sensor systems, which is crucial for complex missions requiring detailed airborne data.
The GSM 5000 delivers improved precision and stability for airborne data acquisition, catering to the evolving needs in terrain mapping. (Image courtesy: SOMAG AG Jena)
The i83 Pro is tailored to meet the versatile needs of surveying and construction experts.
EU funding is vital for pan-European geospatial data
Funding from the European Union (EU) plays a crucial role in ensuring that official pan-European geospatial data can support the development of Digital Europe’s Data Spaces. In its response to the programme’s stakeholder consultation, EuroGeographics – the not-for-profit organization representing Europe’s national mapping, cadastral and land registration authorities (NMCAs) – emphasized that reliable, authoritative data is just as vital as the infrastructure underpinning these data spaces. The association further warned that without sustainable, long-term funding mechanisms tied directly to EU policy objectives, future innovations like the award-winning Open Maps for Europe service may no longer be viable. Carol Agius, head of representation and stakeholder engagement at EuroGeographics, said: “Digital Europe funding has enabled our members to deliver harmonized pan-European open data from more than 40 countries through the Open Maps For Europe project. By driving innovation, market development and growth to support the digital economy, this supports the Open Data and re-use of Public Sector Information (PSI) Directive.” She continued by explaining that with additional funding from the Digital Europe programme, EuroGeographics is expanding this work through the Open Maps for Europe 2 (OME2) initiative. Together with members from Belgium, France, Greece, Spain and the Netherlands, the OME2 consortium is developing a new production process and technical specification to release a prototype for edge-matched, large-scale pan-European datasets covering 10 countries. This project is designed to tackle challenges related to accessing and licensing authoritative high-value geospatial data across Europe. Agius also highlighted how OME2 supports the European Strategy for Data, particularly through the implementation of the Open Data and re-use of PSI Directive, the release of high-value datasets, and the evolution of the INSPIRE Directive towards the GreenData4All initiative. The project, she emphasized, showcases how demand for geospatial information across various common data spaces can be effectively met.
The development of Digital Europe’s Data Spaces relies heavily on EU funding to maintain access to authoritative pan-European geospatial data. (Image courtesy: MeshCube/Shutterstock)
NavVis MLX unlocks new capabilities in handheld scanning
NavVis has introduced the NavVis MLX, a handheld dynamic laser scanning tool designed for professionals. This new device aims to set a new benchmark for data quality, portability and user comfort, making it an ideal choice for projects requiring high-precision results in challenging environments. NavVis has become a prominent player in reality capture and digital twin technology over the past decade. Founded in 2013 as a startup with support from the Technical University of Munich, the German enterprise made a major impact in 2020 with the launch of its flagship wearable laser scanner, the NavVis VLX. By showcasing how a SLAM-based device could serve as an essential tool for reality capture and surveying professionals, NavVis quickly positioned itself as a pioneer in the wearable laser scanning market. In the past four years, NavVis customers and resellers around the world have digitized hundreds of millions of square metres. Now, with the addition of the NavVis MLX to its LX series, the company is once again poised to redefine another category in reality capture. Felix Reinshagen, co-founder and CEO of NavVis, emphasized the company’s commitment to delivering portable laser scanning solutions that meet the needs of industries such as AEC, surveying and reality capture. “Our customers have emphasized the need for a professional, affordable tool that can scan in narrow, hard-to-reach places that complements NavVis VLX,” said Reinshagen.
The NavVis MLX was developed to offer a lightweight, portable option for confined spaces, maintaining NavVis’s renowned usability and data quality. Overcoming the engineering challenges to create this device, he added, was no small feat, and he is incredibly proud of the team’s achievement. With the MLX, the company seeks to significantly reshape the perception of handheld laser scanning tools, which have often been seen as limited due to issues with low-quality data, poor ergonomics, and unreliability. The MLX addresses these concerns by delivering high-quality point cloud data through a 32-layer Lidar sensor, along with 270- and 360-degree panoramic images for an intuitive understanding of the environment.
The MLX offers a versatile and accessible solution for both indoor and outdoor environments. (Image courtesy: NavVis)
Trimble Connect broadens geospatial data capabilities
Trimble’s Reality Capture platform provides a streamlined web solution for managing point clouds and 360° imagery. (Image courtesy: Trimble)
Trimble has launched the Trimble Reality Capture platform, a service aimed at streamlining collaboration and securely sharing large-scale reality capture datasets. Collected through 3D laser scanning, mobile mapping and uncrewed aerial vehicles (UAVs or ‘drones’), these datasets are now more accessible across various industries. Integrated into Trimble Connect, a cloud-based platform supporting over 30 million users, this service strengthens Trimble’s connected workflow by bridging the physical and digital worlds and unlocking the full potential of reality capture data. Aimed at fostering collaboration among owners, contractors, surveyors and other stakeholders, the Trimble Reality Capture platform offers an intuitive, web-based solution for managing point clouds and 360° imagery. It enables users in fields such as construction, surveying, transportation, utilities, energy and mining to work on complex reality capture projects more efficiently while maintaining the accuracy of the original data. For many organizations, this platform represents a significant advancement, allowing seamless collaboration between teams in the field and the office. It provides a centralized location for designers, engineers and other stakeholders to review and assess project data, creating efficiencies and improving decision-making. This new service democratizes access to reality capture data, including vast datasets collected by Trimble’s terrestrial laser scanners like the MX series and X9, as well as third-party hardware.
Enhancing mobile mapping with high-resolution machine vision expertise
AI-InfraSolutions’ collaboration with Stemmer Imaging has resulted in the development of a high-performance 360° panoramic camera head, offering advanced imaging capabilities. (Image courtesy: AI-InfraSolutions)
AI-InfraSolutions has partnered with Stemmer Imaging to develop an innovative 360° panoramic camera head. This collaboration, which integrates AI-InfraSolutions’ mobile mapping capabilities with Stemmer Imaging’s high-resolution machine vision expertise, marks a significant advancement in high-quality geospatial data acquisition and analysis. Against the backdrop of the growing demand for precise geospatial data, this partnership has led to the creation of a state-of-the-art 360° panoramic camera head that is aimed at redefining the future of geospatial mapping. It is designed for automated data collection in applications such as identifying traffic signs, assessing road conditions and monitoring the environment. This technology enables municipalities and companies to efficiently manage assets and adopt predictive maintenance strategies. The 360° panoramic camera head, engineered through Stemmer Imaging’s More Engineering Services and supported by its Technical Competence Center (TCC), delivers 180MP resolution with 360° horizontal and 120° vertical coverage. It captures synchronized images every 5m at driving speeds of up to 100km/h, making it an ideal solution for large-scale geospatial mapping projects. The development of the 360° camera head posed several technical challenges, including selecting optimal sensors, synchronizing image capture with GNSS systems, and minimizing parallax for smooth panoramic stitching. Stemmer Imaging’s engineering expertise and precise intrinsic calibration were instrumental in addressing these challenges. The result is a robust solution capable of continuous high-speed recording for up to 247km (expandable to 496km).
Harxon puts precision and innovation at the forefront with new GNSS antenna series
Harxon has unveiled a new series of GNSS antennas designed to meet the ever-evolving demands of industries reliant on precise geospatial data. This latest product release includes smart antennas, OEM antennas and a first-ever anti-jamming series, all designed to operate in complex environments with high precision. With its new range of GNSS antennas, Harxon – a global leader in positioning technology – aims to set a new standard for reliability and efficiency. As the world moves toward greater automation and connectivity, from uncrewed aerial vehicles (UAVs or ‘drones’) to robotics, Harxon is committed to developing innovative solutions for enhanced accuracy and performance. The new smart antenna series combines advanced RTK positioning modules with Harxon’s antenna technology, offering a compact and easily integrated solution. This series provides a range of accuracy options, from single-point metre-level to RTK centimetre/millimetre-level precision. The HX-MR401A and HX-MR402A (housed versions) as well as HX-ME403A and HX-ME404A (embedded versions) are all specifically designed for UAV applications, utilizing Harxon’s low-profile helix antenna for optimized performance.
Harxon has expanded its product range with a new series of combined antennas.
(Image courtesy: Harxon)
Bentley acquires Cesium to elevate digital twin technology
Bentley Systems has announced the acquisition of 3D geospatial leader Cesium, known for its groundbreaking open platform that drives the creation of advanced 3D geospatial applications. Cesium’s 3D Tiles standard has become a global benchmark, adopted by top enterprises, governments and tens of thousands of developers worldwide. Its software-as-a-service (SaaS) platform, Cesium ion, powers immersive 3D geospatial experiences across over a million active devices each month, while its open-source tools have surpassed ten million downloads. The iTwin Platform, Bentley’s backbone for digital twin solutions used by engineering and construction firms as well as asset operators, is now further enhanced by Cesium’s capabilities. This integration enables developers to effortlessly synchronize 3D geospatial data with engineering, subsurface, IoT, reality and enterprise data. The result? Digital twins that offer unparalleled user experiences – scalable from vast infrastructure networks to the precise details of individual assets, spanning views from land, air, sea, outer space and even deep underground. “A 3D geospatial view is the most intuitive way for owner-operators and engineering services providers to search for, query and visualize information about infrastructure networks and assets. With the combined capabilities of Cesium and iTwin, infrastructure professionals can make better-informed decisions in full 3D geospatial context – all within a single, highly performant environment,” stated Bentley CEO Nicholas Cumins.
Bentley Systems has acquired Cesium, uniting two leading open platforms and developer communities for the built and natural environment. (Image courtesy: Bentley Systems)
Frankfurt to be annual host for Intergeo from 2028
From 2028 onwards, Intergeo, sponsored by DVW, will be organized by Mesago Messe Frankfurt, taking over from the event’s long-time partner, Hinte Expo & Conference. DVW (the German Association for Geodesy, Geoinformation and Land Management) and Hinte will continue their successful collaboration up to and including Intergeo 2027. Looking ahead, DVW and Mesago Messe Frankfurt aim to further evolve the event, focusing on futureproofing the trade fair. The consistent growth and increasingly international profile of Intergeo have prompted this strategic shift. Sustainability will also play a central role, with initiatives like reusing stand materials and streamlining organizational processes. Intergeo will be held annually in Frankfurt from 2028 onwards, with the permanent October slot expected to be well received by many companies as it provides greater consistency for planning and participation. Frankfurt am Main, often referred to simply as ‘Frankfurt’, is a major financial and business hub in Germany. Known for its iconic skyline and metropolitan atmosphere, the city is also a global transportation hub, featuring one of the world’s busiest airports as well as excellent rail and road connections. Prof Rudolf Staiger, president of DVW, expressed his enthusiasm at the collaboration with Mesago Messe Frankfurt. He believes this partnership will bring new opportunities for Intergeo, saying: “We are very much looking forward to working together with the Mesago Messe Frankfurt team and have no doubt that we will be able to take Intergeo to the next level together in a way that will benefit exhibitors and visitors alike.” He also highlighted the advantages of Frankfurt as a permanent venue, noting that its central location and consistent scheduling will serve all participants well.
A view of Frankfurt’s Old Town and the River Main in early autumn, with the historic Kaiserdom cathedral in the foreground and the city skyline, including the European Central Bank, in the background. (Image courtesy: Elen Marlen/Shutterstock)
Examining data combinations for improved accuracy in rural and urban environments
Enhancing building extraction with Lidar and aerial image data
By Emilia Hattula, Lingli Zhu and Jere Raninen
With the expansion of artificial intelligence (AI) applications, automating building extraction from remote sensing data has gained significant traction. By leveraging Lidar and aerial-image digital surface models with deep learning, a Finnish study has explored different data combinations that enhance building detection accuracy in rural and urban areas.
Building extraction from remote sensing data has evolved with advances in deep learning and convolutional neural networks (CNNs). CNNs, and particularly the UNet model, have shown strong performance in extracting structures from complex landscapes. However, accuracy is influenced not only by the model architecture, but also by the type of data used. Digital surface models (DSMs), which provide essential height information, are increasingly popular in remote sensing for their ability to highlight features crucial for building detection. In this context, combining Lidar DSMs and digital elevation models (DEMs) with true orthophotos offers promising improvements in detection, particularly when working with high-resolution data such as 25cm-pixel Lidar-derived models.
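To make the data combination concrete, the minimal sketch below stacks true-orthophoto RGB bands with a normalized DSM (DSM minus DEM) into a single four-channel input for a UNet-style network. It is an illustration only: the array shapes, the 50m height cap and the use of NumPy and PyTorch are assumptions, not the study’s actual processing chain.

```python
# Minimal sketch (not the study's actual pipeline): fuse a true orthophoto
# with a normalized DSM (DSM - DEM) into one multi-channel input tile for a
# UNet-style building-extraction network. Shapes and value ranges are
# illustrative assumptions.
import numpy as np
import torch

def build_input(ortho_rgb: np.ndarray, dsm: np.ndarray, dem: np.ndarray) -> torch.Tensor:
    """ortho_rgb: (H, W, 3) uint8; dsm, dem: (H, W) float32 heights in metres."""
    ndsm = dsm - dem                         # normalized DSM: height above ground
    ndsm = np.clip(ndsm, 0.0, 50.0) / 50.0   # scale to 0-1 (50 m assumed max height)
    rgb = ortho_rgb.astype(np.float32) / 255.0
    stacked = np.dstack([rgb, ndsm[..., None]])         # (H, W, 4)
    return torch.from_numpy(stacked).permute(2, 0, 1)   # (4, H, W), channels-first

# Example with random data standing in for a 256 x 256 pixel tile:
tile = build_input(
    np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8),
    np.random.rand(256, 256).astype(np.float32) * 40.0,
    np.zeros((256, 256), dtype=np.float32),
)
print(tile.shape)  # torch.Size([4, 256, 256]) - ready for a 4-channel UNet
```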
Selecting two test areas in Finland
A research project in Finland utilized multiple datasets from the country’s National Land Survey. Two areas were selected: the urban and forested regions of Savonlinna, and the
suburban and rural landscapes of Pudasjärvi. High-resolution (25cm) data (either Lidar DSMs or aerial-image DSMs) and DEMs were integrated with true orthophotos to assess their performance. The pixel resolution at the Savonlinna test site was 30cm, and for Pudasjärvi it was 25cm. Each dataset allowed detailed comparisons, measuring the impact of DSM and DEM variations on detection accuracy across diverse terrains. Examples of test datasets can be found in Figures 1 and 2.
Analysis of the results
The UNet model trained on Lidar DSMs consistently demonstrated better accuracy in building shape detection than when aerial DSMs were used. Tests showed that, particularly in forested areas, the model’s performance improved with Lidar, as vegetation and shadows interfered less with the detection process. Aerial DSMs, though effective in urban settings, sometimes blurred building boundaries due to shadows and overlapping features. In contrast, Lidar
DSMs provided clearer delineation, capturing nuanced edges of structures. However, in cases where buildings were absent in the Lidar dataset, such as due to roof material reflections or moisture, aerial DSMs were essential to fill the gaps.
Following this comparative analysis of the DSM types, the researchers took a more detailed look at the results from the urban and rural tests, as follows:
Savonlinna (urban and forested areas)
For urban areas, Lidar DSMs reduced false detections around water bodies and produced clearer building boundaries. In forested areas, where shadows from trees can obscure features, the Lidar DSMs consistently yielded more accurate results. When 25cm-resolution data was not available, results showed a slight drop in accuracy, highlighting the benefit of high-resolution Lidar DSMs. An example of the building detection result from the urban area can be seen in Figure 3. The figure also shows false detections when using aerial-image DSMs in the forested area.
Figure 1: Data about the suburban area (from left to right) shown as a true orthophoto, aerial-image DSM and Lidar DSM.
Pudasjärvi (suburban and rural areas)
Rural areas presented unique challenges, such as inconsistencies in water height data that led to false positives. To address this, false water heights were removed from the Lidar DSM, significantly reducing detection errors. Models trained with the high-resolution 25cm DEM outperformed those using resampled 2m DEM data, affirming that higher-resolution Lidar data aids in more precise building extraction. Figure 4 shows an example of the building detection result in a rural area. Due to the effect of shadows, one building was missing from the detection result of the aerial-image DSM.
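The water-height clean-up mentioned above can be pictured as a simple raster operation: wherever a water mask is set, the Lidar DSM is forced back to the ground elevation from the DEM, so spurious heights over water can no longer trigger false detections. The sketch below is an assumed, simplified version of that idea, not the project’s actual code.

```python
# Illustrative sketch (assumed arrays, not the project's actual code):
# suppress false heights over water by forcing the Lidar DSM back to the
# DEM ground elevation wherever a water mask is set.
import numpy as np

def remove_false_water_heights(dsm: np.ndarray, dem: np.ndarray,
                               water_mask: np.ndarray) -> np.ndarray:
    """dsm/dem: (H, W) float32 heights; water_mask: (H, W) bool, True over water."""
    cleaned = dsm.copy()
    cleaned[water_mask] = dem[water_mask]  # no above-ground height over water
    return cleaned

# Example: a 3 x 3 tile where noisy returns over water sit 2 m above ground.
dem = np.full((3, 3), 100.0, dtype=np.float32)
dsm = dem + np.array([[0, 0, 2], [0, 5, 2], [0, 0, 2]], dtype=np.float32)
water = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], dtype=bool)
print(remove_false_water_heights(dsm, dem, water))  # right-hand column back to 100.0
```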
Challenges and data integration strategies
The National Land Survey of Finland acquires aerial imagery in a three-year cycle covering the whole country, whereas Lidar data has a six-year cycle. While Lidar DSMs improve accuracy, issues with missing buildings and false heights in water bodies pose challenges. These inaccuracies, particularly when data is outdated or not synchronized with true orthophotos, emphasize the need for year-matched datasets. Further, although Lidar DSMs are valuable in forest environments, urban areas with many small buildings or complex rooftop materials (e.g. causing reflection or moisture) benefit from a combined approach that includes both Lidar and aerial image data.
Figure 2: Data about the rural area (from left to right), shown as a true orthophoto, aerial-image DSM and Lidar DSM.
Figure 3: Results from the urban area. Left: Aerial-image DSM and the result of building detection in blue, with yellow indicating false detections. Right: Lidar DSM and the result of building detection in red.
Conclusion
Lidar DSMs, paired with high-resolution DEMs, significantly enhance building detection accuracy, especially in forested areas. This study suggests that combining Lidar and aerial image data produces optimal results, catering to the strengths of each data type.
Future research should investigate 3D data integration for improved modelling in areas with dense vegetation or complex building structures. The findings support the growing role of Lidar in improving AI-driven extraction processes, especially as applications expand into more diverse landscapes.
About the authors
Emilia Hattula received her MSc degree from Aalto University, Finland, in 2022. Her research interests are in the application of machine learning techniques in the field of remote sensing. Currently, she is working as an IT specialist for the National Land Survey of Finland.
Dr Lingli Zhu is leading an AI team at the National Land Survey of Finland. She has published more than 60 publications with over 1,500 citations. She specializes in the fields of photogrammetry, laser scanning, remote sensing and GIS, using computer vision and machine learning.
Jere Raninen has an MSc in Computer Science from the University of Eastern Finland. Currently, he is working as an IT expert at the National Land Survey of Finland. Since 2021, he has worked on multiple AI projects at NLS focusing on the use of deep learning and multi-task learning methods for automatic road segmentation from aerial images and Lidar data.
Further reading
Hattula, E., Zhu, L., & Raninen, J. (2024). Building extraction in urban and rural areas with aerial and Lidar DSM. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 10, 73-79.
Hattula, E., Zhu, L., & Raninen, J. (2023). Advantages of using transfer learning in remote sensing. Remote Sensing, 15(17).
Figure 4: Results from the rural area. Up: Aerial-image DSM and the result of building detection in blue, with yellow indicating a missed building. Down: Lidar DSM and the result of building detection in red.
Digitize the World. Fast. Precise. Efficient.
RIEGL’s Ultimate LiDAR™ Technology offers a wide range of performance characteristics and serves as a platform for continuing innovation in 3D for the LiDAR industry.
RIEGL now proudly presents the new products launched at Intergeo 2024 in Stuttgart!
Maximum user-friendliness for acquisition and processing of UAV-based scan and image data
RIEGL VUX-10025 with RiLOC-F
RIEGL’s latest VUX-Series sensor complements the company’s wide range of UAV-based laser scanners with a version providing an extremely large field of view of 160 degrees for extra-wide area coverage. It is available as a stand-alone laser scanner, but also as a fully integrated laser scanning system for UAV-based data acquisition.
In combination with RiLOC-F – RIEGL’s subsystem for localization and orientation, including the appropriate RIEGL software – a simplified, user-friendly workflow for both data acquisition and post-processing results in precisely aligned and consistent point clouds.
Utmost performance in topography, mining, monitoring, and cultural & natural heritage documentation
RIEGL VZ-4000i25
Providing up to 4,600 metres of measurement range, the long-range scanner in RIEGL’s latest generation of professional terrestrial laser scanners stands for increased productivity, reliable performance, extreme versatility and new forms of connectivity – resulting in an excellent return on investment.
Utmost efficiency in high-point density ultra-wide area mapping and ultra-high resolution city mapping
RIEGL VQ-1560 III-S
The cross-fire scan pattern and the wide operational range make RIEGL’s latest Dual Channel Waveform LiDAR Scanning System one of the most versatile airborne laser scanners on the market today. It is perfectly suited for any kind of application – from ultra-dense corridor mapping from low altitudes to large-scale wide area mapping at utmost efficiency.
Unlock the power of RIEGL LiDAR for your applications!
Maximizing savings and improving efficiency with the right system
The path to accurate 3D data with handheld scanning solutions
By Gian-Philipp Patri, Leica Geosystems
Today’s mobile or ‘kinematic’ scanners are well-developed, reliable tools that consistently deliver accurate results across multiple applications. Despite this progress, some industries remain cautious, relying on more traditional methods. As handheld mobile scanners become more varied, understanding the differences between these options and their specific advantages is key to achieving optimal project outcomes and maximizing return on investment.
Before looking at handheld scanners in particular, it is important to understand where mobile scanning came from, and which applications suit it best. Laser scanning used to rely exclusively on static, tripod-based scanners that – while essential for many situations that require high accuracy – required more set-up time and could only be operated by experts with years of experience under their belts. Since then, however, technological advancements have led to the introduction of compact and portable mobile scanning devices for reliable 3D data capture in a wider range of
use cases. These cater to a user base that appreciates accuracy, but also prizes ease of use, flexibility and speed.
Mobile scanning technology has quickly evolved, and various mobile scanning solutions exist, including vehicle- and robot-mounted systems as well as handheld systems, all offering fast, efficient and comprehensive 3D data capture while moving through an environment. This makes them suitable for projects in a range of industries and applications where speed and ease of use are essential.
Versatile solution
Handheld scanners set themselves apart by enabling rapid and effective data capture of smaller, specific areas, spaces, structures or elements, such as the interiors and facades of individual buildings. It is important to note that capabilities vary widely, with certain scanners better suited for quick dimensional estimations and others designed to collect rich data in high-accuracy applications. The more sophisticated scanners offer a versatile solution to professionals in industries including architecture, engineering and construction (AEC), manufacturing, real estate and public safety.
In the AEC industry, for example, the speed of data capture is critical to keeping projects on schedule and avoiding costly delays. Handheld systems allow professionals to quickly scan construction sites at different stages of a project to ensure materials and installations are fitting as planned. In one project in downtown New York, for example, renovation was planned for a large building, requiring new as-built drawings. A handheld scanner accurately captured the entire five-storey building, comprising 120 rooms and 4,500m2, in just 90 minutes. The result was an accurate 3D point cloud of the entire interior of the building.
Similarly, speed is often a priority in public safety applications. Handheld scanning systems allow law enforcement agencies to capture crime scenes quickly, preserving crucial data without disrupting the scene for long periods. For example, in a high-profile murder investigation in the Amazon, members of the Brazilian Federal Police used handheld scanners in their forensic investigation. Using a handheld scanner enabled the team to quickly capture the scene before any crucial evidence could degrade. They used the captured data to build a forensic digital twin, allowing them to superimpose evidence from different days and meticulously reconstruct the whole crime.
A handheld laser scanner can be used to capture complex spaces, as shown here with the BLK2GO.
Ask yourself the right question
When choosing between a static and mobile scanner, people often oversimplify it as a trade-off between speed and accuracy. Instead, the real question should be: ‘What is the most efficient way to gather the 3D data I need to generate meaningful insights?’. Even within one project, the answer to that question can differ. If you’re updating the floor plan of an old building, for example, scanning a simple hallway may just require a single scan from one location. Using a mobile scanner would not offer significant
efficiency benefits over a static scanner. However, for capturing the building’s staircase, for instance, a handheld mobile scanner could be up to ten times faster, because you can scan as you walk rather than setting up a static scanner several times. Multiple technologies from the same vendor can be seamlessly deployed in the same project nowadays because the various systems use the same software processing workflow. This reduces the additional data processing workload.
One factor when using multiple systems is cost. However, to put the extra cost in perspective, users can think about the return on investment (ROI) thanks to improved efficiency and time savings. If you can now complete two jobs in a day rather than one, the additional investment quickly pays for itself. Efficiencies also come from not having to stop operations in the area you capture, such as on a busy construction site. The more frequently you use these systems, the greater the savings, and the quicker the payback.
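As a rough illustration of this payback logic, the short sketch below calculates how many working days it takes for the extra purchase price of a faster scanner to be recovered; all figures are hypothetical placeholders rather than real pricing.

```python
# Back-of-the-envelope payback sketch with purely hypothetical numbers.
def payback_days(extra_cost: float, jobs_per_day_before: int,
                 jobs_per_day_after: int, revenue_per_job: float) -> float:
    extra_revenue_per_day = (jobs_per_day_after - jobs_per_day_before) * revenue_per_job
    return extra_cost / extra_revenue_per_day

# e.g. a 30,000-unit price premium, going from one to two jobs a day at 1,500 per job:
print(payback_days(30_000, 1, 2, 1_500))  # 20.0 working days to break even
```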
Considerations when choosing handheld
When choosing between different handheld scanners, users should consider a variety of factors, including:
• Simplicity of capture and freedom of movement: Every minute of wasted time snowballs over a project, reducing the ROI. Data capture efficiency is maximized with scanners that function well in multiple orientations and allow the user to move constantly without stopping.
• Scanner weight: When scanning for hours every day on the move, even a couple of kilograms can make a difference to the physical burden. Moreover, many countries have laws governing how long workers can carry equipment in a certain weight category.
• Desired output: Some mobile scanners offer more data accuracy, which is important if you need precise measurements for detailed analysis or design adjustments. Others offer a better-looking point cloud, providing better visualization for presentations and reports when communicating with other stakeholders. Some scanners even allow you to view a colourized scan on your smartphone in real time as you walk around, and export that immediately without additional data processing. This could be essential when quick on-site decisions are needed.
A handheld laser scanner proved to be an ideal choice for capturing the 17th-century Citadel of Besançon in France.
The real question should be: ‘What is the most efficient way to gather the 3D data I need to generate meaningful insights?’
• Data processing speed: Fast-paced projects, such as active construction monitoring or scanning hundreds of apartments in order to sell them, require quick turnaround times on the same day. For periodic scans, such as for annual facility documentation or renovation planning, it is usually no problem if the data is processed overnight.
• Processing flexibility: Cloud-based data processing is often preferred for its simplicity and avoids having to pay for servers. However, there are situations in which you may need to process the data using your own hardware – e.g. in high-security industries where data sensitivity is a concern, to achieve quick processing in remote locations without Wi-Fi or a mobile connection, or simply to suit the current workflow. Flexible handheld systems offer both options, allowing you to choose the best workflow for your needs.
• Cost: When evaluating the cost of the purchase, look beyond the initial cost of the technology to also take the lifetime costs and long-term savings into account. For example, selecting a scanning solution that ensures a streamlined workflow and cuts down the scanning time will lead to substantial savings in human resource costs. Similarly, choosing a solution that cuts down on the time it takes to clean up data can significantly reduce the overall operational cost.
About the author
Gian-Philipp Patri is product manager for Leica BLK2GO and Leica BLK2GO PULSE within the reality capture division of Hexagon Geosystems. He holds a BSc in Geomatics Engineering and an MSc in Geomatics Science from the Technical University of Graz, Austria, and is currently studying for an Executive MBA in Business & IT at the Technical University of Munich, Germany.
Better outcomes
As mobile scanning technology continues to evolve, its ability to deliver fast, accurate and actionable data will only improve. With mobile scanners – whether vehicle-mounted, robot-mounted or handheld systems, depending on their specific needs – industry professionals can streamline their workflows, boost productivity and ultimately drive better outcomes.
Portable laser scanners offer significant benefits for restoration projects, especially in preserving cultural heritage.
How cities are leveraging innovative technology for sustainable urban development
The role of digital twins in mitigating urban heat islands
By Mila Koeva, Aradhana Tripathy and Amir Afzalinezhad, University of Twente
As cities continue to expand, the need for innovative, sustainable solutions to urban heat island (UHI) effects becomes more urgent. Digital twins offer a promising approach to addressing UHI challenges such as higher energy consumption, poorer air quality and adverse health impacts. This article gives examples of their effectiveness in urban planning – from a better understanding of how heat behaves supported by machine learning models, to improved transparency and community engagement – and presents real-world case studies highlighting how cities can leverage this technology for sustainable urban development.
Urban heat island (UHI) effects occur when cities become significantly warmer than their surrounding rural areas due to human activity, dense construction and limited green spaces. These effects pose significant challenges to the health and well-being of city dwellers, particularly as urbanization continues to grow. With two-thirds of the global population projected to live in cities by 2050, addressing UHI challenges is critical for creating more livable and climate-resilient urban environments.
One of the primary factors driving UHI is urban morphology: the design, layout and materials used in city infrastructure. Dense environments with tall buildings, narrow streets and heat-retaining materials intensify UHI conditions. For example, ‘street canyons’ – created when buildings are much taller than the width of streets – trap heat and reduce airflow, worsening UHI effects. The lack of green spaces further exacerbates this problem by limiting natural cooling opportunities. In contrast, cities that incorporate green infrastructure, like trees, parks and green roofs, can significantly reduce surface temperatures, improve thermal comfort and create more livable urban environments.
Urban form vs environmental performance
Digital twins (DTs) offer a promising approach for integrating these nature-based solutions into urban planning to mitigate the UHI effect. DTs are dynamic virtual models that replicate real-world environments and are continuously updated with real-time data, enabling urban planners to simulate different strategies for reducing urban temperatures. Planners can test the impact on UHI formation of adding green spaces, altering building materials or redesigning streets before making actual changes. By providing a live, interactive view of city systems, DTs allow for more informed, data-driven decisions. They also offer insights into the relationship between urban form and environmental performance, such as how building placement affects airflow and solar energy potential – both of which influence UHI intensity.
One innovative use of digital twin technology involves 3D city modelling and environmental simulations. With the integration of geospatial data like Lidar and satellite imagery, planners can create detailed models that simulate the impact
of different urban forms on temperature distribution. These models help planners assess how changes to building height, orientation and material properties affect heat retention in specific areas. By visualizing factors like solar radiation and wind patterns, DTs become a powerful tool for predicting the outcomes of planning decisions and optimizing strategies for better urban cooling.
Machine learning models
Research has shown the effectiveness of digital twins in urban planning. For example, machine learning models within digital thermal twins allow planners to understand how heat behaves in urban environments and predict how different layouts or materials might affect temperature. The 3D visualizations provided by DTs give planners an opportunity to see how buildings, streets and green spaces interact, either to trap heat or improve airflow, which helps cool down urban areas. As a result, DTs offer cities the ability to make smarter, more climate-responsive planning decisions.
Another valuable application of digital twins is their ability to involve the public in urban planning. Interactive models allow
community members to see how proposed changes could affect their neighbourhoods and provide feedback, ensuring that planning decisions reflect the needs and concerns of residents. This engagement promotes transparency and collaboration in the urban planning process.
Integration challenges
However, integrating digital twins into urban planning still presents some challenges. Significant hurdles include data quality, real-time sensor integration, and the costs associated with maintaining DT systems. Additionally, digital twins require a steady stream of reliable data, which can be
difficult to collect consistently across large urban areas. Nevertheless, as technology improves and more cities adopt DTs, these tools will likely become essential for tackling environmental challenges like the UHI effect.
The two case studies below highlight how cities can leverage digital twin technology in urban planning processes to combat UHI effects and support sustainable urban development.
Case study: Enschede, the Netherlands
Enschede, a city in Twente in the eastern region of the Netherlands, has been dealing with the impacts of urban heat islands due to its increasing urbanization and the associated rise in impervious surfaces. The city is particularly vulnerable to the UHI effect during heatwaves, which have become more frequent and intense in recent years.
The challenge in this case was to develop a tool for addressing UHI through data-driven decision-making. This meant creating a 3D environment that could visualize the modelling of different weather parameters, along with the capability to model the thermal comfort of a modified, built environment. It was not enough to simply visualize the pattern of thermal comfort using physiological equivalent temperature (PET) in different complex combinations of weather parameters (such as wind flow intensity, wind direction, humidity and air temperature). The aim of the project was also to see the direct effect of modifying the built environment on the immediate neighbourhood.
In collaboration with the University of Twente, the entire project was developed on open-source software and methodologies. Open-source 3D information from the national AHN-4 Lidar point cloud dataset was used to form the base conditions of the 3D model, and the weather information was taken from the EPW Typical Meteorological Year 5.2 global dataset, which used the Twente airfield weather sensor. This allowed weather conditions to be modelled at a temporal resolution of up to one hour for any given typical month.
Figure 1: Developed 3D built-form scenarios with visualization of the changes in modelled wind, temperature and PET.
Using data extracts from the two equinoxes and two solstices, the pattern of thermal comfort was calculated for the existing built form. Then, the 3D environment itself was modified to introduce tall buildings while eliminating small buildings in a given neighbourhood. The thermal comfort patterns were again calculated on different days, with the equinoxes forming the average conditions while the solstices formed the most extreme ones.
As a result, Enschede’s digital twin system is a detailed 3D model of the city that simulates patterns of thermal comfort using a modified PET indicator. This model allows city planners to visualize how different parts of the city react to changes in the built form in terms of thermal comfort on the ground. It also enables them to assess how various interventions can be planned to offset the predicted thermal hotspots created by introducing new buildings. The process of identifying developed hot zones and cool zones due to newly built forms involves using simulations to assess how changes in urban structures, such as the construction of new buildings, impact the distribution of heat in a city (Figure 2).
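One simplified way to picture this hot-zone and cool-zone identification is to difference the thermal comfort (PET) rasters simulated before and after a built-form change and threshold the result. The sketch below assumes a ±1°C threshold and small NumPy arrays purely for illustration; it is not the project’s actual method.

```python
# Illustrative sketch (assumed threshold and arrays): classify cells as hot
# or cool zones by differencing PET rasters simulated before and after a
# built-form change.
import numpy as np

def classify_zones(pet_before: np.ndarray, pet_after: np.ndarray,
                   threshold: float = 1.0) -> np.ndarray:
    """Return +1 where PET rose by more than the threshold (hot zone),
    -1 where it fell by more than the threshold (cool zone), 0 elsewhere."""
    delta = pet_after - pet_before
    zones = np.zeros_like(delta, dtype=np.int8)
    zones[delta > threshold] = 1
    zones[delta < -threshold] = -1
    return zones

pet_before = np.array([[28.0, 29.5], [31.0, 30.0]])
pet_after = np.array([[27.0, 29.6], [33.5, 28.5]])
print(classify_zones(pet_before, pet_after))
# [[ 0  0]
#  [ 1 -1]]
```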
The goal, of course, remains to replace the global weather dataset with real-time sensor information to create an even more accurate weather model to use as the basis for decision-making. However, until the network of sensors can be strategically planned and
placed, the global dataset suffices to give an overall idea of the thermal comfort trends to be expected.
Case study: Wuppertal, Germany
Wuppertal, a city situated in a valley along the Wupper river in Germany, faces unique UHI challenges due to its topography and dense urban fabric. The limited airflow along the river exacerbates the UHI effect by trapping heat in the lower parts of the city. With projections indicating a significant increase in the number of hot days by 2060, effective planning is essential to mitigate UHI effects.
The digital twin is a 3D city model simulating thermal comfort with a modified PET indicator
Figure 2: Identifying developed hot zones and cool zones due to the newly built forms.
Figure 3: Current situation – without a water body, the temperature is 15.32 °C. (Image courtesy: University of Twente, 2024)
Figure 4: Future planning scenario – with the water body, the temperature is 13.08 °C. The reduction in UHI intensity is about 18.26%. (Image courtesy: University of Twente, 2024)
A project was launched to develop an interactive platform for visualizing various scenarios and their potential impacts on decision-making in Wuppertal. This resulted in the Digital Twin-Based Planning Support System (DT-PSS) to simulate the effects on UHI formation of various urban planning scenarios, such as increasing green spaces, constructing new buildings, increasing population density or modifying layouts. The DT-PSS framework integrated past data collection and analysis, the creation of a 3D city model using real-time temperature data, and the evaluation of scenarios to predict UHI formation. Machine learning, real-time data and 3D city modelling were combined to create an innovative simulation system. A random forest model was trained using nine key predictor variables, including Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI) and population density. These variables were selected for their strong statistical correlation with temperature.
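The random forest step can be illustrated with the short scikit-learn sketch below, which fits a regressor on a few of the predictor variables named above (NDVI, NDWI and population density) against temperature. The synthetic training data and hyperparameters are assumptions for illustration only, not the Wuppertal model.

```python
# Minimal sketch of the random-forest idea (synthetic data, assumed
# hyperparameters - not the Wuppertal model): predict air temperature from
# per-cell predictors such as NDVI, NDWI and population density.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
ndvi = rng.uniform(0.0, 0.8, n)          # vegetation index
ndwi = rng.uniform(-0.3, 0.5, n)         # water index
pop_density = rng.uniform(0, 15_000, n)  # inhabitants per km^2

# Synthetic "measured" temperature: warmer where vegetation and water are
# scarce and population density is high, plus noise.
temp = 22 - 6 * ndvi - 3 * ndwi + 0.0002 * pop_density + rng.normal(0, 0.5, n)

X = np.column_stack([ndvi, ndwi, pop_density])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, temp)

# Predict the temperature for one cell of a planning scenario with more greenery:
print(model.predict([[0.6, 0.2, 4_000]]))
```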
This predictive model was then combined with real-time temperature data collected from ground sensors in Wuppertal, allowing continuous updates to the digital twin. The integration of real-time data enabled the system to simulate temperature variations in response to different planning scenarios. The trained model was incorporated into a 3D city model of Wuppertal and imported into the Unreal Engine (UE) platform, which facilitates advanced visualization and interactive simulations. This allows users to explore the impact on UHI formation of interventions such as altering building density or adding green spaces (Figures 3 and 4).
To ensure practical applicability, the system was designed with input from urban planners and municipal officials. A workshop was conducted where stakeholders used virtual reality (VR) glasses to interact with the model, explore ‘what-if’ scenarios and provide feedback on the system’s usability and relevance, making it more responsive to their needs.
Acknowledgements
The authors would like to express their sincere gratitude to Dr Pirouz Nourian, Dr Cristine Pohl and Benjamin Bleske for their valuable contributions to the joint supervision and collaboration.
Further reading
• Cárdenas, I., Koeva, M., Davey, C., & Nourian, P. (2024). Urban digital twin-based solution using geospatial information for solid waste management. Sustainable Cities and Society.
• Campoverde, C., Koeva, M., Persello, C., Maslov, K., Jiao, W., & Petrova-Antonova, D. (2024). Automatic Building Roof Plane Extraction in Urban Environments for 3D City Modelling Using Remote Sensing Data. Remote Sensing, 16(8), 1386. https://doi.org/10.3390/rs16081386
• Cárdenas, I., Koeva, M., Davey, C., & Nourian, P. (2024). Solid Waste in the Virtual World: A Digital Twinning Approach for Waste Collection Planning. Springer, pp. 61-74 (Lecture Notes in Geoinformation and Cartography, vol. XIII).
• Kumalasari, D., Koeva, M., Vahdatikhaki, F., Petrova-Antonova, D., & Kuffer, M. (2023). Planning Walkable Cities: Generative Design Approach towards Digital Twin Implementation. Remote Sensing, 15(4), 1088.
• La Guardia, M., & Koeva, M. (2023). Towards Digital Twinning on the Web: Heterogeneous 3D Data Fusion Based on Open-Source Structure. Remote Sensing, 15(3), 721.
About the authors
Mila Koeva is an associate professor at the Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente, the Netherlands. Her main areas of expertise include photogrammetry and remote sensing for cadastral mapping, urban planning, 3D modelling and digital twins, among others. She leads the Digital Twin Geohub, ISPRS IV/WG 9 and FIG VII/4 AI4LA.
Aradhana Tripathy is a GIS specialist with a background in urban planning and architecture. She specializes in conducting research to understand the relationships between urban morphology and climate resilience through 3D modelling and remote sensing.
Amir Afzalinezhad is an urban planner with master’s degrees in Urban Planning from the University of Tehran and from the Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente. He specializes in digital twins, geographic information systems (GIS), 3D modelling and remote sensing, aiming to develop innovative solutions that promote sustainable urban development.
A rough mobile version of the DT-PSS has also been developed to enhance accessibility and usability for urban planners and stakeholders. The advantages of the mobile version can be extended to citizens, as it empowers them to engage actively in the urban planning process. Providing an intuitive interface makes it possible for citizens to access and interact with the digital twin, allowing them to visualize how different urban development scenarios may affect their lives and neighbourhoods. This capability helps transparency and collaboration between planners and residents, enabling citizens to provide input and feedback on proposed interventions. Moreover, the mobile platform encourages a sense of involvement in local planning decisions, as citizens can participate in assessing the immediate effects of environmental changes and advocate for strategies that prioritize their community’s well-being. Overall, the mobile DT-PSS enhances public engagement, making urban planning more inclusive and responsive to the needs of the community.
Insights from a gathering of innovators
Intergeo 2024 through a reporter’s eyes
By Wim van Wegen, GIM International
What are the lasting memories after spending three days at Intergeo? Which topics, trends and solutions made the biggest impression? While far from complete, this report is a personal reflection – seen through the eyes of GIM International’s head of content – on the world’s most important gathering of geospatial professionals, which this year took place from 24-26 September in three halls of the famous Messe Stuttgart venue, located some 10km outside of the city centre.
If there is one topic that has increasingly run as a common thread through all the events I have attended in the last few years, it is artificial intelligence (AI). However, as a representative of one of the most important media brands in the world of mapping and surveying (which is quite a task in itself!), I feel the need to provide some context to separate the wheat from the chaff.
Need for intelligent solutions
During the 2024 edition of Intergeo, AI (or ‘machine learning’, as many experts in our field prefer) was omnipresent. Speaking on how the role of AI will evolve in the geospatial sector, Bernhard Richter, vice president of Leica Geosystems’ geomatics division, said: “AI is something you cannot just grab”. He emphasized that the sector should focus on which AI-based solutions are really solving problems or making things less error-prone thanks to their ability to predict mistakes so that they can be avoided. Richter mentioned NVIDIA as a key player here, as “they are in charge of all the algorithms.”
Richter is among the experts who prefer to talk about machine learning rather than AI. “Machine learning is something that is tucked away under the hood, and the fact that it is something that the surveyor can rely on is a great advantage for his or her work. Systems can be more robust by using machine learning, but you will always need a total station,” Richter believes.
The lack of new geospatial professionals remains a worrying topic, according to Chris Trevillian, Trimble’s senior director of product go-to-market: “A lack of fundamental knowledge threatens the industry. This requires advanced intelligent solutions from Trimble, including smart scanners, smart workflows, versatile data solutions, and robust and safe cloud-based environments. It is all about the full ecosystem and workflow, with timely support as a critical aspect.” He believes machine learning will be very helpful in an era with possibly not enough surveyors around. Object detection, pavement recording and stockpile measuring are all examples of things that can be done with the help of machine learning.
Portable mapping systems
Anyone who has spent a day, or in my case three days, at Intergeo can appreciate the wide range of mobile mapping systems, and more specifically the portable and wearable variants. Handheld mapping does display traits of hype, and is reminiscent of the industry buzz around uncrewed aerial vehicles (UAVs or ‘drones’) 10 to 15 years ago, when an entire exhibition hall was filled with UAV manufacturers. Once again, there are quite a few differences in quality. These quality differences matter less in some applications than in others, but before purchasing a portable mapping system potential buyers should carefully consider, among other things, the selection of the sensors, their mechanical capabilities, the portal-to-portal communication with other ecosystems, and the workflow that comes with the acquisition of the data.
Perhaps unsurprisingly, RIEGL – which is widely recognized as having one of the most sophisticated and best-performing high-accuracy laser scanning systems – remains confident that quality will win in the end.
An impression of this year’s Intergeo exhibition. (Image courtesy: Intergeo)
With over 17,000 visitors from 121 countries, the Intergeo exhibition demonstrates its international appeal and the stability of the industry despite global economic challenges. (Image courtesy: Intergeo)
Based on performance expectations, customers looking for a high-end solution know how to find the Austrian company. RIEGL aims to bring its renowned quality to the wearable and portable market by ensuring the hardware results in an instrument fit for the surveying purpose. But a good solution to capture high-end geospatial data also relies on the software. Additionally, a worldwide network of service points, where customers can count on first-level support in a local context, remains a very important pillar. And RIEGL ticks all three of these boxes.
Among the strong offering of mobile mapping solutions, the newly launched handheld reality capture solution from NavVis is attracting a lot of attention as an innovative handheld dynamic scanning system tailored for the architecture, engineering, and construction (AEC) sector, as well as for broader surveying applications. Felix Reinshagen, CEO and co-founder, is excited by the enthusiastic reactions, emphasizing that his company has successfully met the demand for an end-to-end reality capture solution applicable to various fields. “We believe this is a groundbreaking innovation that sets us apart from the numerous other handheld systems displayed on the exhibition floor at Intergeo,” he comments.
More accurate sensors
It is inevitable that some of today’s manufacturers and systems have a greater chance of long-term success in the geospatial world than others. According to Steve Woolven, president of Trimble Applanix, just as we have seen a massive consolidation in UAV manufacturers, the mobile mapping field will also become more mature. Customers will then briefly take a step back, as they will need a moment to consider which mapping solutions best suit their needs.
Woolven is an advocate of the crewed airborne space. Here he sees the demand for more accurate sensors, as the world needs more accurate data. “At Trimble Applanix, we are dedicated to simplifying our portfolio in the land market. This commitment is partly based on feedback we’ve received from the sector, and it is something that will greatly benefit our customers,” he notes. He has high expectations of sensor fusion: “Trimble is advancing sensor fusion technology, exemplified by our ability to enable surveying in downtown corridors with poor GNSS reception. This innovation paves the way for effective trajectory optimization in areas where GNSS is unreliable or absent, facilitating seamless surveying with any Lidar system and significantly enhancing the accuracy of Lidar point clouds.”
Other key takeaways
Another striking trend is the presence of companies that would normally only be seen at hydrographic and ocean technology events. Maritime Robotics, Seafloor Systems and Baywei are some such companies, while of course EvoLogics – a German specialist in underwater communication, positioning, navigation and monitoring, as well as autonomous surface vehicles for survey and support operations – is present for this ‘home game’. Moreover, a company like CHCNAV is now also presenting solutions and systems for underwater surveying, and can even supply high-accuracy acoustic Doppler current profilers (ADCPs) and uncrewed surface vehicles (USVs).
In my view, Earth observation (EO) deserves a bit more attention at the event. After all, the geospatial industry plays a big role in providing solutions for many challenges our society is facing. Yet in the press conference, there was no representative from the EO industry in the panel. This does not do justice to the geospatial sector’s role of mapping our world from space! Shouldn’t we give EO and satellite imagery a bigger stage? Perhaps the organizers of Intergeo will reconsider this for future editions.
The road towards Intergeo 2025
While the above suggestion is hopefully regarded as constructive feedback when developing the programme for next year’s Intergeo, there is another aspect that would probably be appreciated by a significant group of visitors, and that would at the same time contribute to increased internationalization: the English-language offering. I continue to be surprised, after so many years of attending this highly esteemed annual event, by the fact that so many talks and presentations are held in German. Maybe this is something to evaluate for next year’s edition.
Let me conclude with a side note: I’m curious to know whether the carpet will return in the exhibition halls. On a practical level, it adds a bit of warmth and cosiness (not to mention underfoot comfort!) to this ever-enjoyable gathering of professionals from around the globe. Moreover, it lends a certain prestige that befits the status of the geospatial industry.
Intergeo 2024 brought many notable highlights, with a strong focus on new software developments displayed throughout the exhibition. (Image courtesy: Intergeo)
Key evolutionary trends in the past ten years
A decade of developments in Lidar technology
By Lars Langhorst, contributing editor, GIM International
Lidar technology has undergone significant advancements in the past ten years, revolutionizing applications in various fields such as surveying, forestry, urban planning and environmental monitoring. The evolution of Lidar can be categorized into several key trends. This article looks at the emergence of new Lidar technologies, applications for new technologies, improved computing power and next-generation artificial intelligence (AI), the broad range of Lidar solutions at both the high end and the low end of the market, and Lidar’s future potential.
The last decade has seen the introduction of several innovative Lidar technologies, including single-photon Lidar, multi-spectral Lidar, and frequency modulated continuous wave (FMCW) Lidar.
Single-photon Lidar: enhanced detection and sensitivity
Single-photon Lidar represents a significant advancement in Lidar technology, utilizing the detection of a single photon to identify objects. This innovative approach allows for much higher pulse repetition rates compared to traditional Lidar systems, which typically require the detection of hundreds of photons for accurate measurements. The capacity to operate
at such high frequencies enhances the system’s detection capabilities, particularly in challenging environments where light conditions may vary. One of the primary advantages of single-photon Lidar is its exceptional range and sensitivity, making it particularly suitable for applications like bathymetric mapping or detailed surface analysis in urban environments (Mandlburger, 2019).
This technology can penetrate through dense vegetation, allowing for better assessments of forest structure and health. Due to its unique capabilities and broad applicability, this technology is likely to see future innovations.
A previous stepping stone in Lidar’s evolution
Light detection and ranging (Lidar) technology also took a significant step forwards between 2005 and 2015, enhancing its precision, versatility and accessibility. Early in this period, the introduction of multi-return systems allowed for detailed vegetation mapping and improved terrain modelling by capturing multiple reflections from a single laser pulse. The miniaturization of Lidar sensors also progressed rapidly, enabling their integration into smaller platforms such as uncrewed aerial vehicles (UAVs or ‘drones’). This revolutionized remote sensing by providing high-resolution data from previously inaccessible areas. Innovations in waveform Lidar technology further improved data quality by recording the entire return signal, allowing for more detailed analysis of surfaces and objects. Additionally, advancements in GPS and inertial measurement unit (IMU) integration enhanced the accuracy of Lidar systems, making them essential tools in industries like construction, urban planning and autonomous vehicles. The period also saw a reduction in costs due to improvements in semiconductor laser technology and economies of scale, making Lidar more accessible for various commercial and research purposes. These innovations collectively transformed Lidar from a specialized tool into a widely adopted technology across multiple industries.
Multi-spectral Lidar: integrating structure and material properties
Multi-spectral Lidar combines traditional Lidar technology with multi-spectral imaging, allowing for the simultaneous capture of both the physical structure of objects and their material properties. This integration is a game-changer for environmental monitoring, urban planning and resource management. By incorporating spectral data, multi-spectral Lidar enhances classification tasks, providing detailed insights into various environmental conditions and the composition of different materials (Kukko et al., 2019). For instance, this technology can differentiate between tree species in forestry applications, or assess the health of vegetation by analysing spectral signatures. Additionally, the ability to capture a broader range of wavelengths enables the detection of subtle changes in the environment, such as pollution levels or changes in land use. Multi-spectral Lidar is increasingly being applied in precision agriculture, where an understanding of soil and crop conditions is essential for optimizing yields. Multi-spectral Lidar is now solving challenges in ways that were unthinkable only a few years ago, underlining the position this technology can claim in the monitoring and sensor market, as well as in many other areas of business.
FMCW Lidar: revolutionizing measurement accuracy
Frequency modulated continuous wave (FMCW) Lidar operates by continuously emitting a laser beam and measuring the frequency shift of the reflected light, enabling precise distance measurements and high-resolution imaging (Van Rens, 2020).
Unique to Lidar, views of jungles and forests do not end in the canopy. Penetrating Lidar technology can perform automated analysis of forest health.
This continuous wave operation sets FMCW Lidar apart from traditional pulsed Lidar systems by enhancing measurement accuracy while reducing the complexity associated with conventional methods. As FMCW Lidar continues to advance, it holds the potential to redefine standards for measurement precision and operational efficiency across multiple industries. This technology is particularly promising for automotive applications, where real-time data processing is critical for safe navigation and obstacle detection. The ability to provide accurate, continuous data makes FMCW Lidar ideal for integration into autonomous vehicles, enhancing their situational awareness in dynamic environments. Additionally, the reduced complexity of FMCW systems facilitates their integration into smaller platforms, such as drones and compact vehicles, broadening their applicability in various fields, including urban mapping and infrastructure monitoring.
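For readers who want to see how a range falls out of the measured beat frequency, here is a minimal numerical sketch (in Python) assuming a simple linear chirp; the chirp bandwidth, duration and beat frequency are invented example values, not the specifications of any particular FMCW sensor.

```python
# Illustrative FMCW range calculation (assumed linear chirp; example values only).
C = 299_792_458.0  # speed of light in m/s

def fmcw_range(beat_frequency_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the beat frequency of a linear FMCW chirp: R = c * f_b * T / (2 * B)."""
    return C * beat_frequency_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

# Example: a 1GHz chirp swept over 10µs with a measured beat frequency of 500kHz.
print(fmcw_range(beat_frequency_hz=500e3, chirp_bandwidth_hz=1e9, chirp_duration_s=10e-6))
# -> roughly 0.75m (hypothetical numbers, ignoring any Doppler contribution)
```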
Applications of Lidar technology
The evolution of Lidar technology has significantly transformed its applications, driven by advancements in sensor miniaturization and integration. Smaller, more efficient Lidar sensors can now be incorporated into a variety of platforms, including handheld devices, drones and vehicles, making them suitable for challenging environments where space is limited, such as in cars and satellites.
Aerial mapping
The integration of Lidar sensors on UAVs has revolutionized aerial mapping capabilities. Drones equipped with Lidar can capture high-resolution 3D point clouds quickly and cost-effectively compared to traditional crewed aircraft, unlocking a broad range of new applications for a safer and more efficient work environment.
Mapping in indoor or complex environments
Handheld Lidar systems utilizing simultaneous localization and mapping (SLAM) technology have gained traction for indoor mapping
and in complex environments, allowing for rapid data collection in areas that are difficult to access with conventional equipment (Quadros et al., 2021). These advancements are instrumental in creating detailed 3D models of urban environments, which support the development of smart city initiatives and digital twins. Such models enable city planners to visualize infrastructure, assess environmental impacts and optimize resource allocation.
Autonomous navigation
Lidar is being increasingly implemented in autonomous navigation – not only for self-driving vehicles, but also in the aerospace industry, where positioning needs to be highly accurate and independent of light conditions. The synergy of high-resolution Lidar data with AI-driven analysis allows vehicles to navigate complex environments safely and effectively (Van Rens, 2020). This convergence of technology not only enhances existing applications, but also paves the way for new innovations in diverse fields, including object detection, automated filtering and monitoring.
Archaeological discoveries
Recently, a lost Mayan city was found in the Mexican Yucatan by a student using open-source Lidar data. By detecting geometric shapes underneath the jungle canopy, the student was able to spot a hidden city of hundreds of buildings and a complex network of roads.
Open-source Lidar data revealed the ancient houses and structures of a lost Mayan city that had lain hidden for more than a thousand years.
The movement towards open data together with the growing power and availability of AI opens the door for many more discoveries to be made.
Computing power and AI
The last decade has seen enormous improvements in the efficiency of Lidar data processing and delivery, largely driven by advancements in computing power, particularly through cloud computing and automation.
Advancements in data processing and delivery
Innovations in automated workflows have made the process of handling Lidar data much faster and more efficient. Modern systems can automatically align point clouds and perform quality assurance, drastically reducing the time it takes to convert raw data into usable formats. This means that clients can now access accurate information much more quickly, speeding up the decision-making process (Quadros et al., 2021).
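As a small, hypothetical illustration of what such automation can look like, the sketch below checks whether every tile of a point cloud meets a minimum point density – a typical quality-assurance step – using NumPy; the tile size and density threshold are placeholder values rather than figures from any production workflow.

```python
import numpy as np

def flag_low_density_tiles(points_xy, tile_size=10.0, min_density=25.0):
    """Bin points into square tiles and flag tiles whose density (pts/m²) is below a threshold.

    points_xy: (N, 2) array of easting/northing coordinates in metres.
    Returns a list of (tile_x_index, tile_y_index, density) for under-dense tiles.
    """
    tile_ids = np.floor(points_xy / tile_size).astype(int)          # tile index per point
    tiles, counts = np.unique(tile_ids, axis=0, return_counts=True) # points per occupied tile
    densities = counts / (tile_size ** 2)
    return [(int(tx), int(ty), float(d))
            for (tx, ty), d in zip(tiles, densities) if d < min_density]

# Example with synthetic data: one million points spread over a 500m x 500m area.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 500.0, size=(1_000_000, 2))
low = flag_low_density_tiles(points, tile_size=10.0, min_density=25.0)
print(f"{len(low)} tiles fall below the assumed 25 pts/m² requirement")
```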
The rise of cloud computing has revolutionized the way Lidar data is handled. Field teams can now upload data
to the cloud, where it is processed in real time. This capability is particularly valuable for critical applications such as disaster response and urban planning, where having the most current information can be essential for making timely and informed decisions (Kukko et al., 2019).
Cloud computing is growing every year. It truly pushes the boundaries of what was previously thought impossible and, with it, the applicational boundaries of data sources. For example, cloud computing – especially in combination with AI (see below) – holds interesting potential for dense point clouds, for which operations are traditionally costly.
Artificial intelligence and Lidar
The integration of artificial intelligence (AI) into Lidar technology has transformed its applications and overall effectiveness, marking a significant evolution in the field of remote sensing.
Advancements in sensor miniaturization and integration mean that Lidar sensors can now be incorporated into smaller platforms such as satellites. NUVIEW aims to establish the foremost commercial satellite constellation, dedicated to annually mapping the entire land surface of the planet, utilizing advanced Lidar technology. (Image courtesy: NUVIEW)
Envisioning a future of smart cities where Lidar will play a crucial role in accurate object detection, monitoring, autonomous driving and more. (Image courtesy: Vortex)
High-end vs low-end Lidar solutions
In the last decade, the Lidar market has expanded significantly, creating a diverse range of solutions that cater to both high-end and low-end applications. High-end Lidar systems, such as those from Velodyne and Leica Geosystems, have seen remarkable advancements, delivering improved quality and precision essential for applications in autonomous vehicles, aerospace and detailed topographic mapping. Simultaneously, the proliferation of cost-effective, low-end Lidar solutions has democratized access to this technology. Devices like the LeddarTech LeddarVu and the Ouster OS1 are examples of compact, affordable Lidar options that provide high-resolution data collection for applications in agriculture, environmental monitoring and urban planning. This dual advancement has not only improved the quality of results in high-end applications, but has also empowered users in various fields to utilize Lidar for practical, real-world applications.
AI algorithms enhance Lidar data processing by automating tasks such as feature extraction, classification and anomaly detection, thus significantly increasing efficiency and accuracy. For instance, machine learning techniques enable rapid classification of point clouds, allowing for the identification of features such as buildings, vegetation and infrastructure. This capability is crucial for applications in urban planning, environmental monitoring and disaster management (Van Rens, 2020).
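To give a flavour of how such a classifier might be set up, the sketch below trains a random forest on three per-point features using scikit-learn; the features, class labels and synthetic data are purely illustrative assumptions and do not reflect any specific commercial pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy per-point features: height above ground (m), return intensity, number of returns.
# In practice the training labels come from a manually classified reference area;
# here they are synthesized from the height feature so the example has something to learn.
rng = np.random.default_rng(42)
n = 5000
height = rng.uniform(0.0, 30.0, n)
intensity = rng.uniform(0.0, 255.0, n)
num_returns = rng.integers(1, 5, n)
features = np.column_stack([height, intensity, num_returns])
labels = np.where(height < 0.5, 0, np.where(height < 3.0, 1, 2))  # 0 ground, 1 low veg, 2 high object

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
classifier = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy on the synthetic data: {classifier.score(X_test, y_test):.2f}")
```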
Moreover, AI-driven data mining techniques leverage high-density point cloud data to forecast environmental changes and assess human impacts, facilitating more effective resource management (Kukko et al., 2019). The ability to analyse vast amounts of data in real time enhances decision-making processes in critical scenarios, such as emergency response efforts, where timely information is paramount.
Artificial intelligence already outperforms humans at processing point clouds, and will soon be able to filter and classify dense point clouds better than any hand-written function or algorithm. As these technologies continue to evolve, the cost of Lidar solutions is expected to decrease, broadening their accessibility across various industries and encouraging widespread adoption. This convergence not only enhances the capabilities of existing systems, but also opens up new avenues for innovation and application, shaping the future landscape of spatial analysis and remote sensing. The continued development and integration of AI with Lidar promises to unlock unprecedented potential, paving the way for smarter cities, improved environmental stewardship and more efficient resource management.
The future of Lidar
The last decade has witnessed remarkable advancements in Lidar technology, transforming its applications across various fields, including surveying, urban planning and environmental monitoring. Key developments such as single-photon, multi-spectral and FMCW Lidar have enhanced detection capabilities, measurement accuracy and the ability to capture detailed material properties.
About the author
Lars Langhorst (MSc) has a background in geomatics, and is now focusing on digitalization in the world of asset monitoring. At Sweco, Lars is working on digital solutions for the surveying industry, aimed at using data to help engineers make every decision a data-driven decision.
The integration of smaller, more efficient sensors into drones, handheld devices and vehicles has democratized access to Lidar, enabling its use in challenging environments and opening new avenues for innovation. Furthermore, improvements in computing power, particularly through cloud technology and AI integration, have revolutionized data processing, allowing for real-time analysis and faster decision-making.
The future potential of Lidar technology is vast and promising, driven by continuous advancements in sensor accuracy, miniaturization and data processing capabilities. Emerging applications in autonomous vehicles, smart cities and environmental monitoring are set to benefit immensely from Lidar’s ability to provide high-resolution, real-time 3D mapping. As artificial intelligence and machine learning algorithms become more sophisticated, they will further enhance Lidar’s data analysis, enabling predictive modelling and automated feature extraction. Additionally, the integration of Lidar with other technologies such as GPS, drones and the Internet of Things (IoT) will create more comprehensive and versatile systems, expanding its use in sectors like agriculture, disaster management and logistics.
With ongoing research and development, the cost of Lidar systems is expected to decrease, making them more accessible and driving innovation across various industries. This evolution promises to unlock new possibilities for precision and efficiency in numerous applications. As a result, Lidar is poised to play a pivotal role in shaping smart cities, improving resource management and enabling groundbreaking discoveries.
Further reading
1. Quadros, N., Weigler, B., & Chambers, B. (2021). Recent trends in four Lidar technologies. GIM International
2. Van Rens, J. (2020). The future of Lidar is critical to the future of our world. GIM International
3. Mandlburger, G. (2019). Recent Developments in Airborne Lidar. GIM International
4. Kukko, A., Kaartinen, H., & Hyyppä, J. (2019). Technologies for the Future: A Lidar Overview. GIM International
5. Lost Mayan city found in Mexico jungle by accident: https://www.bbc.com/news/articles/crmznzkly3go
Urban growth and its environmental ripple effect
Using Lidar data to establish the Ecological Index in urban Belgrade
By Malek Singer, Teledyne Geospatial
Against the backdrop of a 6% expansion in non-porous surfaces, a project in Belgrade analysed the potential for increasing the Ecological Index through various greening scenarios. A combination of aerial photography and Lidar scanning technologies was used to collect and process high-quality spatial data as the basis for generating a full 3D digital twin of the city infrastructure and trees.
Today, urbanization – the rapid expansion of cities – is a global phenomenon that we anticipate and embrace as an inescapable reality. Characterized by notable advancements in technology and infrastructure, increased job prospects and enhanced transportation and communication systems, the transformations all appear advantageous and even desirable to the populace in urban areas. However, beneath the glamour of urbanization, it’s crucial to acknowledge its environmental consequences and how they affect the residents of these rapidly evolving urban landscapes.
In 2022, five experts in landscape architecture, green infrastructure, urban planning and environmental
protection published a report of findings and suggestions defined in their project titled ‘Green Infrastructure in a Compact City – Ecological Index as an Instrument of Resistance to Climate Change’. Centered around Serbia’s capital city, Belgrade, the research investigated the escalating built-up areas over the past two decades. The increase was 6% – a number which at first glance seems insignificant. However, behind the single-digit figure lies a noteworthy expansion of non-porous territories by over 4,400 hectares. Non-porous surfaces such as asphalt, concrete or cement prevent rainwater from recharging the groundwater. They also retain heat, making urban areas warmer than their surroundings. This increase in non-porous surfaces has predominantly occurred at the expense of agricultural land
and natural ecosystems. This is impacting public green areas within urbanized parts of Belgrade municipalities, and contributing to issues like urban heat islands, flooding and a decline in biodiversity.
Ecological indicators
The project’s methodology centered around the use of the Ecological Index (EI) as a practical tool for enhancing urban greenery on plots of different land use. Also known as an ecological indicator or environmental index, the EI serves as a numerical metric to evaluate the quantity and quality of vegetation within city plots, thereby assessing their ecological significance and contribution to residents’ quality of life. These indices are typically derived from a set of physical and ecological inputs – such as land cover, building heights and tree crown cover – and other parameters that provide information about the state of the environment, such as flood resilience, biodiversity, water quality, air quality, habitat integrity and other relevant factors.
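Purely to illustrate the idea of such an index, the sketch below scores a single plot as a weighted, area-normalized sum of its surface types; the classes and weights are invented for the example and are not the criteria or coefficients used in the Belgrade study.

```python
# Illustrative Ecological Index-style score for a single city plot.
# The surface classes and weights below are assumptions made for this example;
# the Belgrade project applies its own predetermined evaluation criteria.
ECO_WEIGHTS = {
    "tree_canopy": 1.0,
    "turf": 0.6,
    "green_roof": 0.5,
    "green_wall": 0.4,
    "gravel": 0.2,
    "asphalt": 0.0,
    "concrete": 0.0,
}

def ecological_index(surfaces_m2):
    """Weighted ecologically functional area divided by total plot area (range 0 to 1)."""
    total_area = sum(surfaces_m2.values())
    weighted_area = sum(ECO_WEIGHTS.get(kind, 0.0) * area for kind, area in surfaces_m2.items())
    return weighted_area / total_area if total_area else 0.0

plot = {"tree_canopy": 420.0, "turf": 180.0, "asphalt": 600.0, "concrete": 300.0}
print(f"EI = {ecological_index(plot):.2f}")  # 0.35 for this hypothetical plot
```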
Figure 1: A bird’s-eye view of Belgrade, next to a map of the city showing ground porosity. Green represents porous surfaces (e.g. vegetation, soil, etc.). Orange represents less porous surfaces (e.g. asphalt, concrete, etc.).
Figure 2: The objects of interest were 3D objects, trees and land cover. LOD2 rules were used for the buildings in the project. It was possible to extract these attributes thanks to the Lidar sensor’s high resolution, and the oblique look on building walls.
Evaluating five scenarios
The project conducted a comprehensive pilot study across six locations in Belgrade to analyse the potential for increasing the EI through various greening scenarios. The following five distinct scenarios were evaluated, ranging from minimal ground-level planting to comprehensive rooftop and vertical wall greening, with the aim of maximizing ecological functionality and EI:
Scenario 1 considered all vegetation forms and ecological functional spaces (EFP) on the parcel, including ground level, building vertical surfaces and roofs.
Scenario 2 simulated a design with minimal additional ground-level planting (improved on Scenario 1).
Scenario 3 added green walls on suitable vertical surfaces without structural changes (improved on Scenario 2).
Scenario 4 simulated greening building roofs without structural changes (improved on Scenario 3).
Scenario 5 simulated new construction per planning documents, with structural changes to maximize EFP and EI.
The presentation of the existing condition and EFP on the pilot area parcels, as well as the evaluation of the existing EI, was based on collected spatial data. To perform the evaluation according to predetermined criteria, it was necessary to vectorize 3D objects, trees and land cover using georeferenced point clouds and aerial photographs.
Point cloud classification
Before vectorization, the point cloud was first classified into terrain points, vegetation (low, medium, high), buildings and other content. Each 3D object consisted of a base, exterior walls and a roof, represented by closed polygons. The level of detail used was LOD2. Roofs were represented by surfaces following their construction, shape and slope, exterior walls were represented by vertical surfaces, and bases were represented by closed horizontal polygons. Roof geometry was used to calculate area, aspect and slope, which were added as roof attributes for analysis and selection.
Trees were represented by two layers: trunk (3D point) and canopy (circle). For each tree entity, alphanumeric attributes were calculated from the classified point cloud geometry, including tree height, base elevation, top height, projected surface area and diameter, with the canopy including surface, top and diameter information. Land cover was represented by closed 3D polygons for each type, such as soil, shrubs, turf, concrete, asphalt and artificial grass. These polygons facilitated the application of EI assessment criteria and simplified calculations and element relationships.
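As an illustration of how such attributes can be derived, the sketch below computes basic tree metrics from the classified points of a single, already-segmented tree; the segmentation step and the exact attribute definitions used in the project are assumed here, and the numbers are synthetic.

```python
import numpy as np

def tree_attributes(points_xyz, ground_elevation):
    """Derive simple tree attributes from the classified points of one segmented tree.

    points_xyz: (N, 3) array of x, y, z coordinates (metres) for a single tree.
    ground_elevation: terrain elevation at the trunk location, taken from the ground class.
    """
    z = points_xyz[:, 2]
    top_height = float(z.max())
    tree_height = top_height - ground_elevation
    # Crown diameter approximated as the mean horizontal extent of the crown points;
    # the projected area then follows from a circle of that diameter (a simplification).
    xy = points_xyz[:, :2]
    extent = xy.max(axis=0) - xy.min(axis=0)
    crown_diameter = float(extent.mean())
    projected_area = float(np.pi * (crown_diameter / 2.0) ** 2)
    return {
        "base_elevation": float(ground_elevation),
        "top_height": top_height,
        "tree_height": float(tree_height),
        "crown_diameter": crown_diameter,
        "projected_crown_area": projected_area,
    }

# Synthetic example: a roughly 12m-tall tree standing on terrain at 75m elevation.
rng = np.random.default_rng(1)
crown_points = np.column_stack([rng.normal(0, 1.5, 500), rng.normal(0, 1.5, 500),
                                rng.uniform(78.0, 87.0, 500)])
print(tree_attributes(crown_points, ground_elevation=75.0))
```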
Leveraging Lidar
The primary source of data for the study was gathered by aerial imaging and Lidar scanning, while field methods were used as auxiliary sources. Lidar technology is increasingly being used as a tool in ecological studies, providing unparalleled data precision for characterizing the shapes, size and colour of natural and built environments. Lidar data can model the city floor, extract 3D models of buildings, compute roof slopes and extract tree height and crown size, all of which are used in calculating the EI.
In the study of six locations in Belgrade, Lidar data was leveraged to construct an EI that served as a comprehensive measure of the ecosystem’s health. Various processing methodologies were employed in utilizing Lidar technology to derive essential metrics and subsequently formulating an index for ecological assessment.
Quantification and scenario modelling
MapSoft, a geomatics company based in Belgrade, specializes in
Figure 3: The Optech Galaxy in a gyrostabilized mount, and an RGB NIR camera.
Figure 4: A thematic map derived from Lidar and imagery showing boundaries, building footprints, roof slope, tree crowns, and ground material and porosity.
executing projects related to software development and spatial data collection. The firm’s expertise lies in utilizing a combination of aerial photography and Lidar scanning technologies to collect and process high-quality spatial data. By applying these technologies for the needs of the project, MapSoft was able to generate a full three-dimensional digital twin of the city infrastructure and trees. Through that quantification and scenario modelling (e.g. what if more trees were planted, with a certain height, in a certain location?), it would ultimately be possible to help decision-makers to develop policies aimed at improving the urban quality of life and diminishing air pollution. Examples could include urban planning bylaws that would maintain a minimum number of trees in future developments.
To provide answers, MapSoft used the Teledyne Geospatial Optech Galaxy on a fixed-wing aircraft to map the entire city. This produced a map of urban landcover, 3D building models and footprints, rooftops classified by slope, and tree locations and crowns. The analysis leveraged the richness in the data to derive the EI, describing the health of a city plot.
MapSoft was able to provide the aerial imaging and Lidar point cloud of the area. The images had a spatial resolution of 5cm, while the density of the point cloud was 25pts/m². Specialized software for spatial data processing and manipulation was used for vectorization. Field methods were deployed to supplement aerial mapping with ground-based spherical images as well as control points. The data product delivered was in accordance with the required standards.
Given the dense urban area with tall buildings, it was necessary to collect detailed data within the building blocks, ensuring all blind spots were covered. Therefore, the flight was conducted at a lower altitude, utilizing the Optech Galaxy Lidar scanner to select the optimal field of view and signal strength for better penetration through vegetation and smaller openings.
About the author
Malek Singer is the airborne product manager at Teledyne Geospatial, where he is focused on the research, development and productization of innovative geospatial acquisition technology, especially Lidar.
Figure 5: An Ecological Index map indicating city plots that have more trees (green) and ones that are mostly unvegetated (grey).
In consultation with MapSoft, the project team concluded that the collected spatial data (ground sampling distance [GSD] of 5cm/pixel, point cloud density of 25pts/m²) would be of sufficient quality to generate geometric elements (with all associated attributes) for a realistic evaluation of the existing EI. All the collected data was verified through field checks.
Conclusion
MapSoft’s work with landscape architects has been crucial for completing various environmental projects, such as GIS mapping of city biotopes, tracking local pollutants and documenting urban green spaces. This project was a pioneering effort in local urban planning and management, requiring the expertise and equipment that MapSoft could provide, proven through multiple airborne laser scanning initiatives. The case study highlights the role of green infrastructure and the Ecological Index in enhancing urban resilience to climate change, as well as the contribution of Lidar technology to ecological assessments. By incorporating ecological considerations into urban planning, policymakers in Belgrade can address environmental challenges and improve the quality of life for the city’s residents.
Further reading
‘Green Infrastructure in a Compact City – Ecological Index as an Instrument of Resistance to Climate Change’, by Anica Teofilović, Andreja Tutundžić, Vesna Šabanović, Katarina Čavić-Lakić and Bojana Jovanović
Smart digital reality takes centre stage at Hexagon Live Brussels
By Wim van Wegen, GIM International
This year’s Hexagon Live Brussels demonstrated how digital twin technologies are being applied in infrastructure development, defence and many other sectors, according to Wim van Wegen, GIM International’s head of content. Here, he shares his findings from the two-day event, which brought together industry leaders and experts in the city’s historic Maison de la Poste venue to explore the latest developments in smart digital reality.
One highlight was the panel discussion featuring Frank Tierolff (Kadaster), Rink Kruk (National Geographic Institute Belgium), Rob van de Velde (Geonovum) and Nick Chorley (Hexagon). They explored a key issue driving advancements in our industry: how to effectively leverage the vast amounts of geodata. As the massive geospatial data lake grows increasingly unmanageable, the challenge lies in figuring out how to help people make meaningful use of it.
As Kruk noted: “Governments need to master their own data, but the commercial sector has a very important role to play too, namely facilitating the data.” Van de Velde added that the public sector is increasingly taking on the role of servicing the data. Unsurprisingly, the impact of artificial intelligence (AI) was another major topic throughout the two days, including in the panel discussion. “Regulation is always lagging behind practice,” Tierolff remarked on AI governance.
The influence of Pixar
It was intriguing to discover how geospatial technologies – such as sensors and point cloud imagery – are being utilized in unexpected areas, beyond the traditional geospatial sector. For example, the event provided interesting insights into the structure of a defence geoportal.
Another standout moment for me was the second panel discussion, which focused on 3D digital twins using technologies from Microsoft, NVIDIA and Hexagon. NVIDIA’s Gabriel Vincent highlighted the company’s pivotal role in advancing digital twin technology when he discussed universal scene description (USD). Originally developed
by Pixar Animation Studios, which has a rich history of working with intricate, distributed scene graphs for its award-winning animated films, USD can be thought of as the HTML of 3D – a foundational standard that could revolutionize how we build and communicate in digital environments. As a result, this represents an exciting step towards establishing a unified language for digital twins.
Driving innovation forward
Last but not least, it is worth mentioning the impressive 25-year journey of Luciad, which has been part of the Hexagon family since 2017. Since being founded in 1999 as a spinoff from the Catholic University of Leuven, Luciad has evolved into a leading platform for advanced geospatial data visualization and real-time situational awareness. Now integrated within Hexagon AB, the team is driving innovation forward, developing faster and more powerful solutions to meet the industry’s ever-evolving needs.
Before attending his engaging keynote, I had the pleasure of sitting down with Frank Suykens, who has played a key role in Luciad’s growth since joining in 2003. During our conversation, he told me – among many other things – about bringing different worlds together. He explained that the Luciad software was initially mainly focused on aerospace and defence, but that a growing part of it now runs through the entire Hexagon portfolio. Relevant applications include digital twins for cities – with cities that are sustainability pioneers leading the way. Within Hexagon, it is a firmly held belief that geospatial technologies can be an enormous help in the energy transition and dealing with
climate change, so it is important to consider which new or existing business models support this.
In his subsequent keynote, Suykens explained how HxDR and Luciad are driving the Reality Cloud Studio solution together, enabling seamless connections between people, projects and reality capture data. The approach emphasizes easy data integration, eliminating the need for preprocessing while relying on standards-based solutions. Most importantly, it ensures the data is presented in a visually functional, accessible and also attractive manner. This enables the use of visual analytics to better understand complex solutions.
One of the captivating panel discussions that took place over the course of the two-day event.
Open mapping as the key to bridging gaps
Informal settlements in Fiji: spatial data challenges and solutions
By Mohsen Kalantari, Digby Race and Joeli Varo
The urbanization trend is driving the growth of informal settlements in large cities around the world, and Fiji is no exception. To address the challenges associated with such settlements, the Fijian government has introduced a formalization process of land leases, but there are still some key gaps in the spatial data. As a solution, the authors propose investing in capacity-building initiatives to leverage existing open mapping infrastructure.
In large cities within developing countries, informal settlements are becoming increasingly prevalent. Due to a lack of development and modern facilities in rural areas, a growing number of people are migrating and resettling in urban areas, seeking better opportunities in terms of education, employment and services more commonly found in urban centres.
Many new urban residents end up living in informal settlements, as these offer much more affordable housing options than more formal or freehold arrangements. The settlements develop in an ad-hoc process over time, and housing is often noncompliant with building standards for health and safety. The administration of these settlements poses significant challenges, influenced by various internal and external factors. Internal factors encompass the land administration system, governance and tenure, planning and financial policies, civil infrastructure, and capacity issues. External factors include climate change, natural disasters, economic shocks leading to financial crises, and health risks such as the COVID-19 pandemic.
Three types of land tenure
Fiji is one example of a country where informal settlements have been increasing as a result of this ongoing rural-to-urban shift, to the extent that they have become a persistent national issue. Like many former colonial Pacific Island countries, Fiji’s land tenure system has evolved into a plural system that now comprises three types of land tenure: iTaukei (or customary) land, freehold land, and state (or ‘crown’) land:
• iTaukei land makes up 91% of Fiji’s land and is owned by native Fijians on a communal basis. Rather than having a legal title to land, each individual iTaukei land owner is registered as a member of a specific Mataqali (meaning ‘clan’), which is the legal land-owning unit in Fiji. They then have communal guardianship for life.
• Freehold land accounts for just 6-7% of the land in Fiji. It is land appropriated or purchased by Europeans before the Cession and is a fee-simple estate.
• State land makes up the remainder of the land. It is under the custodianship of the government.
Challenges faced by informal settlers
Most informal settlements in Fiji are located within urban or peri-urban areas on the edge of cities. Housing quality can vary between and within these areas, and so too can the access to services.
As with informal settlements in other countries, however, these areas are often characterized by congestion, poor housing quality, lack of proper sanitation, and limited access to essential services. Many of Fiji’s informal settlements are on state land. However, some people live in tenement housing on iTaukei land under informal arrangements with traditional landowners.
In a practice known as vakavanua, the iTaukei land is subdivided, and housing is established without a formal procedure or documentation.
Whether living on state land or iTaukei land, informal settlers lack legal recognition and tenure security. Inequalities in land
and revenue distribution associated with informal arrangements weaken individuals’ resilience, and stand in the way of improving their housing conditions and access to basic supplies like electricity and water.
Government initiatives
To ensure stability and security in informal settlements, the government has introduced a formalization process of iTaukei and state land leases. The formalization involves identifying the land required to go through the process. This may involve fieldwork, surveys, community consultations, reviewing old records and obtaining information about the current property leases. Following this step, negotiations and consultations are conducted to obtain consent from the land-owning units. This is aimed at ensuring a mutual agreement is reached between parties, addressing all terms and conditions. Such conditions can include the length of the lease agreement, the rent amount, and how the land will be utilized.
Once the parties have reached an agreement, the lease details are recorded in a legally binding lease agreement. In the case of iTaukei land, a lease document is also issued by the iTaukei Land Trust Board, which is the statutory body responsible for
administering, developing and managing the land on behalf of customary land owners. The Fiji Lands and Surveys Department records this registration, giving all parties legal status and protection. To ensure that conditions are followed, the parties involved in the binding agreement are responsible for regularly monitoring and evaluating the formalized land leases.
Addressing spatial data gaps about informal settlements
Since informal settlers lack the legal authority to inhabit the property, land-related and geospatial information on the settlements often has quality issues, if it exists at all. For the informal settlements of Fiji, cadastral boundaries, iTaukei Land Trust Board reserves, housing point data, utility meter points and digital elevation models exist for the settlements that have yet to be formalized. However, there are significant gaps in the settlement data, including about infrastructure such as roads and footpaths, and the location of utility networks such as water pipes and electricity supplies to houses/shelters. There is also a lack of information about housing types and building codes, as well as hazard maps for disaster rehabilitation and planning. Moreover, the data is not sufficiently integrated because the datasets are maintained by multiple organizations, and appropriate and systematic data integration approaches are not followed.
Rather than relying on the government authorities to address these spatial data gaps alone, mapping partnerships can be fostered between the government and informal settlement communities by investing in capacity-building initiatives. The generation, management and utilization of geospatial data for informed decision-making and inclusive development can be enhanced by leveraging existing open mapping infrastructure.
To test this approach, in partnership with the Muanivatu informal settlement community, the authors mapped shelters and houses (including photos), footpaths and drainage, using the publicly available OpenStreetMap platform. The results demonstrate that this approach could be built into the local government’s current arrangement of regular socio-economic surveys conducted by the informal settlement communities. Collection and regular updating of spatial data could be achieved by extending these surveys to include land-related and geospatial information about the settlement.
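By way of illustration, the sketch below shows one way the community-mapped features could later be retrieved programmatically from OpenStreetMap via the public Overpass API; the bounding box is a placeholder rather than the actual extent of the Muanivatu settlement.

```python
import requests

# Minimal sketch of retrieving community-mapped buildings and footpaths from
# OpenStreetMap via the public Overpass API. The bounding box is a placeholder,
# not the actual extent of the Muanivatu settlement.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
south, west, north, east = -18.16, 178.44, -18.14, 178.46  # placeholder coordinates near Suva

query = f"""
[out:json][timeout:25];
(
  way["building"]({south},{west},{north},{east});
  way["highway"="footway"]({south},{west},{north},{east});
);
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=60)
elements = response.json().get("elements", [])
print(f"{len(elements)} mapped buildings and footpaths returned for the area of interest")
```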
About the authors
Mohsen Kalantari is an associate professor of Geospatial Engineering and lead of the Digital Twin and Land Tenure Lab at the School of Civil and Environmental Engineering at UNSW Sydney, Australia. His research areas cover cadastre, land administration, building information modelling (BIM) and 3D GIS.
Digby Race is a professor of Land Management at the University of the South Pacific. He has professional experience in land management, property development and community-based development in the wider Asia Pacific region.
Dr Joeli Varo is the head of the Discipline for Land Management and Development at the University of the South Pacific, Suva, Fiji Islands. He has accumulated a wealth of experience in the public and private sectors, and is an expert in GIS and remote sensing in the Pacific region.
Figure 3: A photo of a community church and its representation in OpenStreetMap.
Inside the latest Sonobot 5 upgrades
The EvoLogics Sonobot 5: new capabilities
EvoLogics is an innovative subsea technology company established in 2000, with headquarters in Berlin, Germany, and a US sales office in Yorktown, Virginia. Combining state-of-the-art engineering with bionic concepts, EvoLogics is an expert in underwater acoustic communication, positioning networks, smart robotics and sensor systems.
The compact Sonobot 5 uncrewed surface vehicle (USV) from EvoLogics has established itself as one of the company’s core products. Available since 2011, today the Sonobot vehicle offers a wide range of hardware options that cater to most missions.
The Sonobot 5 platform
The Sonobot was developed as a compact, easy-to-deploy USV for bathymetric surveys in inland and harbour waters. The vehicle’s designers aimed to bridge the gap between manual measurements with handheld tools and the use of large crewed or uncrewed vessels. The result is a user-friendly, streamlined survey platform with multiple configuration options to best fit a particular data collection scenario.
Sonobot 5 is the current iteration of the vehicle. A stable twin-hull catamaran craft, it is foldable for convenient transportation and
can be deployed by a single operator. The vehicle’s autopilot system enables waypoint-based navigation along a preprogrammed survey grid. A radio control unit provides the option for either manual operation or supervised autonomy, where the operator can override the autopilot as required.
A wireless shore station enables on-demand or real-time data acquisition during the mission, and the system can be fitted with various WLAN and cellular communication options, depending on the end user’s operational requirements.
The sonar options include a single-beam echosounder, a multibeam sonar, a sidescan sonar and a forward-looking sonar for various data acquisition needs. The vehicle is available with both over- and underwater cameras for monitoring and capturing visual data of the surveyed areas.
Advanced options include an AI-powered object recognition module – this configuration enables riverbed and seafloor sidescan sonar imaging enhanced with automatic detection of objects, and a collision-avoidance system with a stereo camera for better navigation around obstacles.
EvoLogics continues to advance the system’s hardware and software, with new configurations for the Sonobot platform in the pipeline for the upcoming year.
Novel applications of the Sonobot 5
Over the years, EvoLogics has explored novel applications for the Sonobot vehicle beyond its standard roles in hydrographic surveys and monitoring. Back in 2015, the team was preparing for a demonstration of an LBL (long-baseline) acoustic positioning system at a conference in Genoa, Italy. The team faced a challenge, common at fast-paced industry events: system deployment, calibration, testing and demonstration all had to be performed within two hours.
A typical LBL underwater acoustic positioning system uses an array of at least three seafloor-mounted transponders that form the system’s baseline, and before putting the system into operation, these baseline nodes must be carefully located in absolute coordinates. The calibration procedure involves moving a boat with an acoustic transceiver and a DGPS receiver over the deployed baseline array to test the acoustic connection to baseline transponders and accurately establish their positions on the seabed – a time- and effort-consuming procedure.
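To make the geometry of that calibration step concrete, here is a minimal sketch that estimates a single transponder’s position by least squares from a handful of surface fixes and measured slant ranges; it assumes a constant sound speed and noise-free observations, which real calibration software of course does not.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_transponder(surface_fixes, slant_ranges, initial_guess):
    """Least-squares estimate of a seabed transponder position (x, y, z) in local metres.

    surface_fixes: (N, 3) transceiver positions from DGPS, in a local coordinate frame.
    slant_ranges:  (N,) acoustic ranges measured at each fix, already converted to metres.
    """
    def residuals(p):
        return np.linalg.norm(surface_fixes - p, axis=1) - slant_ranges
    # Constrain the solution to lie at or below the surface (z <= 0).
    bounds = ([-np.inf, -np.inf, -np.inf], [np.inf, np.inf, 0.0])
    return least_squares(residuals, initial_guess, bounds=bounds).x

# Synthetic check: a transponder at (40, -25, -30) observed from four surface fixes.
true_position = np.array([40.0, -25.0, -30.0])
fixes = np.array([[0, 0, 0], [80, 0, 0], [80, -60, 0], [0, -60, 0]], dtype=float)
ranges = np.linalg.norm(fixes - true_position, axis=1)  # noise-free ranges for the example
print(locate_transponder(fixes, ranges, initial_guess=np.array([10.0, -10.0, -10.0])))
# -> approximately [40, -25, -30]
```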
Sonobot 5 with multibeam sonar.
In preparation for the demonstration in Genoa, EvoLogics came up with the idea to use a remotely-controlled Sonobot 3, the generation of the platform at the time, to perform LBL calibration. An acoustic modem with a USBL (ultra-short baseline) antenna was towed behind the Sonobot, and, over the vehicle’s WLAN connection, successfully transmitted the established positions of LBL seabed nodes to the EvoLogics positioning software, SiNAPS, running onshore.
Using a USV within the infrastructure of an LBL positioning system proved a very promising application of the Sonobot. The team invested further efforts into hardware integration of an underwater acoustic modem with the Sonobot platform, and into automating acoustic communication tasks with the Sonobot’s autopilot functionality.
In 2016/17, the team designed and built a prototype Sonobot 3 with a USBL modem, integrated into a retractable submergible pod of the vehicle. This prototype aimed to explore new possibilities of combining the USV’s autonomy and mobility with applications that require both an acoustic link to underwater assets and a wireless connection to the onshore control station.
2017 also saw the EvoLogics trials of the bionic Manta Ray robots, developed by EvoLogics within Germany’s state-funded BOSS project. The team deployed three Manta Ray uncrewed underwater vehicles (UUVs), each equipped with an acoustic USBL modem, and ran test missions of the robots acting in a cooperative swarm.
The Manta Rays were set to submerge and start moving in one direction as a formation, whereas the Sonobot 3 with USBL was programmed to automatically follow the Manta Rays by tracking their positions over the acoustic link. The Sonobot exchanged positioning data with the Manta Rays and transferred it to the onshore station over WLAN. Fusing the acoustic positioning data with the Sonobot’s GNSS positions delivered positions of the UUVs in absolute coordinates.
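A much-simplified sketch of that fusion step is shown below: a USBL offset, already resolved into north/east/down metres, is added to the vehicle’s GNSS fix using a flat-earth approximation. Real systems also account for vessel attitude, lever arms and sound-speed variations; the coordinates and offsets used here are illustrative only.

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius in metres (spherical approximation)

def fuse_usbl_with_gnss(vessel_lat, vessel_lon, north_m, east_m, down_m):
    """Convert a USBL-derived local offset into absolute target coordinates.

    The offset is assumed to be already rotated into a north/east/down frame;
    a flat-earth approximation is used, which is adequate over a few hundred metres.
    """
    dlat = math.degrees(north_m / EARTH_RADIUS)
    dlon = math.degrees(east_m / (EARTH_RADIUS * math.cos(math.radians(vessel_lat))))
    return vessel_lat + dlat, vessel_lon + dlon, down_m  # latitude, longitude, depth (m)

# Illustrative example: a UUV tracked 120m north and 45m east of the USV, 18m deep.
print(fuse_usbl_with_gnss(52.4930, 13.4580, north_m=120.0, east_m=45.0, down_m=18.0))
```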
The initial Sonobot 3 + USBL prototype did not see a commercial release, and the platform’s development direction went towards a complete design overhaul that resulted in today’s successful Sonobot 5.
Sonobot 5 with USBL
The lessons learned from the integration of the USV with a USBL modem were not shelved indefinitely, and in 2024, EvoLogics is revisiting the Sonobot + USBL concept.
A new Sonobot 5 prototype with a USBL antenna was recently built by the robotics team. It features a streamlined moulded USBL modem, fitted on a motorized arm that lifts and submerges the USBL unit, controlled by the vehicle’s onboard software.
The team foresees several target applications for the Sonobot 5 USBL configuration. For example, EvoLogics’ recently introduced Diver Navigation System is currently available with a USBL buoy as a stationary surface node for linking the diving team to the shore. The Sonobot with a USBL antenna will make this setup mobile, as the vehicle can follow a team of divers and make it possible to move between areas of interest without the need to manually reposition and redeploy the surface node.
Another application for the Sonobot + USBL is providing an autonomous mobile surface link to data-collecting underwater vehicles, such as the EvoLogics Quadroin AUVs, moving along the target areas at greater depths.
EvoLogics is dedicated to further optimizing underwater operations with integrated systems that offer a synergy of technologies. The functional prototypes of Sonobot 5 + USBL are to undergo extensive testing over a series of field trials, and the system is set to enter the market in 2025.
Sonobot 3 with USBL following the BOSS Manta Ray AUVs, 2017.
Sonobot 5 with USBL, 2024.
Sonobot 3 with USBL during the BOSS Manta Ray trials, 2017.
Integrating geological sampling and topobathymetric monitoring data for resilience planning
Mapping Chicago’s beach and nearshore sand distributions for effective management
By Robin Mattheus, Mitchell Barklage and Liane Rosario, Illinois State Geological Survey, USA
Lake Michigan water levels fluctuate by as much as two metres over decadal time spans, altering how waves and currents interact with coastal sediments and lakefront infrastructure in ways not yet fully understood. Understanding sediment transport dynamics is foundational to effective coastal resilience planning. This article shares insights from an offshore sand assessment and beach topobathymetric monitoring study of the greater Chicago coast, a collaboration with the Illinois-Indiana Sea Grant and Illinois Coastal Management Program.
Semi-periodic fluctuations in Lake Michigan’s water level, measured in metres at the decadal scale, combine with storms to create a variety of coastal management problems along the greater Chicago lakefront. There are around two dozen engineered beaches along this ~40km-long stretch of coast, which provide recreational opportunities to a metropolitan population approaching ten million. Vegetated dune terrains are also found within this urban lakefront landscape. These beach areas are proven to be ecologically important as a bird nesting habitat, including for the threatened Great Lakes piping plover, and as a general migratory stopover site. Changes in lake level, which alter sediment transport dynamics along the coast, also factor into other management challenges. These include the shoaling of marine navigation and harbour approach channels and lakebed downcutting, which threatens lakefront infrastructure integrity. Despite the importance of understanding coastal sand transport patterns along heavily urbanized portions of the Great Lakes coast, little work has gone into mapping offshore sand distributions and monitoring geomorphic changes close to shore. Such efforts must be at the core of coastal resilience and other management endeavours.
The Illinois State Geological Survey’s Coastal Geology Group (ISGS-CGG) has been working closely with the Illinois Coastal Management Program, the Chicago Park District and the National Oceanic and Atmospheric Administration’s Illinois-Indiana Sea Grant Office since 2020 to improve geological maps and geomorphic models for the SW Lake Michigan coast. The aim is to couple an improved understanding of offshore sand occurrence to beach
geomorphic response to lake-level variances and punctuated storm events. This requires onshore and offshore sampling to characterize materials, monitoring beach and nearshore elevations to quantify changes, and subsurface imaging for sand volumetric assessments.
Lake-bottom geology
To confidently map lake-bottom geology and quantify offshore sand volumes at
Figure 1: Aerial drone images of the Montrose Beach Dunes Natural Area, Chicago (an important niche habitat for the endangered Great Lakes piping plover) in 2020 and 2024, highlighting geomorphic patterns related to rising and falling lake-level conditions.
Figure 2: GIS maps showing 2020 topobathymetric coverage of beach and nearshore areas along the Chicago coast of Lake Michigan, with inset maps showing select features of interest and the ISGS-CGG approach to geological monitoring of shoreline environments.
About the authors
Robin Mattheus is a coastal geologist with the Illinois State Geological Survey who works closely with the Illinois Coastal Management Program and the Illinois Sand Management Working Group. His current research focuses on linking coastal geomorphic development to framework geology, sand supply, human disturbances and lake hydrodynamics. Research efforts currently underway along the ~100km-long Illinois coast of SW Lake Michigan are aimed at providing data-driven insights into coastal morphodynamics to lakefront managers and various other stakeholder groups.
a regional scale, the ISGS-CGG collected marine ‘chirper’ seismic reflection data, sediment cores and sediment grab samples in the summers of 2022 and 2023. Offshore geologic units differentiated based on this data were traced regionally using 2020 Lidar-derived DEMs and slope aspect models, which showed close agreement between (1) surface and subsurface geological sample information, (2) geophysical data resolving the ‘base of sand’ subsurface reflection, and (3) the relative smoothness of the lake bottom. In particular, the difference between sandy substrate and non-sandy substrate is manifested as a contrast between smooth and rough lake bottom ‘texture’. It is important to note that all data was acquired within a relatively short period (2020–2023), providing a ‘snapshot assessment’ of sand distribution across the Chicago littoral sand transport zone. Non-sandy nearshore substrates consist of outcrops of bedrock, many recognized by the marine biological community as ‘reef-style’ fish spawning sites, and undifferentiated sediments consisting mainly of muddy gravel-cobble deposits.
GIS-based integration of multiple corroborating surficial and subsurface geological datasets, acquired within a few years, has provided an unparalleled blueprint for regional, high-resolution delineation of nearshore sand bodies from Lidar-based bathymetric terrain models. Knowing offshore sand extent and thickness allows us to understand the role of infrastructure and bedrock morphology on alongshore sediment routing and sand sequestration patterns. This data also informs recent geomorphic patterns along the coast, including that within urban lakefront embayments containing engineered pocket beaches. Chicago’s urban beaches are shown to benefit to varying degrees from their coupling to the offshore sand resource.
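The smooth-versus-rough lake-bottom 'texture' contrast described above lends itself to a simple raster analysis: computing local elevation roughness from a bathymetric DEM and thresholding it to flag candidate sandy (smooth) versus non-sandy (rough) substrate. The sketch below is not the ISGS-CGG workflow; the file name, window size and roughness threshold are illustrative assumptions.

```python
import numpy as np
import rasterio
from scipy.ndimage import uniform_filter

def local_roughness(dem, window=9):
    """Standard deviation of elevation in a moving window: a simple 'texture'
    measure in which low values indicate a smooth (candidate sandy) lakebed."""
    mean = uniform_filter(dem, size=window)
    mean_sq = uniform_filter(dem * dem, size=window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# 'lakebed_dem.tif' and the 0.05 m threshold are illustrative assumptions
with rasterio.open("lakebed_dem.tif") as src:
    dem = src.read(1).astype("float64")
    profile = src.profile

roughness = local_roughness(dem, window=9)
candidate_sand = roughness < 0.05  # smooth lake bottom -> candidate sandy substrate

profile.update(dtype="uint8", count=1, nodata=0)
with rasterio.open("candidate_sand_mask.tif", "w", **profile) as dst:
    dst.write(candidate_sand.astype("uint8"), 1)
```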
Mitchell Barklage is a geophysicist with the Illinois State Geological Survey and a member of the Earth Characterization Center. In addition to coastal and marine studies, he applies near-surface geophysical imaging techniques to a variety of projects including but not limited to groundwater management, underground carbon storage and astrobiology.
Liane Rosario is a geospatial data analyst with the Illinois State Geological Survey with an emphasis on shallow subsurface imaging technologies and geological sampling. Most recently, she deployed ground-penetrating radar to create sand thickness models and other interpretive maps that characterize the subsurface of the SW Lake Michigan coast.
Nearshore topobathymetric changes
United States federal agencies, particularly the United States Army Corps of Engineers and the National Oceanic and Atmospheric Administration, strive to acquire topobathymetric Lidar data along the Illinois coast of Lake Michigan around twice per decade. This frequency does not capture nearshore and beach geomorphic changes at a temporal resolution sufficient to provide coastal managers with useful information on developmental trends. While the 2008, 2012 and 2020 Lidar datasets provided insights into geomorphic patterns and sand volumetric changes associated with the decadal lake-level rise from 2013 to 2020, the absence of
such federal information since 2020 has left beach managers in Chicago with little information on developments during beach re-exposure and recovery with lake-level fall. To increase the temporal data resolution, the ISGS-CGG started supplementing federal datasets with annual topobathymetric surveys at priority sites in Chicago. These efforts, underway since 2020, involve deployment of single-beam sonar across nearshore waters deeper than 0.5 metres, wading surveys between that coverage and the shoreline, and beach topographic assessments using small aerial drones. All geospatial data acquisition methods integrate real-time kinematic positioning technologies for cm-scale precision in the horizontal and vertical dimensions.
Repeat topobathymetric data coverage of the urban coastal environment, at different lake-level positions, provides the basis for investigating patterns of sand redistribution and the impacts of winter ice and storms on them. The recent period of lake-level rise, from a historical low in 2013 to a level more than 1.5 metres higher in 2020, was associated with coastal inundation, beach shoreline retreat and sediment accretion along backshore regions. This general geomorphic trajectory under rising water-level conditions was common to all Chicago-area beaches, although some notable differences in geomorphic response were noted. These included
different degrees and directions of beach rotation, as related to influences of the surrounding infrastructure, embayment orientation and beach length. Changes across the inundated portions of coastal embayments/beaches and the nearshore, mapped by federal and ISGS-CGG datasets, are more highly variable. Patterns of change here are not easily recognized by coastal managers, but they are important to the continued geomorphic development of beaches. Differences in sand supply and
geomorphic changes across nearshore regions have had an impact on beach recovery dynamics with the 2020–2024 lake-level fall. Beaches within coastal embayments that gained sand during the 2013–2020 lake-level rise are recovering more quickly, as seen at Montrose Beach, Chicago’s largest. Several new dune ridges have formed here since 2020, given sand influx during lake-level rise and rapid exposure of shallowly inundated beach terrains with lake-level fall. This has
Figure 4: GIS map panels for the same stretch of coast, around North Avenue Beach, showing ISGS-CGG data distribution, along with example core image, example subsurface geophysical data images, geomorphic change model (based on federal 2008 and 2020 topobathymetric Lidar data) and a map of sand thickness (based on 2020–2023 data).
Figure 3: GIS map panels showing how offshore vertical change assessments, slope maps and geological sampling and subsurface imaging aided construction of sand thickness models; example shown is lakeward of Northwestern University campus, Evanston.
Figure 5: GIS maps showing 2008–2020 vertical changes to beach and nearshore environments along the Chicago Uptown stretch of coast, which leads up to the Montrose Beach Dunes Natural Area, highlighted in a series of additional blow-up map panels showing 2020 topobathymetric DEM (from the US Army Corps of Engineers), 2023 topographic DEM (ISGS-CGG) and 2024 topographic DEM (ISGS-CGG). A vertical change model based on 2020 and 2024 DEMs is also included, which highlights major geomorphic developments with lake-level fall.
not been the case with other beaches, particularly those that lost sand during the lake-level rise.
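The vertical change models referred to in Figures 4 to 6 boil down to differencing co-registered DEMs from two survey epochs. A minimal sketch of such a DEM-of-difference calculation follows; the file names, the assumption that both rasters share the same grid and vertical datum, and the 0.1m change threshold are illustrative.

```python
import numpy as np
import rasterio

# File names are hypothetical; both DEMs are assumed to be co-registered on
# the same grid and expressed in the same vertical datum (metres).
with rasterio.open("dem_2020.tif") as a, rasterio.open("dem_2024.tif") as b:
    z0 = a.read(1, masked=True).astype("float64")
    z1 = b.read(1, masked=True).astype("float64")
    profile = a.profile
    cell_area = abs(a.transform.a * a.transform.e)  # pixel area in m^2

dod = z1 - z0                                    # DEM of difference: positive = accretion
dod_clean = np.ma.masked_inside(dod, -0.1, 0.1)  # suppress sub-decimetre noise (illustrative)

print(f"Net volume change: {float(dod.sum()) * cell_area:.0f} m^3")
print(f"Area of significant change: {dod_clean.count() * cell_area:.0f} m^2")

profile.update(dtype="float32", nodata=-9999.0)
with rasterio.open("dod_2020_2024.tif", "w", **profile) as dst:
    dst.write(dod.filled(-9999.0).astype("float32"), 1)
```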
Guiding coastal resilience
The availability of federal Lidar-based topobathymetric elevation models, along with targeted nearshore and beach topobathymetric monitoring and regional sand assessment data, is paving the way for an improved understanding of Great Lakes coastal morphodynamics along the urban lakefront. ISGS-CGG studies to date have revealed that: (1) sand supply, which is itself time-variable, can be drastically different from beach to beach due to sand trapping against natural and/or engineered structures; (2) lake-level rise, especially if at the metre scale over sub-decadal time frames, forces a regional cross-shore and alongshore rearrangement of sand deposits by way of storm-induced profile adjustments; and (3) individual beach recovery dynamics during lake-level fall and terrain re-exposure are influenced by prior geomorphic changes across the nearshore, time-varying patterns of sand supply and beach management activities.
Understanding offshore sand distribution and alongshore transport patterns can help inform beach managers in Chicago and elsewhere along the Great Lakes on likely near-future trajectories of beach geomorphic change under anticipated lake-level fluctuations. The shape of the urban embayment and its groins and other shoreline structures must be considered as well. While all beach shorelines along the urban lakefront overwash and retreat when lake levels rise, beach-specific geomorphic trends across the nearshore influence the beach recovery dynamics during subsequent lake-level fall, when inundated terrains become exposed. This is when informed management action can be taken to improve coastal resilience in anticipation of the next period of lake-level rise. Historical patterns of beach geomorphic change and knowledge of offshore sand distributions serve as guides to inform decision makers on, for example, where sand fences can usefully be placed and where vegetation and dune terrains can be re-established. They also point to where future problems can be anticipated for a given lake-level change trajectory.
Conclusion
This regional assessment of lake margin geology and geomorphology is the first of its kind along the Chicago coast of south-west Lake Michigan, a storm-dominated margin undergoing metre-scale lake-level fluctuations that modify sand transport dynamics in ways not fully understood. The more than 20 beaches along this stretch of coast are connected, to varying degrees, to the alongshore sand transport engine and respond differently to lake-level changes and storms. Precision mapping of coastal sand distributions using Lidar, sampling and subsurface geophysical imaging methods provides a sand supply context for beach geomorphic behaviour, as captured in topobathymetric monitoring datasets. The data-integrative approach provides useful information on how urban lakefront embayments are coupled to offshore sand resources and how lake-level changes drive patterns of sand redistribution along a fragmented urban littoral zone. This information can help guide future coastal management decision-making.
5 questions to... Srdjan Sobol, Trimble Applanix
We asked Srdjan Sobol, airborne product manager at Trimble Applanix, to tell us more about how GNSS+INS OEM solutions are leading the latest revolution in mapping and surveying. Here, he explains how the integration of high-precision navigation with the capabilities of uncrewed aerial vehicles (UAVs or ‘drones’) offers even faster and more accurate data capture, while also opening up new opportunities in various industries: “The APX RTX system is designed to automate workflows, increase efficiency and reduce the cost of mapping deliverables.”
How would you describe the fourth-generation Trimble APX RTX in a nutshell?
The Trimble APX RTX is a high-performance GNSS+INS OEM solution offering supreme efficiency and cost reduction for mapping using UAVs. It does so by leveraging Trimble's global CenterPoint RTX correction service to produce real-time and post-processed positioning with centimetre-level accuracy without reference stations. In addition to CenterPoint RTX, the APX RTX uses a full Trimble GNSS and inertial technology stack. This includes
a low-noise and high-precision Trimble Maxwell survey-grade multi-frequency GNSS receiver, supported by Trimble ProPoint GNSS technology for improved integrity, enhanced constellation support and advanced signal filtering. This is combined with Applanix InFusion+ aided inertial software technology and Applanix SmartCal IMU calibration technology for highly accurate and robust positioning and orientation.
Srdjan Sobol is an airborne product manager at Trimble Applanix with extensive field experience in the mapping industry. Since gaining his BSc in Electrical Engineering, he has specialized in GNSS-INS solutions and sensor integrations for direct georeferencing and mapping applications.
What are the main advantages of direct georeferencing for UAV platforms?
The direct georeferencing (DG) method assigns absolute geographic coordinates to the measurements of a mapping sensor (i.e. pixels/voxels) by computing the position and orientation of the imaging payload with respect to the Earth to a very high degree of accuracy and precision, without the need to establish or use ground control points (GCPs). Direct georeferencing
Figure 2: Direct georeferencing is applicable to all types of mapping sensors.
Figure 1: The Trimble APX RTX provides high accuracy in an extremely small package.
directly increases the efficiency and productivity of mapping with UAVs in several ways.
Firstly, DG increases the value of your UAV mapping platform by making it mission-type agnostic and enabling mapping anytime, anywhere – including in hazardous or inaccessible areas and along long transmission corridors. Secondly, the DG method used in photogrammetry applications does not depend on pixel-matching techniques or require extensive image overlap/sidelap. It is directly applicable to flight mission execution, allowing for the coverage of larger areas with fewer flight lines. This results in a virtual extension and optimization of your UAV battery life, relative to the coverage area for a given project. Direct georeferencing is applicable to all mapping sensors (RGB, thermal and multi/hyperspectral cameras, Lidar, radar…). Its ultra-fast georeferencing capability reduces the production time of the final deliverable.
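As a back-of-the-envelope illustration of the direct georeferencing principle, the sketch below projects an image measurement to the ground using only the payload's position and orientation, intersecting the camera ray with a flat ground plane. All parameter values are assumptions for illustration; a real DG workflow, such as the one built around the APX RTX and POSPac, also handles lever arms, boresight angles, the full camera model and an actual terrain surface.

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Rotation matrix from camera/body frame to a local East-North-Up frame
    (angles in radians; a simplified convention for illustration only)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def ground_point(cam_xyz, ypr, pixel_xy, focal_mm, pixel_size_mm, ground_z=0.0):
    """Intersect the ray through an image pixel with the plane z = ground_z.
    cam_xyz is the GNSS+INS-derived camera position in a local ENU frame (metres)."""
    x_img, y_img = pixel_xy
    # Ray direction in the camera frame (pinhole model, looking down the -z axis)
    d_cam = np.array([x_img * pixel_size_mm, y_img * pixel_size_mm, -focal_mm])
    d_enu = rotation_from_ypr(*ypr) @ d_cam
    t = (ground_z - cam_xyz[2]) / d_enu[2]  # scale factor along the ray
    return cam_xyz + t * d_enu

# Hypothetical values: camera 120 m above flat ground, near-nadir attitude
p = ground_point(np.array([0.0, 0.0, 120.0]), (0.02, 0.01, -0.01),
                 pixel_xy=(350.5, -210.0), focal_mm=35.0, pixel_size_mm=0.004)
print(f"Ground coordinates (E, N): {p[0]:.2f} m, {p[1]:.2f} m")
```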
What benefits does the Trimble APX RTX offer to mapping and surveying professionals?
The Trimble APX RTX combines the best technologies – DG and the Trimble CenterPoint RTX high-accuracy correction service – to deliver an efficient and cost-saving solution to mapping and surveying professionals. This efficiency is achieved through the RTX Complete licence applied directly in the hardware. All the advantages of direct georeferencing translate into significant cost reductions by saving time and money at every stage of the project: from mission planning, to flying and data collection, to map production.
Moreover, our RTX Complete service offers ‘freedom to fly’ in multiple ways. For example, it removes the need for a ground base station as a GNSS augmentation source for real-time RTK or post-processing use. This reduces the amount of hardware required, saves setup time in the field, and allows for the quick and efficient generation of mapping products. Meanwhile, unlike single-base solutions in which performance is limited by baseline distance, RTX corrections are global. This eliminates planning complexity and enables missions to be flown anytime, anywhere, including beyond visual line of sight (BVLOS). Additionally, the consistent mapping frame datum definition keeps workflows automatic and reduces support costs by providing a consistent frame of reference for computed positions. And lastly, POSPac UAV allows immediate post-processing in the field without the need for internet access or additional processing licences.
What is the primary target audience for the APX RTX system?
While the benefits and advantages of the APX RTX system are visible and applicable to mapping and surveying professionals such as flyers and end-users, it is primarily an embedded OEM solution tailored for payload integrators and manufacturers of sensors and UAVs. The lightweight design of 56g, small footprint of 60x67x15mm and volumetric size of just 60cm³ allow for compact hardware integration with a variety of mapping sensors, without being a limiting factor in terms of overall load and payload capacity for drone support. Utilizing a unique dynamic modelling software technique, the APX RTX supports all types of drones, including fixed-wing, rotor and hybrid vertical takeoff
Figure 3: Direct georeferencing enables mapping anytime, anywhere, including in long transmission corridors.
About the APX RTX
The evolution of UAV technology has revolutionized the field of mapping and surveying, offering unprecedented access to data and imagery with remarkable accuracy and efficiency. The Trimble APX RTX, as a cutting-edge GNSS+INS original equipment manufacturer (OEM) solution, stands at the forefront of this technological advancement, embodying the integration of high-precision navigation with the flexibility and reach of UAV platforms. This synergy not only enhances the quality and speed of data collection, but also opens up new possibilities for applications in various sectors including agriculture, construction, environmental monitoring and urban planning.
One of the standout features of the APX RTX system is its utilization of Trimble’s CenterPoint RTX correction service, which provides global, centimetre-level accuracy without the need for traditional ground control points. This capability significantly reduces the time and cost associated with mapping projects, while also enabling operations in remote or difficult-to-access areas where setting up GCPs would be impractical or impossible.
Moreover, thanks to its adaptability to support all types of drones, plus its compatibility with a wide range of mapping sensors, the APX RTX system is a versatile tool that can be tailored to meet the specific needs of various mapping projects. Whether for detailed agricultural field analysis, construction site monitoring or environmental conservation efforts, the APX RTX system provides a robust platform for generating high-quality geospatial data.
and landing (VTOL). It also comes in four different performance levels to address the accuracy requirements of a specific mapping sensor and flying attributes of the UAV platform.
Is there anything else potential users should know about this new platform?
As mentioned above, post-processing is supported through POSPac UAV without the need for separate licences. POSPac PP-RTX, as well as traditional processing methods like Single and SmartBase, are all included as part of the CenterPoint RTX Complete service and are enabled on the hardware firmware. The POSPac suite also features add-on modules such as Camera QC and LiDAR QC for payload boresight calibration, and seamlessly integrates with Lidar systems to perform trajectory optimization during weak GNSS conditions.
The Trimble APX RTX represents an efficient and cost-effective solution for meeting the ever-increasing demand for accurate and timely geospatial information. As a result, it empowers decision-makers with the data necessary to make informed choices in managing resources, planning development projects and protecting the environment.
More information: https://www.applanix.com/products/apx-rtx.htm
Figure 4: Direct georeferencing allows for the coverage of larger areas with fewer flight lines.
Helping communities address the global plastic waste crisis
Mapping the plastic using UAV imagery
By Simon Ironside, Gordana Jakovljevic and Naa Dedei Tagoe
In response to the devastating effects of plastic on the Earth’s oceans, and specifically on waterways and coastal environments, the International Federation of Surveyors (FIG) has formed the Mapping the Plastic Working Group (WG 4.3). It has developed a plastic mapping solution, based on deep learning algorithms in conjunction with UAV-captured imagery, aimed at dealing with the problem before it reaches the ocean. The solution is already being used in Ghana, where several communities are severely impacted by the effects of plastic pollution.
Is the threat from plastic really apocalyptic? Surely it doesn’t compare to climate change or a worldwide pandemic? The answers, unfortunately, are yes, and it does (Campbell, 2022). Plastic is everywhere. It affects the daily lives of almost everyone on the planet, in all corners of the globe. Almost every piece of plastic ever made is still on our planet in one form or another, and 75% of it is now waste, with most of it discarded into landfills or dumped into marine environments (Fava, 2022). The United Nations Environment Programme estimates that only 9% of the nine billion tonnes of plastic produced globally has been recycled. As a result, each year, more than eight million tonnes of plastic end up in our oceans. This equates to around 15 tonnes of plastic entering our oceans every minute (UNEP, 2024), with 80%
of all litter in our oceans now made of plastic. Without action, the World Wide Fund for Nature (WWF) estimates that, by 2050, there will be more plastic in the sea than fish by weight (WWF, 2022).
Moreover, besides being difficult to recycle, slow to decay, and expensive and polluting to burn, plastic breaks down into microplastics (particles smaller than 4.75mm in diameter) that enter the food chain and cause harm to animals and potentially humans (McVeigh, 2022). There are an estimated 14 million tonnes of them residing on the sea floor (Barrett et al., 2020). Rivers are a recognized contributor to the ocean plastic problem. Plastic litter is predominantly concentrated on banks, coastal beaches and in the upper limits of surface water bodies during
the transportation process. Researchers estimate that ten major river systems carry more than 80% of the plastic waste that ends up in the Earth’s oceans, with eight of them located in Asia (Schmidt et al., 2017; Figure 2).
The international surveying community’s response
As the international surveying and spatial sciences community’s response to this huge global problem, the International Federation of Surveyors (FIG) – representing the interests of surveyors in over 120 countries – has formed the Mapping the Plastic Working Group (WG 4.3) to help combat the global plastic waste crisis. It is a combined initiative of FIG Young Surveyors Network and Commission 4 (Hydrography). Given the
Figure 1: The UNEP 2023 report ‘Turning off the Tap’ illustrates the problem, based on a snapshot of 2020 data, in million metric tonnes (MMt).
Figure 2: 80% of the plastic polluting the world’s oceans comes from these ten rivers. (Image courtesy: Schmidt et al., 2017)
GIS, remote sensing, hydrographic surveying, project management and overall measurement science skillsets within the group, the WG 4.3 focus is on the plastic waste transported in waterways and along coastlines. The aim is to find ways of dealing with the problem before it reaches the ocean.
Currently, much of the plastic waste data has been obtained from large-scale empirical estimates or from detailed plastic litter surveys, which are confined to relatively small areas. The numerous global estimations of plastic debris entering oceans annually are typically based on local or regional-scale surveys, and vary from 250,000 tons (Eriksen et al., 2014) to 4.8-12.7 million tons (Jambeck et al., 2015). Therefore, the amount of plastic in the world’s oceans remains poorly understood, resulting in a knowledge gap in terms of the temporal and spatial distribution of plastics, degradation and beach processes.
Several efforts have been made to establish a standardized monitoring methodology, such as Oslo and Paris Conventions (OSPAR) (OSPAR, 2020), Commonwealth Scientific and Industrial Research Organization (CSIRO) (Hardesty et al., 2016), National Oceanic and Atmospheric Administration (NOAA) (Opfer et al., 2012) and United Nations Environment Programme/Intergovernmental Oceanographic Commission (UNEP/IOC) (Cheshire et al., 2009). Those methodologies are based on traditional beach monitoring by visual counting of plastic pieces along transects. However, the accuracy of the beach survey results is dependent upon the skill of the observer, and the differences in these protocols make the integration and comparison of beach litter survey data difficult. They are also time- and labour-intensive, and only measure those items discarded during transportation rather than the amount of plastic being conveyed within a river system.
New research tackles data issues
Potentially resolving these issues, new research has shown that remote sensing data from satellites and airborne platforms can be a reliable source of information over large geographic areas.
Assessment of the spatial extent and variability of plastic is possible due to the unique spectral signature of polymers in the near-infrared part of the electromagnetic spectrum. Therefore, research by WG 4.3 members at the University of Banja Luka in Bosnia and Herzegovina and the University of Novi Sad in Serbia has concentrated on distinguishing plastics from surrounding litter/debris classes using remote sensing techniques.
The initial research focused on the analysis of satellite imagery. An object-pixel-based algorithm was developed for mapping plastic distribution in surface water using red, green, blue and multispectral bands from high-resolution WorldView-2 satellite imagery. The paper was subsequently published and presented at the FIG Working Week held in Hanoi, Vietnam, in May 2019 (Jakovljevic et al., 2019). This research was subsequently refined to focus on higher-accuracy plastic detection over smaller geographic areas (Figure 3) using imagery captured by uncrewed aerial vehicles (UAVs or ‘drones’). To meet internationally recognized litter assessment frameworks, the aim was to develop a surveying and mapping solution capable of accurately detecting, extracting and classifying floating and land-based plastic particles as small as 0.01m² (Jakovljevic et al., 2020).
Figure 4: Ground-truth data and results of the classification using the four tested models for detecting different plastic materials, located underwater (a) and above water (b–d).
Figure 3: Workflow used in this study.
Solution based on UAV imagery and deep learning
The resulting solution uses deep learning algorithms to extract floating and land-based plastic from high-resolution UAV-captured images of ‘hotspot’ locations. Customizable flight routes at low altitudes, in combination with photogrammetric processing algorithms such as structure from motion (SfM), provide a cost-effective solution for the acquisition of geospatial data. UAVs provide the appropriate spatial and temporal resolution to produce suitable data for mapping plastic.
For pre-processing, orthophotos are generated from the captured aerial imagery of each UAV survey. The pixel classes are delineated and labelled, and then merged using multi-resolution segmentation algorithms to create non-overlapping polygons. As this research is the first attempt to map floating plastic from UAV imagery, no previous ground-truth data is available and the delineation, classification and labelling are done manually. It is expected that the pre-processing time will decrease as the polymer ‘libraries’ grow and this process becomes increasingly automated.
Instead of time-consuming and labour-intensive methods largely based on visual interpretation and manual labelling of plastic pieces, the WG 4.3 solution employs the U-Net architecture (specifically, the ResUNet50 algorithm) with an encoder-decoder structure for image classification, including automatic classification, object
detection, and semantic segmentation. The encoder generates feature maps with high-level semantic information but low resolution, while the decoder upsamples these feature maps to retrieve spatial details and achieve fine-scaled segmentation results (Jakovljevic et al., 2020). This end-to-end semantic segmentation model, based on the U-Net deep learning architecture, is used for classifying both floating and land-based plastic. The encoder abstracts pixel information, and the decoder extracts plastic from orthophotos.
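A minimal PyTorch sketch of this encoder-decoder idea is shown below, purely to illustrate how an orthophoto tile is mapped to a per-pixel plastic mask. It is a toy network rather than the ResUNet50 model trained by WG 4.3; the layer sizes, class count and input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder for per-pixel classification (illustrative only).
    The encoder downsamples to compact, semantically rich feature maps;
    the decoder upsamples them back to full resolution, as in U-Net."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One 256x256 RGB orthophoto tile (pixel values are placeholders)
tile = torch.rand(1, 3, 256, 256)
model = TinySegNet()
logits = model(tile)                 # shape: (1, 2, 256, 256)
plastic_mask = logits.argmax(dim=1)  # per-pixel class: 0 = background, 1 = plastic
print(plastic_mask.shape)
```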
Testing in Bosnia and Herzegovina
To test and refine the plastic mapping solution, UAV imagery was processed from two sites in Bosnia and Herzegovina (Figure 6). One, at the confluence of the Crna Rijeka and Drina rivers, was chosen because it is inundated with plastic and other waste. The other, at Lake Balkana, is a pristine deep-
water lake. These sites were selected to test the algorithm’s ability to differentiate floating and submerged plastic from the surrounding water body, and to test its ability to differentiate floating and land-based plastic from other litter (Figure 5).
The algorithm detected plastic accurately in shallow water, which is a challenging environment for mapping plastic because the presence of the river bed increases water reflectance. Unexpectedly, the algorithm accurately extracted the plastic pieces that were omitted from the training data, showing good generalization abilities. The model also demonstrated its ability to detect plastic not just in water, but also on land.
Plastic mapping in action in Ghana
An example of this plastic mapping solution in action is in Ghana, where plastic pollution and other litter is severely impacting communities, especially along the coastline (Figures 7 and 8). The government has committed to addressing the problem and has implemented a National Plastic Action Partnership between the government, experts, civil society, community organizations and the private sector. The Ministry of Environment, Science, Technology & Innovation (MESTI) plays a key role in its implementation.
There is infrastructure available to recycle selected plastic waste in Ghana and, currently, informal recyclers scour the beaches and other locations for items of value, such as plastic bottles. Payment is on a per-weight basis (Figure 9). However, the waste that has no value is left behind.
Figure 5: The solution was tested at Lake Balkana (left) and the Crna Rijeka river (right).
Figure 6: The algorithm successfully extracted and classified different plastic litter types (left) from the orthophoto (right) in the Crna Rijeka river.
Government initiative: ‘Plastic not Seen’
The preliminary UAV surveys undertaken at Korle Gonno Beach in Accra by students from the University of Mines and Technology, Takoradi, were assessed under the supervision of Dr Naa Dedei Tagoe and processed by WG 4.3. Because the imagery and processed results accurately depict the situation at each location at any given time, there is no basis for misunderstanding or dispute about the amount of plastic seen or not seen by any of the parties involved. The scientific rigour of the data capture and processing has gained the confidence of all participants in Ghana, fostering a concerted effort for change.
Based on this outcome, MESTI is now implementing the ‘Plastic Not Seen’ initiative (Figure 10). This is an expansion of the current informal recycling process, whereby local people are assigned a portion of coastline adjacent to their settlement and paid to clean and maintain it. The work programme for using UAV-captured imagery to classify the type and extent of the plastic and other waste at predetermined locations is currently being finalized. Initial UAV surveys at each location
will establish the baseline waste levels, with subsequent monitoring surveys undertaken to determine the level of compliance with the agreed clean-up plan. The baseline and monitoring surveys will be undertaken locally, with the captured imagery being processed remotely – at least initially – by WG 4.3.
Conclusion
Floating plastic and other pieces of litter are predominantly concentrated on the surface or within the upper limits of a water body. The deep learning algorithms in the WG 4.3 solution have been trained to extract plastic and other litter from the surrounding water (salt or fresh) by differentiating the spectral signatures of the plastic and the water body. This methodology can be used to map plastic waste in marine environments, including the so-called ocean gyre ‘garbage patches’, with UAVs deployed from a support
vessel and the survey results processed onboard or remotely. Although initially developed to map floating plastic in rivers
Figure 7: Korle Gonno Beach, Accra, Ghana, on 8 May 2024. (Image courtesy: Naa Dedei Tagoe)
Figure 9: Payment is by weight, with the price varying between 1 and 3 Ghanaian cedis (as of May 2024), depending on the type of plastic. (Image courtesy: Torben Lund Christensen, Danish Association of Surveyors)
Figure 11: UAV demonstration at Korle Gonno Beach, courtesy of the Ghana Lands Commission, in May 2024. (Image courtesy: Torben Lund Christensen, Danish Association of Surveyors)
Figure 8: Korle Gonno Beach, Accra, Ghana, is one of many coastal locations throughout the country where informal settlements of indigent people eke out a subsistence living, partly by informally recycling plastic waste. (Image courtesy: Torben Lund Christensen, Danish Association of Surveyors)
Further reading
Campbell, G. (2022). Gordon Campbell on the global war against plastics, 12 January 2022. www.werewolf.co.nz/2022/01/gordon-campbell-on-the-globalwar-against-plastics
Jakovljevic, G., Govedarica, M. and Alvarez-Taboada, F. (2020). A Deep Learning Model for Automatic Plastic Mapping Using Unmanned Aerial Vehicle (UAV) Data. Remote Sens., 12, 1515. https://doi.org/10.3390/rs12091515
Schmidt, C., Krauth, T. and Wagner, S. (2017). Export of Plastic Debris by Rivers into the Sea. Environ Sci Technol., 51(21), 12246–12253. https://doi.org/10.1021/acs.est.7b02368
Eriksen, M., Lebreton, L.C.M., Carson, H.S., Thiel, M., Moore, C.J., Borerro, J.C., Galgani, F., Ryan, P.G. and Reisser, J. (2014). Plastic Pollution in the World’s Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea. PLoS ONE, 9, e111913. https://doi.org/10.1371/journal.pone.0111913
Figure 10: Informal coastal settlements will play a major role in, and be a principal beneficiary of, the Plastic not Seen initiative. (Image courtesy: Torben Lund Christensen, Danish Association of Surveyors)
and waterways, this mapping solution has also proven effective in mapping land-based plastic waste and other litter, thus broadening its scope to environments such as beaches and riverbanks.
Accurate knowledge of the problem is a necessary prerequisite to finding its solution. This technology represents a significant breakthrough, enabling the policymakers of small island developing states (SIDS) to better understand, cost-effectively, the extent of the problems they face. The extent of each survey is limited only by the operational parameters of the UAV, and the only field personnel required are a suitably qualified pilot and spotters. As a result, this mapping solution is relatively inexpensive. The algorithms can be modified to identify other features, and the orthophotos can be processed remotely or locally depending on the available infrastructure.
About the authors
Simon Ironside, a licensed cadastral and Level 1 hydrographic surveyor from Christchurch, New Zealand, works for Toitū Te Whenua Land Information New Zealand. With extensive experience in both disciplines, he is a Fellow of Survey + Spatial New Zealand and has been actively involved with the organization, co-directing the FIG Working Week in Christchurch in 2016. Ironside has also contributed significantly to FIG, particularly within Commission 4, where he chairs several working groups, including WG 4.3 Mapping the Plastic. As co-chair of this group and a member of the FIG Climate Compass Task Force, he continues to provide updates on efforts to address the global plastic waste crisis.
Gordana Jakovljevic, assistant professor at the University of Banja Luka, Bosnia and Herzegovina, specializes in remote sensing, deep learning, and environmental protection, focusing on water management. Her research aims to standardize real-time processing of remote sensing data for better environmental decision-making. She is a member of FIG Commission 4 and the FIG 4.3 Working Group on Mapping Plastic.
Dr Naa Dedei Tagoe, a senior lecturer at the University of Mines and Technology, Tarkwa, holds a PhD in Geomatics from the University of Cape Town. She also serves as a member of the FIG Climate Compass Task Force.
The data informs decision-making, and provides affected communities – including indigenous communities – with accurate information on critical climate change indicators such as sea-level rise or deforestation. It can also inform cases for equity and economic justice in international forums, and ongoing monitoring ensures adherence to international agreements. Therefore, although it does not directly address the root causes of coastal pollution, this technology is an innovative and sophisticated step towards plastic waste eradication at a community level.
How the transformation of the country’s land administration processes is progressing
The long road to digitization in Kenya
By Dennis Mbugua Muthama and Catherine Gateri
Kenya is establishing a digital national land information management system (NLIMS) known as Ardhisasa to improve the effectiveness of the country’s land administration system. So far, progress has been plodding because the overarching objective of ensuring land information accuracy comes at the expense of speed. This article outlines Kenya’s cadastre and non-spatial data digitization and conversion processes in the context of land administration.
Before the establishment of colonial rule in Kenya, communities administered their land using communal rules and practices, such as the Kikuyu Githaka landholding system (Okoth-Ogendo, 1991, p.63). Between 1895 and 1963, the British introduced a new land administration system in which the established land law infrastructure mediated the relationship between communities and their land parcels. At its core were the documentation of land rights holders and the establishment of a public authority to administer the established land rights and their respective holders. Among other things, the postcolonial impacts of this new regime of possession include inconsistent land
records, absentee landlordism, entrenched land corruption and historical land injustice claims.
Historical land injustice claims are among the issues being addressed by the National Land Commission (NLC), whose establishment was one of the post-2010 constitutional land reform measures. The other key post-2010 legal reforms included the introduction of new land laws, namely the Land Act of 2012, the Land Registration Act of 2012 and the Community Land Act of 2016. These laws and their supporting mechanisms are primarily aimed at streamlining land administration, with the overall goal of improving the delivery of land
administration services and the efficiency of the land market. One way this is being done is by introducing electronic land transactions premised on institutional and legal reforms.
A history of modernization attempts
The manual land administration system has long been identified as a possible cause of many protracted land administration problems (RoK, 2021). As a result, several initiatives have been attempted since 2004, such as the ‘Project for Improving Land Administration in Kenya’ (PILAK), which had modernization of the land administration system as one of its objectives (RoK, 2021). However, the lack
Figure 1: A screenshot of the property holder account dashboard (captured during the Ardhisasa stakeholder meeting and training presentation in July 2024).
of sustainability plans and the entrenched importance of land to Kenya’s society, economy and culture meant that the different initiatives did not bear the anticipated fruit. The post-2010 land law reform theory of change was that establishing a new apolitical institution, the NLC, would reduce public land corruption. In this vein, the mandate to set up the National Land Information Management System (NLIMS) was placed under the NLC. However, in 2016, the mandate was transferred back to the ministry in charge of land administration following the passage of the land law amendment. The reforms also restructured devolved land governance by removing county land management boards (CLMBs). Following these reforms, the ministry embarked on developing NLIMS, today referred to as ‘Ardhisasa’, and establishing the required policy, legal and institutional framework.
Ardhisasa was developed to reduce the backlog in land transaction processing by fast-tracking land property searches, registration, valuation and title issuance, culminating in accelerated investment and development of land as capital. In the longer term, this should resolve the land administration and management challenges of manual, paper-based transactions, introduce a more efficient and integrated land management system, and provide a tool to counter fraud and corruption within the land sector (Hoefsloot & Gateri, 2024).
Ardhisasa: slow but sure development
There are two schools of thought regarding the implementation of Ardhisasa. One is incremental implementation, which holds that the implementation should be broken down into constituent
land administration processes and then implemented with a clear roadmap, timeline and milestones. The other school of thought is the implementation of the entire system. The ministry opted for the latter, perhaps because of the failure of the previous incremental approach. Ardhisasa’s holistic establishment and development involved both administrative and technical actions. Among other things, the administrative actions included forming a taskforce to establish the status of electronic land transactions, making the necessary arrangements for ICT improvements, investments at key installations such as land registries, and capacity-building efforts. Related to this, and key to the functioning of Ardhisasa, are the technical activities that started with the mundane work of converting land records from paper to digital. Land registries were closed briefly while preparatory activities such as file sorting and categorization were conducted, during which time land transactions were impossible.
Digitization of documents and maps
One of the first digitization tasks is the preparation and digitization of different landholding documents. This involves identifying, cataloguing and sorting relevant land parcel files within the land registries, followed by verification and validation of the sorted files. After verification, each document within the files is barcoded, digitally captured and uploaded to the electronic document management system (EDMS). The digitized files are then returned to the file storage room (Datta & Muthama, 2024). The digitization workflow involves different land ministries and temporary staff who play different roles. This process is slow and tedious because each land parcel file comprises the land transaction history of the corresponding land parcel, including all land transaction-related documentation. Thus, the number of documents involved can be extremely large.
Similarly, building a digital cadastre on the QGIS platform involves a long and meticulous workflow. To digitize the paper-based maps, the surveyors working on the cadastre follow these steps (see Figure 2):
1) Scanning the paper-based spatial data sources such as Registry Index Maps (RIMs)
2) Extracting the list of coordinates
3) Transforming from Cassini grid coordinates to UTM coordinates (a coordinate transformation of the kind sketched in code after this list)
4) Selecting the control and georeferenced maps
5) Populating the plan directory
6) Digitizing the land parcels
7) Adding attribute data to the digitized land parcels
8) Quality control of the digitized land parcels and their attribute data.
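In software terms, step 3 of this workflow is a standard coordinate transformation. The sketch below uses pyproj with placeholder projection parameters; the actual Cassini-Soldner origins and datum parameters for each Kenyan survey zone would have to come from the official cadastral records, so the values shown are purely illustrative.

```python
from pyproj import Transformer

# Placeholder Cassini-Soldner definition: the central meridian, latitude of
# origin, false easting/northing and ellipsoid are illustrative, NOT the
# official parameters of any real Kenyan cadastral zone.
cassini_zone = (
    "+proj=cass +lat_0=-1.0 +lon_0=37.0 "
    "+x_0=0 +y_0=0 +a=6378249.145 +rf=293.465 +units=m +no_defs"
)
utm_37s = "EPSG:32737"  # WGS 84 / UTM zone 37S, which covers much of Kenya

transformer = Transformer.from_crs(cassini_zone, utm_37s, always_xy=True)

# A hypothetical parcel corner expressed in Cassini grid coordinates (metres)
easting, northing = transformer.transform(12345.6, -6789.0)
print(f"UTM 37S: E {easting:.2f} m, N {northing:.2f} m")
```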
This process is slow and tedious, not least due to low staffing levels, ongoing land transactions and the many documents involved. For example, in the county of Murang’a alone, there were 274,382 RIMs, not including other key paper documents such as fixed surveys. The other critical digitization process is conversion. As envisioned in the Land Registration Act, certificates of titles and leases must be migrated to a unitary regime. Through this process, land reference numbers issued using previous land registration laws are being changed to a unitary block system (e.g. LR No. xxx/xxx became Nairobi Block xxx/xxx). The process also involves the preparation
Figure 2: A visualization of the spatial data digitization workflow (captured during the Ardhisasa stakeholder meeting and training presentation in July 2024).
of cadastral maps and a conversion list indicating new land parcel numbers within each land registration section or unit.
Based on the experiences with the cadastre and non-spatial data, the digitization workflow will likely take longer than planned. Unlike in other digitization processes, every piece of data is essential for establishing each land parcel’s transaction history in line with the Torrens ‘Mirror’ principle. Consequently, all the land parcel file documents must be authenticated and verified. In essence, this approach prevents the ‘garbage in, garbage out’ problem common to many information systems when errors within the source documents are replicated.
Conclusion
One of the key arguments for the shift to digital land administration is that it will cure previous land administration issues, such as corruption and missing land records. In turn, this increases the pressure on the implementers of Ardhisasa to get it right. This pressure has cascaded down through the many document
About the authors
Dennis Mbugua Muthama is a postdoc research fellow at the British Institute in Eastern Africa-Regional Futures Project and a lecturer at the University of Nairobi’s Department of Real Estate, Construction Management & Quantity Surveying in the Faculty of the Built Environment. He is a trained land economist and a professional land administrator.
Catherine Gateri is a senior research fellow at the British Institute in Eastern Africa and a collaborator on the Regional Futures Project. She is also a lecturer at Kenyatta University’s Department of Construction and Real Estate in the School of Architecture and the Built Environment. She is a land economist and an urban and regional planner.
verification stages. The high degree of regulation is aimed at avoiding the digitization of inaccurate land records. As a result, progress has been slow. For instance, since its launch in 2021, the system has only been operational in two counties. Critics have argued that this snail-paced rollout indicates a failing implementation. However, drawing on their research and professional experience, the authors argue that the speed of implementation should not be the measure of success. They advocate for continued critical inspection of data, as digitization does not constitute the whole land administration system but is instead part of a larger integrated whole.
Further reading
Okoth-Ogendo, H.W.O. 1991. Tenants of the Crown. Evolution of Agrarian Law and Institutions in Kenya. Nairobi: ACTS Press, African Centre for Technology Studies, Kenya.
Republic of Kenya (RoK). 2021. The Report of The Cabinet Secretary to the National Assembly on Achievements and Progress in the Financial Year 2020/2021. Ministry of Lands and Physical Planning.
Datta, A. & Muthama, D.M. 2024. Sorting paper: The archival labour of digitising land records in Kenya. The Geographical Journal, 00, 1–12.
Hoefsloot, F. I., & Gateri, C. (2024). Contestation, negotiation, and experimentation: The liminality of land administration platforms in Kenya. Environment and Planning D: Society and Space, 0(0).
Figure 3: Ardhisasa is a critical milestone in transforming land management in Kenya.
Cutting-edge UAV-based solutions, research and online education
Born from the ITC expertise in Geo-information, the UAV centre has developed cutting-edge research that is now available in online courses tackling specific advanced problems.
Expand your knowledge and skills in a global community of learners. A selection of courses fed by research is available online through the Geoversity platform. They are advanced courses tackling specific problems and several use cases.
To find out more scan the QR-code or visit: ut.onl/geoversity
The importance of combining the best of both worlds
High-quality Lidar data processing services vs automatic processing with AI
The increasing affordability of high-performance Lidar technology is driving the emergence of numerous new market players. Meanwhile, the use of AI models is facilitating the automation of various tasks.
To what extent can Lidar data processing be automated? And why are specialized partners in high-quality Lidar data processing services still so important?
In today’s constantly advancing world, Lidar technology is reaching a performance level that allows for results with high point densities, a large number of echoes recorded per pulse, great precision, and a high capacity to distinguish different objects. Simultaneously, Lidar sensors have become more affordable, lighter, more precise and more versatile. This evolution has opened up opportunities for companies worldwide to enter the data
capture business by incorporating this cutting-edge technology, whether using aerial vehicles (crewed or uncrewed), road vehicles, backpacks or even handheld devices.
Hardware and software have evolved alongside sensors, allowing the processing and editing of tens of millions of points in a standard CAD environment, as well as online real-time visualization of data on remote servers or in the cloud – something that was unthinkable not long ago. Professionals now have access to an increasing amount of geospatial data of ever-improving quality, while other parameters such as update frequency and available data types are also expanding.
Automatic detection of power line components based on AI from imagery.
Emergence of new players
With the advent of Lidar sensors installed on increasingly affordable uncrewed aerial vehicles (UAVs or ‘drones’), the market has seen a shift from a few specialized companies to the emergence of hundreds of new players. However, the ease of investing in data capture hardware does not always guarantee the transformation of that data into valuable products. Activities such as processing and classifying Lidar data still require significant effort and can become a bottleneck in the production chain. On the other hand, the availability of geospatial data has solidified previously emerging applications and facilitated the emergence of new ones.
Thus, in addition to products like digital elevation models, contour lines, orthophotos and mosaics, the market demands other solutions such as 3D vector mapping, customized distance reports, simulation of power line positions under different environmental conditions, 3D models, digital twins and so on. Exploiting these data types requires efficient systems for storage, quality control and publication on geoportals, as well as the ability to analyse the data to obtain complex derivative products in increasingly shorter times.
The use and limitations of AI
Increases in computing power have enabled the application of algorithms for more efficient processing of large volumes of data.
The use of artificial intelligence (AI) models, with their learning and pattern recognition capabilities, is facilitating the automation of tasks. However, there are still processes that require significant effort – such as performing and managing labour-intensive interactive editing tasks, heuristically defining algorithm parameters, and ensuring quality control of all processes (manual or automated) to achieve final products with near-100% reliability.
Greater utilization of data
The increasingly demanding market seeks greater utilization of data and more complex derivative products. For example, precise modelling of a watershed or snow cover in high mountains can also be used to conduct flood risk analysis and prevent catastrophic situations. It is not enough to capture the
geometry of a high-voltage power line and determine the distances to nearby vegetation; its behaviour should also be modelled against atmospheric phenomena such as temperature variations or wind. And thanks to forest inventory work using Lidar, tree mass can be determined to create fuel model maps for predicting and preventing wildfires, monitoring and reducing emissions, and so on.
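As a simple illustration of the vegetation clearance analysis mentioned above, the sketch below measures distances between classified vegetation points and power line (conductor) points using a k-d tree nearest-neighbour query. The point file names, the assumed classification and the 4m clearance threshold are illustrative assumptions, not DIELMO 3D's production workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical classified point clouds (x, y, z in metres): in practice these
# would be loaded from LAS/LAZ files with classes such as 'conductor' and
# 'vegetation' assigned by the (AI-assisted) classification step.
conductor_pts = np.loadtxt("conductor_points.xyz")
vegetation_pts = np.loadtxt("vegetation_points.xyz")

tree = cKDTree(conductor_pts)
distances, _ = tree.query(vegetation_pts, k=1)  # nearest conductor point per vegetation point

CLEARANCE_M = 4.0  # illustrative minimum clearance, not a regulatory value
violations = vegetation_pts[distances < CLEARANCE_M]
print(f"{len(violations)} vegetation points closer than {CLEARANCE_M} m to the line")
```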
Geospatial data capture work is often subject to temporal peaks due to client-imposed deadlines, climatic and weather conditions, and other external circumstances such as flight permits, administrative delays, etc. Even the largest companies might be forced to have an oversized structure in terms of both technical and human resources. For small
Automated Lidar data classification based on AI.
3D digitization of assets using Lidar data.
Customized vegetation clearance reports near power lines under different weather conditions.
companies trying to take advantage of the latest sensor technology opportunities, the need to have such processing capacity can pose a barrier to entry into this market.
Investment in R&D&I
This is where the importance of having specialized partners in Lidar data processing, such as DIELMO 3D, comes in. With over 20 years of experience in the sector, DIELMO 3D has developed advanced solutions for the processing, management and publication of geospatial data, and has a robust and flexible processing infrastructure, high production capacity and service customization. By working with a significant range of clients, the firm can distribute the workload more efficiently over time, allowing data capture companies to focus exclusively on their core business without worrying about the complexities of subsequent processing.
DIELMO 3D’s investment in research, development and innovation (R&D&I) not only guarantees high production capacity, but has also been used to develop custom solutions for highly specialized products. These include distance studies of vegetation to power lines and railways, custom report preparation, power line modelling under different environmental conditions using PLS-CADD or the company’s own alternative tool, tree measurement for forest inventory, AI model development, web geoportals
About the Authors
Jose Carlos is CEO at Dielmo 3D, a leader in providing solutions based on geospatial data, including topography, digital mapping, inventory management and infrastructure maintenance. From its offices in Spain and the USA, Dielmo 3D serves clients across all five continent, including the processing of geoespatial data for survey companies, training artificial intelligence models, and developing customized algorithms to determine the most optimal methodology in finding solutions.
for visual infrastructure inspection management, and geoportals for private or public agencies, to name but a few.
The benefits of collaboration
Collaborating with experts in Lidar data processing allows data capture companies to concentrate on their core activity more efficiently, from the moment they acquire Lidar equipment, without worrying about processing and obtaining very specific products. For companies with significant experience and resources who already have a data processing structure, this collaboration provides flexibility and the peace of mind of being protected against temporary work peaks or unforeseen events. This keeps their processing structure within efficient limits. Additionally, thanks to the technological capability and adaptability of their partners, they can develop tailored solutions and establish solid strategic alliances.
While artificial intelligence and automated processes offer many advantages, some tasks still require human intervention to ensure the precision and reliability of the final products. DIELMO 3D combines the best of both worlds: it uses AI-based automation to optimize processes, but also brings the experience and production capacity necessary to control and improve results. This avoids the problems that can arise when these advanced tools are operated by those without adequate experience, and ensures accurate and high-quality final products. In this way, DIELMO 3D helps clients to maintain long-term trust among their own customers, securing customer satisfaction and loyalty in today’s competitive market.
5 questions to… Alex Baikovitz, Mach9
Mach9 is on a mission to redefine how unstructured 3D data is transformed into maps automatically, enabling faster and more accurate solutions for infrastructure design and asset management. At Trimble Dimensions 2024, we caught up with co-founder Alexander Baikovitz to ask him about the company’s journey, its position in the geospatial ecosystem, and its vision for the future of infrastructure development.
What does Mach9’s mission of accelerating the development of global infrastructure with its geospatial data solutions mean in practice?
Everything that’s designed and built starts with a map. Improving the accuracy and productivity of map-making is transforming how surveyors and engineers build the next generation of infrastructure such as roads, utilities, telecommunications and
many other systems that keep the world running.
We’ve created – and are actually defining – a new category of engineering software that we call ‘automated geospatial production’. With AI-powered feature extraction we’re helping our partners make maps from mobile mapping data up to 96 times faster than manual methods. We also provide really powerful and intuitive quality assurance and quality control workflows to accelerate project delivery. We’re seeing surveying and engineering firms around the world use our solutions on projects ranging from city blocks to an entire subcontinent, helping them make accurate maps faster than ever before.
How did Mach9 become the company it is today?
Some of my advisors at Carnegie Mellon were early roboticists, and
some came from a civil engineering and survey background. Our early days in infrastructure, working on programmes for Fluor and the US Department of Energy, led us to the core ideas that became Mach9. When we started the company four years ago, we brought together leading AI researchers and engineers from Carnegie Mellon and the autonomous driving industry to transform how computers make digital maps. We started actually building mobile mapping hardware. We learnt first-hand about the challenges in creating point cloud data, and quickly came to understand the true meaning of accuracy and productivity in the surveying and engineering industry.
There are some great organizations like Trimble, RIEGL, Leica, Teledyne and many others that are building the most productive ways to capture data, but not all the industry’s challenges are purely hardware-related. We found a meaningful gap in how people turn the resulting point clouds into the valuable schematics, blueprints and maps that are needed to design, develop and maintain large infrastructure systems and networks. And so, about two years ago, we pivoted the company away from hardware towards building Mach9 Digital Surveyor – a software-only solution to address the gap in generating high-fidelity, accurate and fast insights at scale to meet the infrastructure industry’s rising needs for automation.
Mach9 transforms complex image and point cloud datasets into precise 2D and 3D engineering deliverables with speed and efficiency.
How is your company embedded in the broader ecosystem of the geospatial industry?
Mach9 Digital Surveyor is interoperable with existing CAD and GIS tools, as well as many other asset management and project delivery tools that our customers and partners rely on to execute their projects. And the importance of interoperability has been underlined once again here at Trimble Dimensions: it’s essential to connect the powerful data from the mobile mapping hardware that Trimble and other equipment manufacturers produce to computer-aided design (CAD) software like Bentley MicroStation or Autodesk Civil 3D for design engineering, and geographic information systems (GIS) software like Esri ArcGIS for asset management.
Data interoperability between automated geospatial production software like Mach9 Digital Surveyor and common engineering software systems is critical. By connecting Mach9 maps into downstream infrastructure design and asset management project workflows, we are helping surveyors, engineers and owner/operators to accelerate the development of the next generation of infrastructure around the world.
How does your solution unlock value across various sectors and applications?
Mach9 is able to extract a superhuman amount of detailed information about the physical world. Often, we fit into two complementary systems and workflows: one in the infrastructure design space, and the other in infrastructure asset management. On the design side, we’re automating workflows to create the schematics and as-builts to construct major infrastructure systems. On the asset management side, we provide the fastest solutions for processing massive-scale mobile mapping data, even covering hundreds of miles, with maps delivered overnight.
Processing that used to take two to four days per mile in some cases can now be done in around ten minutes instead. This is truly transformative and illustrates the huge productivity and efficiency gains that we’re bringing to the industry, enabling the delivery of higher-fidelity information up to 96 times faster, without sacrificing accuracy. Our solution is being used to automate survey applications and production workflows for designing transportation infrastructure, and maintaining assets like signage across entire states, countries and even continents.
About Alex Baikovitz
Alexander Baikovitz co-founded Mach9 in 2021 with the goal of building a platform for automated geospatial production, leveraging advanced AI feature extraction to swiftly transform unstructured 3D data into maps. He previously worked as a Space Technology Research Fellow at NASA Ames Research Center, and gained experience as a robotics researcher at Carnegie Mellon University, USA. He earned both his BSc in Mechanical Engineering and MSc in Robotics from Carnegie Mellon.
What can we expect from Mach9 in the near future?
We are on track to extract hundreds of thousands of miles’ worth of mobile mapping data in 2025. And we see the infrastructure industry’s growing need to continue to optimize how people are collecting this information and how they can use it. Our customers and partners across many critical industries – owners, operators, and engineers in transportation, utilities, rail and other critical infrastructure – are responsible for tens of millions of miles that we’re positioned to help map over the next few years. We are confident that Mach9 will be able to address the industry’s growing need for high-resolution, high-fidelity information at scale. That is what Mach9 Digital Surveyor can help our partners accomplish in effectively every state across the USA, Canada and far beyond.
Alex Baikovitz.
A look back at Trimble Dimensions 2024
Rolling the innovation dice
By Wim van Wegen, head of content, GIM International
This year’s vibrant Trimble Dimensions event took place in Las Vegas from 11 to 13 November. As one of Trimble’s long-standing media partners, GIM International was once again on the scene for an extensive update on the company’s short- and long-term developments. Here are some of the key highlights from the large-scale industry gathering.
Since being founded as a manufacturer of geodetic measuring instruments, Trimble has evolved into a company servicing a much broader audience, not only in construction but also in adjacent fields. However, the company now seems to be narrowing its focus somewhat, as it recently sold an 85% stake in its agriculture business to AGCO, a major American agricultural machinery manufacturer.
Data-centric workflow
The geospatial component still forms the core of the company, however, as surveying plays a key role in Trimble’s solutions and activities. These can be described as ‘connecting the physical with the digital’, with the Connect platform serving as the underlying foundation.
The connection between the field and the office was clearly on display at the Offsite Expo, which – just as in every ‘even’ year – was part of the 2024 Dimensions event. Attendees headed outdoors to a 6-hectare site a 30-minute drive from Las Vegas, where they could see hands-on demonstrations of machine control, monitoring, mapping and positioning in action. Hardware and software solutions were presented as equally important pieces of the data-centric workflow. “We try to tell a story around data,” said Riley Smith, Trimble’s marketing director, monitoring, mining and tunnelling, during the guided tour of the Offsite Expo for media partners.
Sensors and signals
The integration of sensors to enable autonomous solutions was a recurring topic
in Las Vegas, not only during the demos at the Offsite Expo but also in many of the presentations and workshops throughout the gathering of 7,000 participants. “Point clouds are great, until you have to do something,” one participant remarked. In recognition of this sentiment, Trimble is on a mission to simplify the geospatial data workflow and broaden its reach by reducing the need for users to have a background in mapping or surveying.
In the architecture, engineering and construction (AEC) sector, Trimble’s geospatial technology-based solutions are tailored to what surveyors need on site in order to make 3D models that serve as dependable starting points for construction, thus saving costs. Besides sensor-derived spatial data, there are also developments on the GNSS receiver side. While little thought is usually given to the receiving and processing of GNSS signals, even just a 1% or 2% improvement in signal reception can mean huge gains for land surveyors. For example, faster fix times and fewer interruptions enable more precise and dependable survey results at lower costs, thus enhancing accuracy, reliability and efficiency, especially in challenging environments.
Attracting new talent
The search for skilled young professionals was another topic that received attention. Smith, who has a BSc in Geomatics, stated: “I remember telling people what I was studying, and no one understood what it was. That’s why I feel an importance to educate the world on the profession.” Rob Painter, Trimble’s CEO, saw an opportunity to
attract new people thanks to the similarities between what the geospatial industry is offering and the technology today’s youngsters use in daily life. “We are maybe one of the best kept secrets, but we don’t want to be a best kept secret,” he said. He was referring specifically to Trimble’s solutions to reduce man-hours, but this definitely also holds true for the surveying profession as a whole.
Rob Bisio, senior vice president field systems at Trimble, explained that he sees a role for geospatial professionals as ambassadors for their work: “People who are already in the field are very excited about what they do. We have many distribution partners that give guest lessons to classrooms.”
Meaningful implementation of AI
Trimble Dimensions provided an interesting glimpse into the company’s latest efforts to integrate artificial intelligence (AI) into construction workflows. Trimble is leveraging its extensive expertise, innovative portfolio and domain-specific datasets to lead this transformation and reshape the construction industry. In Las Vegas, the company unveiled real-world AI applications that are automating processes, boosting productivity, enhancing safety, reducing costs and improving efficiency.
Trimble CEO Rob Painter welcomed Brendan Hunt onto the main stage.
One notable example in this context is the AI integration within Trimble Business Center, a survey CAD software solution designed to streamline repetitive and previously labour-intensive activities like point cloud segmentation and classification when processing reality capture data. This enhancement frees up more time for professionals to focus on strategic tasks, accelerating project timelines and improving data accuracy. Likewise, Trimble pairs its hardware with AI for applications ranging from road inspections and asset management to stockpile volume monitoring. By automating feature extraction and delivering actionable insights, these solutions empower informed decision-making throughout the construction lifecycle.
Connected, intelligent systems
It became clear during this year’s edition of Trimble Dimensions that the future lies in connected, intelligent systems which address the most pressing challenges – not only in the construction industry, but also in other sectors. Geospatially skilled professionals play a pivotal role in this transformation, as their expertise ensures the accurate capture, analysis and application of the data required to maximize the potential of the emerging AI-powered solutions. By delivering practical applications with measurable outcomes, Trimble aims to set a benchmark for more efficient, cost-effective and data-informed construction practices. But as Mark Schwartz, senior vice president, AECO software, commented, the company is purposely not moving too fast: “We are delivering AI solutions at a pace and in a way that keeps it manageable for users.”
Accessible data
Throughout Trimble Dimensions 2024, one thing that struck me was the ambition that resounded at multiple moments. “We have to support an ecosystem of open data standards,” stated Bisio. Schwartz echoed this, talking about putting effort into more and more open standards. “Our biggest competitors are here!” he pointed out, to illustrate that the company’s commitment to openness is for real. Indeed, Trimble has shaped its own ecosystem in line with the motto ‘No siloed data anymore!’. This is all part of opening up the overarching physical-digital-physical workflow. Schwartz: “We aim to lower the barrier of getting professionals working with our technology.”
All that data needs to be stored securely somewhere. “Everyone will agree that data is the foundation of every project,” said Chris Peppler, vice president product & platform, during the opening keynote session, adding that “only a few years ago, construction was sceptical about the cloud”. Fortunately, he noted, this has changed since then, and Trimble’s own ability to navigate all the data is also continuously improving in line with the rapid pace of advancement.
An event with many dimensions
Trimble Dimensions 2024 included many other highlights, such as the presence of Brendan Hunt – actor, writer and co-creator of the hit series Ted Lasso – who took the stage as a featured speaker during Rob Painter’s keynote. Not all the topics can be mentioned here due to space limitations, but they provided plenty of inspiration for future articles in GIM International, so rest assured that they will receive sufficient coverage over the coming months. In conclusion, the 2024 edition of Trimble’s annual gathering in Las Vegas was once again characterized by great meetings, conversations, presentations and other forms of acquiring and exchanging knowledge. It was indeed an event with many dimensions!
Seafloor Systems was among the many partners showcasing their products through live demonstrations at the Offsite Expo.
Trimble’s experts at the Offsite Expo provided a hands-on experience, testing solutions in real-world scenarios and simulated projects for construction and surveying.
Kuching, Sarawak. (Image courtesy: Shutterstock)
FIG Commission 7 Annual Meeting on Cadastre and Land Management
The FIG Commission 7 (C7) Annual Meeting (AM) was held in Kuching, Sarawak, Malaysia, at the Riverside Majestic Hotel from 24 to 26 September. This year’s edition of this significant event was hosted by Universiti Teknologi Malaysia and the Sarawak Survey Department. The meeting was convened by Prof Alias Abdul Rahman, and was opened by the Premier of Sarawak. The gathering fostered collaboration and the exchange of ideas among 250 land and surveying professionals in the field, who enjoyed the beautiful riverside setting and the rich cultural backdrop of Kuching, which is known for its vibrant history and diverse heritage.
The C7 AM was part of Geoinformation Week Malaysia 2024, bringing together several big events in one location. This created a dynamic platform for knowledge sharing and collaboration. The week included the Land Administration Domain Model (LADM) & 3D Land Administration International Workshops, the FIG Commission 5 (C5) Annual Meeting, and discussions on the UN Habitat Social Tenure Domain Model (STDM). Additionally, a pre-conference tutorial programme took place on 23 September that included a full-day Land Administration for Climate Action event hosted by Kadaster
Netherlands and ITC University of Twente. The C7/C5 and 3D/LADM events were characterized by a vibrant atmosphere for exchanging ideas and best practices in geoinformation and land administration, and attracted approximately 50 attendees.
Themes and topics that were covered during the Annual Meeting were:
• Fit for Purpose Land Administration (FFPLA)
• Women’s Access to Land (and Administration)
• The UN’s Framework for Effective Land Administration (FELA)
• Artificial Intelligence and Remote Sensing Advances (AI&RS4LA)
• 3D Land Administration (3DLA) and Digital Twins (in the LADM stream)
• The Land Administration Domain Model (LADM)
• Digital Transformation in Land Administration (including integration with Land Use Planning)
• Cybersecurity, national security and sovereignty
• Cadastral data quality and open data
• Education in land administration
• Comparative Cadastres
Key takeaways
• FIG can support land agencies in advocating for the role of land administration systems in delivering national security and sovereignty through transaction controls and transparency
• Land information systems are increasingly a security risk in themselves, due to digitalization
• Asia continues to rise rapidly, both economically and technologically, supported by land administration, although many countries still lag behind with endemic capacity and institutional problems. Climate change response can unite
• Sarawak provides an excellent example of how land tenure, value, planning and development functions are integrated into a single agency
• Sarawak Survey Department, backed by the central Malaysian government, is playing a leading role in surveying and registering communal lands, free of charge for those communities.
More information www.fig.net
The range and relevance of the ICA commissions
The International Cartographic Association (ICA) is a leading global organization dedicated to advancing the science and art of cartography. Founded in 1959, the ICA aims to promote research, education and application of cartographic knowledge through diverse, specialized commissions. These commissions reflect the expansive range of contemporary cartographic interests and technologies, from traditional mapmaking techniques to modern digital and interactive mapping solutions. As the field of cartography has evolved to encompass various forms of spatial data representation, these commissions have become increasingly relevant in addressing the changing needs of society, technology and science.
Range of ICA commissions
The ICA commissions cover an extensive array of cartographic and geospatial topics, each focusing on specific areas of research, development and application. Some prominent commissions include:
1. Education and Training: This commission addresses the growing need for cartographic education, both in formal institutions and through professional development programmes. It is instrumental in developing curricula, teaching materials and standards for cartography at different educational levels.
2. Map Design: With cartographic visualization at its core, this commission explores principles and methods for effective map design, focusing on elements like aesthetics, readability and user engagement. It encourages innovation in the visual representation of data, crucial for maps that are both informative and engaging.
3. Cartography and Sustainable Development: This commission promotes the role of maps in supporting global sustainability efforts. By developing cartographic practices aligned with the United Nations’ Sustainable Development Goals (SDGs), the commission fosters awareness, facilitates data-driven decision-making and encourages responsible resource management worldwide.
4. Cartography in Early Warning and Crisis Management: Recognizing the importance of maps in responding to emergencies, this commission develops methods for using maps to communicate risks and support crisis management efforts. The commission’s work is particularly relevant for natural disaster preparedness and response, highlighting the critical role of spatial data in saving lives.
5. User Experience: This commission focuses on optimizing map usability, accessibility and engagement. By studying how users interact with maps, it develops guidelines for effective design, improving functionality and intuitiveness across digital and physical cartographic products, ultimately enhancing user satisfaction and decision-making.
6. Geovisualization: This commission advances techniques for visually representing complex spatial data, enabling users to interpret geographic patterns, trends and relationships more effectively. Focusing on dynamic, interactive and multidimensional mapping, it supports innovation in visual analytics, aiding fields like urban planning, environmental monitoring and data science.
Relevance of ICA commissions
The relevance of the ICA’s commissions lies
in their ability to address both contemporary challenges and future trends in cartography. With the explosion of spatial data and its centrality in everything from business logistics to climate science, cartography has become indispensable across multiple sectors. The commission structure allows for focused attention on specific issues, fostering innovations that impact policy, education, public awareness and environmental management.
For instance, in the field of public health, spatial data has become critical for tracking disease outbreaks and optimizing resource allocation. The ICA’s emphasis on crisis management maps plays a crucial role in equipping decision-makers with tools to quickly visualize and respond to health emergencies. Additionally, commissions focusing on digital mapping and data visualization enhance public access to data through user-friendly interfaces, promoting transparency and understanding among diverse audiences.
Commissions such as Map Design and User Experience are significant in today’s digital environment where maps are ubiquitous across digital devices. This focus helps ensure that cartographic products are not only accurate but also user-centred, facilitating better decision-making and navigation for the general public.
The ICA’s commissions also address ethical considerations, such as privacy in mapping technologies, accuracy and bias, which are essential as maps are increasingly used in decision-making. Ethics and SDG-focused commissions drive efforts to align cartography with global sustainability initiatives, reinforcing the ICA’s commitment to socially responsible cartographic practices.
In summary, the ICA commissions embody the vast and varied nature of cartography as a discipline, allowing for specialized research and practical applications that address the evolving demands of a data-driven world. The commissions enhance the field’s relevance by ensuring that cartographic science and practices stay responsive to societal needs, technological advancements and ethical considerations.
More information https://icaci.org/commissions
When science meets sustainable development objectives
ISPRS publishes a number of leading journals on the topics of Earth observation and geoinformation. They provide a channel of communication for scientists and professionals working in relevant disciplines, helping them to make an impact by sharing their knowledge.
ISPRS Journal of Photogrammetry and Remote Sensing
The ISPRS Journal of Photogrammetry and Remote Sensing (IJPRS), the historical and flagship journal of ISPRS, has an Impact Factor of 10.6 and a CiteScore of 21. This journal offers authors two choices to publish their research: the first is through journal subscription, making articles available to journal subscribers only; the second is through Golden open access with an article processing charge of CHF3,310, making articles freely available to the wider public in addition to journal subscribers.
In 2023, 259 papers were published with a rejection rate of 88.6%. There was a 10% increase in submissions from 2022 to 2023, while the number of accepted papers remained stable, showing that the high quality of the accepted papers was preserved. From 2023 to 2024, there was a further 19% increase in submissions, underlining the attractiveness of this journal in a competitive field.
Illustrating the coverage by IJPRS of key topics, three special issues were recently published in very disparate fields: ‘innovations in SAR image analysis’, ‘planetary remote sensing’, and ‘imagery analytics for understanding human-urban infrastructure interactions’. Another issue is still open: ‘vision language models for remote sensing analysis and interpretation’. The topics for all special issues are carefully chosen to ensure maximum relevance, and especially target emerging or interdisciplinary topics.
Golden Open Access ISPRS Open Journal of Photogrammetry and Remote Sensing
The Golden Open Access ISPRS Open Journal of Photogrammetry and Remote Sensing, launched in 2021, successfully applied for indexing in Scopus. With a first CiteScore of 5.5 (2023), the journal is already ranked among the top 25% of journals in the categories ‘Earth and planetary sciences (miscellaneous)’ and ‘geography, planning and development’.
A special issue on ‘topographic mapping from space’, edited by the organizers of the upcoming Karsten Jacobsen workshop, will soon be open for submissions. The current article processing charge for papers is CHF1,800. Authors of articles submitted by 31 December 2024 at the latest will receive a 25% discount on article processing charges. An additional 50% discount applies to submissions for the special issues.
ISPRS Journal of Photogrammetry and Remote Sensing, along with the open access version.
ISPRS International Journal of Geo-Information
The ISPRS International Journal of Geo-Information (IJGI) focuses solely on the geospatial information sciences. It has an Impact Factor of 2.8 and a CiteScore of 6.9.
IJGI is fully owned by ISPRS and upholds the same quality criteria as the other ISPRS journals. Likewise, the journal’s development is discussed with the publisher, but its scientific policy and direction are discussed and decided only within ISPRS, together with the editor-in-chief, to ensure the journal’s high scientific quality (e.g. by reducing the number of special issues).
In 2023, the rejection rate for the regular submissions was 75%. So far, in 2024, the rejection rate is over 80%. In 2023, 176 papers were published, and this figure is likely to be close to 200 in 2024. For papers submitted until 31 December 2024, the article processing charge is CHF1,700.
More information https://isprsistanbul2025.org/
TRIMBLE R980 GNSS SYSTEM
Elevate survey productivity with unmatched performance and Trimble trusted workflows. With a full suite of enhanced connectivity features, the Trimble® R980 GNSS system sets a new benchmark in versatility and enables you to tackle the most demanding GNSS environments to get the job done.