Patch-Based Semantic Labeling of Road Scene Using Colorized Mobile LiDAR Point Clouds
Abstract: Semantic labeling of road scenes using colorized mobile LiDAR point clouds is of great significance in a variety of applications, particularly intelligent transportation systems. However, many challenges, such as the incompleteness of objects caused by occlusion, overlap between neighboring objects, inter-class local similarities, and the computational burden imposed by a huge number of points, keep it an ongoing open research area. In this paper, we propose a novel patch-based framework for labeling road scenes in colorized mobile LiDAR point clouds. In the proposed framework, first, three-dimensional (3-D) patches extracted from point clouds are used to construct a 3-D patch-based match graph structure (3D-PMG), which efficiently transfers category labels from labeled to unlabeled point-cloud road scenes. Then, to rectify the transfer errors caused by local patch similarities across different categories, contextual information among 3-D patches is exploited by combining the 3D-PMG with Markov random fields. In the experiments, the proposed framework is validated on colorized mobile LiDAR point clouds acquired by the RIEGL VMX-450 mobile LiDAR system. Comparative experiments show the superior performance of the proposed framework for accurate semantic labeling of road scenes.
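The following minimal sketch is not the authors' implementation; it only illustrates the two-stage idea summarized in the abstract under simplifying assumptions: label transfer between 3-D patches is approximated by k-nearest-neighbor feature matching (standing in for 3D-PMG construction), and the MRF-based rectification is approximated by iterated conditional modes over a patch adjacency graph. The feature dimensionality, the function names `transfer_labels` and `refine_with_context`, and the parameters k, iters, and weight are all hypothetical.

```python
import numpy as np


def transfer_labels(labeled_feats, labeled_labels, unlabeled_feats, k=5):
    """Assign each unlabeled patch the majority label of its k nearest
    labeled patches in feature space (a stand-in for 3D-PMG matching)."""
    preds = np.empty(len(unlabeled_feats), dtype=int)
    for i, f in enumerate(unlabeled_feats):
        dists = np.linalg.norm(labeled_feats - f, axis=1)
        nearest = labeled_labels[np.argsort(dists)[:k]]
        preds[i] = np.bincount(nearest).argmax()
    return preds


def refine_with_context(preds, adjacency, n_labels, iters=10, weight=0.5):
    """Iterated conditional modes on a patch adjacency graph: each patch's
    label is re-estimated from its own matching vote plus its neighbors'
    current labels, mimicking the contextual rectification step."""
    labels = preds.copy()
    for _ in range(iters):
        for i, neighbors in adjacency.items():
            votes = np.zeros(n_labels)
            votes[preds[i]] += 1.0            # unary evidence from matching
            for j in neighbors:
                votes[labels[j]] += weight    # pairwise contextual support
            labels[i] = votes.argmax()
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labeled_feats = rng.normal(size=(100, 8))      # features of labeled patches
    labeled_labels = rng.integers(0, 4, size=100)  # their category labels
    unlabeled_feats = rng.normal(size=(20, 8))     # patches to be labeled
    # Toy adjacency: each patch is connected to its two ring neighbors.
    adjacency = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}

    initial = transfer_labels(labeled_feats, labeled_labels, unlabeled_feats)
    refined = refine_with_context(initial, adjacency, n_labels=4)
    print(initial, refined, sep="\n")
```

In this toy setting the refinement step simply smooths labels along the adjacency graph; the paper's actual method instead performs inference on a Markov random field defined over the 3-D patches.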