www.as‐se.org/ssms Studies in Surveying and Mapping Science (SSMS) Volume 2, 2014
High Resolution Imagery and Three‐line Array Imagery Automatic Registration for China’s TH‐1 Satellite Imagery Qing Xu1, Chaozhen Lan2, Xun Geng*3, Shuai Xing4, Dong Wang5, Pengcheng Li6 Zhengzhou Institute of Surveying and Mapping, 450052 Zhengzhou, China 1
xq@szdcec.com; 2lan_cz@163.com; *3gengxun.rs@gmail.com; 4xing972403@163.com; 5jfj_dongfeng@163.com; 6lpclqq@163.com

Abstract

An automatic image registration method for high resolution (HR) imagery and three-line array imagery from China's TH-1 mapping satellite is proposed. The 2 m resolution HR imagery is first normalized to the 5 m resolution of the three-line array imagery. Then, using the precise point prediction model (P3M) matching method, thousands of corresponding points can be matched. Based on these matched points, feature points collected on the HR imagery can be transferred onto the three-line array imagery automatically. Consequently, most of the feature collection work can be carried out on HR imagery without stereo measurement devices, which helps derive 1:50 000 scale topographic maps efficiently. Experimental results demonstrate the feasibility of our method.

Keywords

TH-1 Satellite Imagery; Three-line Array Imagery; Image Registration; Precise Point Prediction Model (P3M)
China's TH-1 Satellite Imagery

China's first stereo mapping satellite, TH-1, was launched on August 24, 2010. TH-1 is equipped with one high resolution (HR) camera and one three-line CCD array camera (see Figure 1). The ground sample distance (GSD) of the HR imagery is 2 m, while the GSD of the three-line array imagery is 5 m. The stereo viewing angle of the three-line array CCD camera is 25 degrees, and the base-to-height ratio is 1.0, which provides good height positioning accuracy. The swath width of both the three-line array imagery and the HR imagery is 60 km. The image size of the three-line array imagery is 12000 × 12000 pixels and that of the HR imagery is 32000 × 32000 pixels. The performance of TH-1 satellite imagery is summarized in Table 1.
FIG. 1 HR AND THREE‐LINE ARRAY CAMERA OF TH‐1
The planimetric positioning accuracy of TH-1 satellite imagery without GCPs is 15 m and the height positioning accuracy is 6 m [1]. Currently, the photogrammetric processing for DEM and DOM derivation is an automatic procedure, while feature collection for DLG production still requires considerable manual work. Because the resolution of the HR imagery is higher than that of the three-line array imagery, it is preferable to collect features on the HR imagery. However, HR imagery has no stereoscopic coverage capability and is mainly used for viewing purposes. In this paper, an automatic image registration method for HR imagery and three-line array imagery is proposed. Based on our method, feature
points collected on the HR imagery can be transferred onto the three-line array imagery automatically.

TABLE 1 TH-1 SATELLITE IMAGERY PARAMETERS

Three-line Array Imagery:
  GSD: 5 m
  Swath Width: 60 km
  Stereo Angle: 25 degrees
  B/H: 1.0
  Radiometric Resolution: 10 bit
  Image Size: 12000 × 12000

High Resolution Imagery:
  GSD: 2 m
  Swath Width: 60 km
  Radiometric Resolution: 8 bit
  Image Size: 32000 × 32000
High Resolution Imagery and Three-line Array Imagery Registration

The basic principle of our image registration method is as follows: 1) First, match a number of corresponding points between the HR imagery and the three-line array imagery. 2) Assisted by the matched points, transfer the feature points on the HR imagery onto the three-line array imagery. 3) The feature point transfer procedure is itself an image matching procedure, realized by the precise point prediction model (P3M) matching method described below.

Precise Point Prediction Model (P3M) Matching Method

In a local area, the geometric relationship between two stereo images can be approximated by an affine transformation model:
x_2 = a_0 + a_1 x_1 + a_2 y_1
y_2 = b_0 + b_1 x_1 + b_2 y_1        (1)

Given an image point P on image I1, the corresponding point of P on image I2 can be predicted by the affine transformation model. Normally, four neighbouring matched points around point P are enough to solve for the unknown parameters of the affine transformation model. Clearly, the more neighbouring matched points there are, the higher the prediction accuracy that can be achieved. Moreover, the distances between point P and its neighbouring matched points can be used to set the search window size. The P3M matching model therefore consists of two parts: point prediction with the local affine model of Eq. (1), and setting of the search window size from the distances to the neighbouring matched points.
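As a minimal sketch (with illustrative helper names, not the authors' code), the local affine model of Eq. (1) can be fitted to the neighbouring matched points by least squares and then used to predict a corresponding point:

```python
def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [v[i]] for i, row in enumerate(m)]  # augmented matrix
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_affine(pts1, pts2):
    """Least-squares fit of Eq. (1): x2 = a0 + a1*x1 + a2*y1, y2 = b0 + b1*x1 + b2*y1.

    pts1, pts2: lists of (x, y) correspondences, at least 3 non-collinear points.
    Returns (a0, a1, a2, b0, b1, b2).
    """
    # The two coordinates are independent, so build the 3x3 normal equations
    # A^T A p = A^T b once and solve them twice; a design row is [1, x1, y1].
    ata = [[0.0] * 3 for _ in range(3)]
    atb_x = [0.0] * 3
    atb_y = [0.0] * 3
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        row = (1.0, x1, y1)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atb_x[i] += row[i] * x2
            atb_y[i] += row[i] * y2
    return (*solve3(ata, atb_x), *solve3(ata, atb_y))

def predict(params, x1, y1):
    """Predict the corresponding point of (x1, y1) on image I2."""
    a0, a1, a2, b0, b1, b2 = params
    return a0 + a1 * x1 + a2 * y1, b0 + b1 * x1 + b2 * y1
```

With four or more well-distributed neighbouring matched points the overdetermined system averages out small matching errors, which is why prediction accuracy improves as more neighbours become available.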
Here, we adopt the strategy of matching good feature points first and then matching more and more corresponding points level by level. In the end, most of the good feature points can be matched and used to construct the P3M matching model. The P3M matching procedure is summarized as follows:

Step 1: Adjust the brightness and contrast of image I1 and image I2 in an image pre-processing procedure.
Step 2: Use the SURF algorithm [2] to match initial corresponding points.
Step 3: Extract feature points from image I1 with the Shi-Tomasi algorithm [3]. In the first matching pass the number of feature points can be relatively small, while in the last pass it should be large.
Step 4: For each extracted feature point P on image I1, use a k-d tree to search for the neighbouring matched points and establish the P3M matching model.
Step 5: Use the P3M matching model to predict the corresponding point and the NCC measure [4] to match the image points.
Step 6: Adjust the feature point extraction parameters and the image matching parameters, and perform the image matching iteratively until most of the feature points are matched.
Step 7: With the matched feature points, any image point on image I1 has enough neighbouring matched points around it, so the prediction accuracy can be very high (1 to 3 pixels). Consequently, the exact corresponding point can be found quickly and accurately.
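The NCC measure used in Step 5 can be sketched as follows (the standard textbook definition, not the paper's own implementation): the correlation of two mean-centred grey-value patches, which equals 1.0 for a perfect match.

```python
import math

def ncc(patch1, patch2):
    """Normalized cross-correlation of two equally sized grey-value patches
    (lists of lists). Returns a value in [-1, 1]."""
    v1 = [g for row in patch1 for g in row]
    v2 = [g for row in patch2 for g in row]
    m1 = sum(v1) / len(v1)
    m2 = sum(v2) / len(v2)
    d1 = [g - m1 for g in v1]  # mean-centred grey values
    d2 = [g - m2 for g in v2]
    num = sum(a * b for a, b in zip(d1, d2))
    den = math.sqrt(sum(a * a for a in d1) * sum(b * b for b in d2))
    return num / den if den else 0.0
```

Because NCC is invariant to linear radiometric change (gain and offset), the brightness and contrast adjustment of Step 1 only needs to bring the two images roughly into line.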
33
www.as‐se.org/ssms Studies in Surveying and Mapping Science (SSMS) Volume 2, 2014
FIG. 2 PRECISE POINT PREDICTION (P3M) MATCHING METHOD
Search for Neighbouring Matched Points

To construct the P3M matching model, the neighbouring matched points around an image point must be found first. Let an image point Pi lie on image I1 with its corresponding point Qi on image I2. First, the neighbouring matched points around Pi are searched for within radius R0. If there are enough matched points to construct the P3M matching model (more than four points), the search stops. If fewer than four points are found, the search radius is enlarged to R1 and the search repeated. This iterative procedure continues until enough matched points are found; we call it the brute-force search method. When there are only a few hundred matched points, the brute-force search works well. However, after several matching passes there may be thousands or even millions of matched points, so the neighbour search algorithm must be optimized. In this paper, a k-d tree [5] is used to search for neighbouring points. The neighbour search and P3M prediction times obtained with the k-d tree are given in Table 2. The results show that even with a large number of matched points serving as control points for the P3M matching model, the time needed to search for neighbours and predict 200,000 points remains acceptable.

TABLE 2 SEARCH FOR NEIGHBOURING MATCHED POINTS
Known Matched Points | Points to be Matched (×10000) | Search Time (s) | P3M Prediction Time (s)
18683                | 1                             | 0.327           | 0.851
18683                | 2                             | 0.573           | 1.501
18683                | 5                             | 1.245           | 3.090
18683                | 10                            | 2.094           | 5.264
18683                | 20                            | 3.792           | 9.144
81674                | 1                             | 0.703           | 1.966
81674                | 2                             | 1.310           | 3.762
81674                | 5                             | 2.921           | 7.894
81674                | 10                            | 4.758           | 13.217
81674                | 20                            | 7.681           | 20.873
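The enlarging-radius neighbour search described above can be sketched as follows (brute-force variant; the function name and default radius are illustrative). In the paper, a k-d tree replaces the inner linear scan once the point set grows large.

```python
def neighbours_for_p3m(p, matched_points, r0=50.0, growth=2.0, min_points=4):
    """Return at least `min_points` matched points near p = (x, y), enlarging
    the search radius R0 -> R1 -> ... until enough are found (or every point
    has been returned)."""
    r = r0
    while True:
        # Linear scan over all matched points within the current radius.
        near = [q for q in matched_points
                if (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 <= r * r]
        if len(near) >= min_points or len(near) == len(matched_points):
            return near
        r *= growth  # not enough neighbours: enlarge the radius and retry
```

Each call scans all matched points, so the cost grows linearly with the number of known matches; the k-d tree reduces each neighbour query to roughly logarithmic time, which is what keeps the timings in Table 2 acceptable.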
Image Normalization

In the P3M matching model introduced above, NCC is used to match corresponding points. However, as shown in Table 1, the GSD and image size of the HR imagery and the three-line array imagery differ, so image normalization must be performed first. The three images of the three-line array camera are named SXZ1, SXZ2 and SXZ3. Taking the HR imagery and the SXZ1 imagery as an example, the image normalization procedure is as follows:

Step 1: Generate the image pyramids of the HR imagery and the SXZ1 imagery.
Step 2: Match the level-4 pyramid image of the HR imagery (2000 × 2000) and the level-3 pyramid image of the SXZ1 imagery (1500 × 1500) with the SURF algorithm.
Step 3: From the points matched by SURF, calculate the overlap area between the HR imagery and the SXZ1 imagery and the normalization scale.
Step 4: Crop and scale the HR imagery to the same image size as SXZ1, producing two normalized images. In practice, the image matching is performed on these two normalized images.

Semi-automatic Feature Collection on High Resolution Imagery

Through image normalization and the P3M matching method, the registration of HR imagery and three-line array imagery can be realized. To calculate the 3D coordinates of a feature point, the HR imagery is matched to SXZ1 and SXZ3 respectively. Consequently, after a feature point is collected on the HR imagery, its corresponding points on the SXZ1 and SXZ3 images can be predicted and matched, enabling semi-automatic feature collection on HR imagery. The procedure is summarized as follows:

Step 1: Match the HR imagery to SXZ1 and SXZ3 respectively.
Step 2: Construct the P3M matching models for the HR-SXZ1 and HR-SXZ3 image pairs.
Step 3: Collect feature points on the HR image.
Step 4: Using the P3M matching method, transfer the feature points on the HR imagery onto the SXZ1 and SXZ3 imagery.
Step 5: Calculate the 3D coordinates of the feature points by forward intersection.
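The forward intersection in Step 5 can be sketched as follows. The two image rays (the SXZ1 and SXZ3 views of the same feature point) generally do not meet exactly, so a common choice, assumed here, is the midpoint of their closest approach; the camera centres and ray directions in the example are illustrative, not actual TH-1 geometry.

```python
def intersect_rays(c1, d1, c2, d2):
    """Midpoint of closest approach of two rays X = c + t*d (c, d: 3-tuples)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Solve for t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    w = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [x + t1 * y for x, y in zip(c1, d1)]  # closest point on ray 1
    p2 = [x + t2 * y for x, y in zip(c2, d2)]  # closest point on ray 2
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]
```

The distance between p1 and p2 also serves as a useful blunder check: a large gap indicates that the predicted points on SXZ1 and SXZ3 are not actually the same ground feature.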
FIG. 4 SEMI‐AUTOMATIC FEATURE COLLECTION ON HR IMAGERY
Experiments

TH-1 satellite imagery acquired on January 21, 2013 was tested. Using our image registration method, the HR imagery was matched to SXZ1 and SXZ3 through the iterative procedure. Table 3 and Figure 5 give the registration results for the HR imagery and the SXZ1 imagery. From the experimental results, some conclusions can be drawn: 1) Using the P3M matching method, feature points can be matched iteratively.
2) When there are enough matched points, more than 90% of the feature points can be matched, which indicates that our method achieves a high matching success rate. 3) 50,000 points can be matched in 317.9 seconds, which demonstrates that the matching efficiency is acceptable. 4) When enough matched points serve as control points, the search window size can be reduced to 7 × 7, which indicates that our P3M matching method predicts corresponding points accurately.

TABLE 3 IMAGE REGISTRATION OF HR IMAGERY AND SXZ1 IMAGERY
Iterative Matching Pass | Points to be Matched | Matched Points | Success Rate | Time Used (s) | Search Window Size
1                       | 1000                 | 882            | 88.2%        | 121.3         | 29 × 29
2                       | 5000                 | 4665           | 93.2%        | 134.6         | 19 × 19
3                       | 50000                | 46305          | 92.6%        | 317.9         | 7 × 7
After the HR imagery was matched to the SXZ1 and SXZ3 imagery respectively, with the matched feature points serving as control points, feature collection could be performed on the HR imagery and then transferred to the three-line array imagery. Figure 6 shows some typical features extracted by the semi-automatic feature collection method. The left side is the HR imagery, the top right is the SXZ1 imagery and the bottom right is the SXZ3 imagery. The feature points were collected on the HR imagery only and transferred to the SXZ1 and SXZ3 imagery simultaneously.
FIG. 5 IMAGE REGISTRATION OF HR AND SXZ1 IMAGERY
(a) Village (b) River
FIG. 6 FEATURE POINTS COLLECTED ON HR IMAGERY AND CONVERTED TO THREE-LINE ARRAY IMAGERY
Conclusion

In this paper, an automatic image registration method for HR imagery and three-line array imagery from China's TH-1 mapping satellite has been proposed. Based on our method, most of the feature collection work can be performed on HR imagery, which greatly decreases the manual work required on 3D stereoscopic measurement devices.

ACKNOWLEDGMENT
This work was funded by the National Basic Research Program of China (973 Program) (2012CB720000), the National Natural Science Foundation of China (41371436) and the State Key Laboratory of Geo-information Engineering of China (SKLGIE2013-M-3-5).

REFERENCES
[1] Renxiang Wang, Xin Hu, Jianrong Wang. "Photogrammetry of Mapping Satellite-1 without Ground Control Points". Acta Geodaetica et Cartographica Sinica, 42 (2013): 1-5.
[2] H. Bay, T. Tuytelaars, L. Van Gool. "SURF: Speeded Up Robust Features". Paper presented at the 9th European Conference on Computer Vision, 2006, 404-417.
[3] Jianbo Shi, Carlo Tomasi. "Good Features to Track". Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, 1994.
[4] Zuxun Zhang, Jianqing Zhang. Digital Photogrammetry. Wuhan: Wuhan University Press, 2007.
[5] C. Silpa-Anan, R. Hartley. "Optimized KD-trees for Fast Image Descriptor Matching". Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, 2008.

Qing Xu received the B.S., M.S. and Ph.D. degrees in Photogrammetry and Remote Sensing from the Zhengzhou Institute of Surveying and Mapping (ZISM), Zhengzhou, China, in 1985, 1990 and 1995 respectively. He is currently a professor at ZISM. His research interests include digital photogrammetry, 3D visualization and planetary mapping. He is a Participating Scientist of China's Chang'E Lunar Exploration program.

Chaozhen Lan received the B.S., M.S. and Ph.D. degrees in Photogrammetry and Remote Sensing from ZISM in 2002, 2005 and 2009, respectively. He is currently an assistant professor at ZISM. His research interests include digital photogrammetry and 3D visualization.