Figure 13. Output example of proposed work: (b) output image

3.1.2 Subsurface Defects Detection
Subsurface defects typically result in corrosion-induced deterioration of the reinforcement, and their continued growth can lead to a loss of structural integrity [25]. Delamination is therefore one of the most critical defects assessed during bridge deck inspections. Traditional delamination detection methods, such as hammer sounding and chain dragging, are labor-intensive, time-consuming, and require maintenance and protection of traffic (MPT). Infrared thermography (IR) has been developed to detect delamination and provides fast and effective inspections with reasonable accuracy [26] [27] [28]. An internal delamination acts as an insulating layer that heats and cools at a different rate from the surrounding concrete, and the resulting thermal contrast in IR images can be identified as delamination [29].

This section proposes a semantic image segmentation approach based on a deep learning architecture to locate subsurface defects and quantify the defected areas in IR images. The method assigns each pixel in an infrared image a class label (defected or sound), so that defected areas are segmented from sound concrete areas and can then be localized and measured. Achieving this requires pixel-level separation. Semantic pixel-wise segmentation is a state-of-the-art technique that has been applied to image segmentation in many studies [30] [31], and because infrared images are relatively simple in content, it is expected to produce acceptable results here. DeepLabV3+ with an Xception backbone is selected as the network architecture for this study. Details of the proposed method are provided in the following sections.
To test the proposed method, data collected by McLaughlin et al. [32] and data collected from BEAST are used in this study. Unmanned ground and aerial vehicles equipped with thermal cameras were used to collect 700 infrared images (512 x 640 pixels) from four reinforced concrete bridges. The dataset contains 361 images with subsurface defects and 339 images without. Images are resized to 256 x 256 pixels to train the network, and data augmentation is used to generate additional training data. Figure 14 shows example images. Since the proposed method detects subsurface defects at the pixel level, each training image is labeled pixel by pixel as one of two classes, defected or sound. A pixel-wise image labeling toolbox in MATLAB [33] was used to label the images, and the labeling process was supervised by experts, as shown in Figure 15.
Figure 14. Top row: images with subsurface defects; bottom row: images without subsurface defects
Figure 15. Pixel-wise Labeling for Data
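To make the preprocessing step concrete, the following is a minimal tf.data sketch of the resizing and flip-based augmentation described above. The file paths, dataset layout, batch size, and augmentation choices are illustrative assumptions rather than the exact pipeline used in this study.

```python
# Illustrative preprocessing sketch (assumed layout: PNG image/mask pairs on disk).
import tensorflow as tf

IMG_SIZE = 256  # images are resized from 512 x 640 to 256 x 256 for training

def load_pair(image_path, mask_path):
    image = tf.io.decode_png(tf.io.read_file(image_path), channels=3)
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) / 255.0  # resize returns float32
    mask = tf.io.decode_png(tf.io.read_file(mask_path), channels=1)
    # Nearest-neighbor resizing keeps the mask strictly binary (0 = sound, 1 = defected)
    mask = tf.image.resize(mask, (IMG_SIZE, IMG_SIZE), method="nearest")
    return image, tf.cast(mask, tf.int32)

def augment(image, mask):
    # Apply the same random horizontal flip to image and mask so labels stay aligned
    flip = tf.random.uniform(()) > 0.5
    image = tf.cond(flip, lambda: tf.image.flip_left_right(image), lambda: image)
    mask = tf.cond(flip, lambda: tf.image.flip_left_right(mask), lambda: mask)
    return image, mask

def make_dataset(image_paths, mask_paths, training=True):
    ds = tf.data.Dataset.from_tensor_slices((image_paths, mask_paths))
    ds = ds.map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
    if training:
        ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
    return ds.batch(8).prefetch(tf.data.AUTOTUNE)
```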
The network is developed in Python 3.7 with TensorFlow [34]. The dataset is split into training (75%), validation (15%), and testing (15%) subsets. The results are evaluated by loss, accuracy, and mean intersection over union (mIoU). IoU is a commonly used metric for evaluating segmentation accuracy and is defined by the following equation:
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN} \tag{3}$$
where TP, FP, and FN denote true positives, false positives, and false negatives, respectively.
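As a quick sanity check of Equation (3), the IoU for a binary prediction mask can be computed directly. The NumPy sketch below is illustrative; the array names and the small epsilon are assumptions.

```python
# Equation (3) for a binary (defected vs. sound) mask; mIoU averages this over both classes.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU = TP / (TP + FP + FN) for binary masks with 1 = defected pixel."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    return tp / (tp + fp + fn + 1e-9)  # small epsilon avoids division by zero
```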
Loss, accuracy, and mIoU are calculated and tracked for each epoch during training and validation. Figure 16 shows the training and validation metrics. As shown in Figure 16(d), the best training accuracy and mIoU are 99.36% and 0.98, and the best validation accuracy and mIoU are 97.96% and 0.96, respectively. The best model is saved and applied to the unseen testing dataset, yielding a test accuracy of 97.83% and an mIoU of 0.95. Figure 17 shows examples of detected subsurface defect areas. Images in the top row are input IR images with defected areas; images in the bottom row are the output of the proposed network, where white pixels are the detected defected areas.
Figure 16. (a). Loss value, (b). Accuracy, (c). Mean intersection over union, (d). The best accuracy and mIoU achieved for training, validation and testing
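For readers who want to reproduce this kind of per-epoch tracking, the sketch below shows one way to monitor accuracy and mIoU and checkpoint the best model with tf.keras. It assumes `model` is the DeepLabV3+ (Xception backbone) network with a two-class softmax output and that `train_ds` and `val_ds` are the prepared datasets; the optimizer, loss, and hyperparameters are assumptions, not the reported configuration.

```python
# Illustrative training/metric-tracking sketch; `model`, `train_ds`, and `val_ds` are assumed.
import tensorflow as tf

class ArgmaxMeanIoU(tf.keras.metrics.MeanIoU):
    """MeanIoU expects class indices, so take the argmax of the softmax output first."""
    def update_state(self, y_true, y_pred, sample_weight=None):
        return super().update_state(y_true, tf.argmax(y_pred, axis=-1), sample_weight)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy", ArgmaxMeanIoU(num_classes=2, name="miou")],
)

# Keep only the weights with the best validation mIoU, then evaluate on the held-out test set
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_miou", mode="max", save_best_only=True)
history = model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[checkpoint])
```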
Figure 17. Top: input IR images, Bottom: detected and quantified areas with subsurface defects (white pixels)
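The quantification step illustrated in Figure 17 amounts to counting the white (defected) pixels in the output mask. A minimal sketch is given below; the pixel-to-area scale factor is a hypothetical parameter that depends on the camera standoff distance and is not specified here.

```python
# Quantify the defected area implied by a binary output mask (nonzero = defected pixel).
import numpy as np

def defect_area(mask: np.ndarray, area_per_pixel: float = 1.0) -> float:
    """Return defected pixel count scaled by an assumed area-per-pixel factor."""
    return np.count_nonzero(mask) * area_per_pixel
```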
To further implement the developed model, a sliding window can be applied to large-scale IR images: the delaminated areas in each local window are detected, and the total delaminated area in the large image is then calculated. As shown in Figure 19, small images are stitched together into a large image of 1536 x 2560 pixels, and the trained model is applied with a sliding-window technique that scans through the large image. The output image shows the detected and quantified delamination areas. Details of the methodology can be found in our publication [35] or Appendix III.
Figure 19. Implementation of the Proposed Method
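A minimal sketch of this sliding-window inference is shown below, assuming non-overlapping 256 x 256 windows and a large image whose dimensions are multiples of the window size; the per-window preprocessing is assumed to match the training pipeline.

```python
# Sliding-window inference over a large stitched IR image (e.g. 1536 x 2560 pixels).
import numpy as np

WIN = 256  # window size matches the training image size

def predict_large_image(model, image: np.ndarray) -> np.ndarray:
    """image: (H, W, C) float array scaled to [0, 1]; H and W must be multiples of WIN."""
    h, w = image.shape[:2]
    full_mask = np.zeros((h, w), dtype=np.uint8)
    for top in range(0, h, WIN):
        for left in range(0, w, WIN):
            window = image[top:top + WIN, left:left + WIN]
            probs = model.predict(window[np.newaxis, ...], verbose=0)[0]
            # argmax over the class dimension: 0 = sound, 1 = defected
            full_mask[top:top + WIN, left:left + WIN] = np.argmax(probs, axis=-1)
    return full_mask  # summing this mask gives the total defected pixel count
```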