UNIVERSITY OF PITTSBURGH | SWANSON SCHOOL OF ENGINEERING | CIVIL & ENVIRONMENTAL
SUMMARY REPORT • IRISE-21-P20-02-01 • DECEMBER 2021
Improving Bridge Assessment through the Integration of Conventional Visual Inspection, Non-Destructive Evaluation, and Structural Health Monitoring Data
IRISE Consortium Impactful Resilient Infrastructure Science and Engineering
Technical Report Documentation Page

1. Report No.: FHWA-PA-2021-012-IRISE WO 01
2. Government Accession No.
3. Recipient's Catalog No.
4. Title and Subtitle: Improving Bridge Assessment through the Integration of Conventional Visual Inspection, Non-Destructive Evaluation, and Structural Health Monitoring Data
5. Report Date: December 2021
6. Performing Organization Code
7. Author(s): Amir H. Alavi, Qianyun Zhang, Saeed Babanajad, Franklin Moon, John Braley, Nenad Gucunski
8. Performing Organization Report No.: IRISE-21-P20-02-01
9. Performing Organization Name and Address: Department of Civil and Environmental Engineering, University of Pittsburgh, 3700 O'Hara Street, Pittsburgh, PA 15261; Wiss, Janney, Elstner Associates (WJE) Inc., 330 Pfingsten Rd., Northbrook, IL 60062; Center for Advanced Infrastructure and Transportation (CAIT), Rutgers, The State University of New Jersey, 100 Brett Rd., Piscataway, NJ 08854
10. Work Unit No. (TRAIS)
11. Contract or Grant No.: IRISEPITT2018
12. Sponsoring Agency Name and Address: The Pennsylvania Department of Transportation, Bureau of Planning and Research, Commonwealth Keystone Building, 400 North Street, 6th Floor, Harrisburg, PA 17120-0064
13. Type of Report and Period Covered
14. Sponsoring Agency Code
15. Supplementary Notes
16. Abstract: The purpose of this study was to establish a framework capable of integrating traditional non-destructive evaluation (NDE) and emerging automated unmanned aerial vehicle (UAV)-based techniques to provide improved performance assessment of bridges. The framework focuses on addressing the principal challenges associated with studying the service life of bridge structures: (a) the long time scales (which require accelerated aging), (b) the diverse outputs related to bridge condition (in terms of data collected through UAV, NDE, and visual inspection), and (c) an advanced data interpretation and fusion framework for automated detection and quantification of bridge surface and subsurface defects. By leveraging access to the unique dataset generated by the Bridge Evaluation and Accelerated Structural Testing (BEAST) facility, this study aimed to identify the potential synergies among bridge degradation, remaining service life, and the results obtained from multimodal sensing technologies (i.e., UAV and NDE techniques). Data processing frameworks based on deep learning and a systematic UAV data collection strategy were developed to automatically detect surface defects from HD images and subsurface defects from infrared thermography (IR) images. New multi-source NDE data fusion methods based on discrete wavelet transforms and an improved Dempster-Shafer evidence combination theory were proposed to provide a more comprehensive concrete bridge deck assessment.
17. Key Words: Bridge performance assessment; unmanned aerial vehicle; automated bridge inspection; non-destructive evaluation; bridge service life; accelerated bridge testing; machine learning; multi-sensor data fusion
18. Distribution Statement: No restrictions. This document is available from the National Technical Information Service, Springfield, VA 22161
19. Security Classif. (of this report): Unclassified
20. Security Classif. (of this page): Unclassified
21. No. of Pages: 100
22. Price

Form DOT F 1700.7 (8-72) Reproduction of completed page authorized
IRISE

The Impactful Resilient Infrastructure Science & Engineering consortium was established in the Department of Civil and Environmental Engineering in the Swanson School of Engineering at the University of Pittsburgh to address the challenges associated with aging transportation infrastructure. IRISE addresses these challenges with a comprehensive approach that includes knowledge gathering, decision making, material durability, and structural repair. It features a collaborative effort among the public agencies that own and operate the infrastructure, the private companies that design and build it, and the academic community to develop creative solutions that can be implemented to meet the needs of its members. To learn more, visit: https://www.engineering.pitt.edu/irise/.

Acknowledgements

The authors gratefully acknowledge the support of all members of IRISE. We are especially indebted to the Pennsylvania Department of Transportation and the Federal Highway Administration for their sponsorship of the project, and for the advice and assistance provided by the Project Technical Advisory Panel: Messrs. Keith Cornelius, Tom Macioce, and Rich Runyen, PennDOT; Mr. Jonathan Buck, Federal Highway Administration; Mr. Mike Burdelsky, Allegheny County; and Mr. Mike Pichura, Michael Baker International.

Disclaimer

The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of the US Department of Transportation, the Federal Highway Administration, the Commonwealth of Pennsylvania, or any other IRISE member at the time of publication. This report does not constitute a standard, specification, or regulation.
Table of Contents

1. Introduction
  1.1 Background
  1.2 Problem Statement
  1.3 Research Plan
2. Introduction of Bridge Evaluation and Accelerated Structural Testing (BEAST)
3. Proposed Two-Stage Bridge Inspection Framework
  3.1 Data-Driven Vision-Based Bridge Inspection
    3.1.1 Surface Defects Detection
    3.1.2 Subsurface Defects Detection
    3.1.3 Software for Automated Vision-Based Bridge Inspection
    3.1.4 Discussion and Future Work of Automated Vision-Based Inspection
  3.2 Nondestructive Evaluation and Multi-sensor Data Fusion
    3.2.1 BEAST Data Collection
    3.2.2 BEAST Data Processing
    3.2.3 Multi-sensor Data Fusion for NDE Results
  3.3 Comparison Between Vision-Based and NDE-Based Inspection Results
  3.4 3D Visualization of Bridge Inspection Results
4. UAV Data Collection Strategy
  4.1 Data Collection Plan
  4.2 Results and Discussion
5. Dissemination, Integration, and Interpretation of BEAST Data
  5.1 Timing Scale of BEAST Experiment
  5.2 Development of Performance Measure for a Representative PA Bridge
  5.3 Development of NDE Performance Indicators
    5.3.1 Impact Echo (IE)
    5.3.2 Ground-Penetrating Radar (GPR)
    5.3.3 Electrical Resistivity (ER)
    5.3.4 Ultrasonic Surface Wave (USW)
    5.3.5 Half Cell Potential (HCP)
    5.3.6 Multi-sensor Data Fusion for NDE
  5.4 Drone-based HD and IR Image Integration
  5.6 NDE Data Analysis Summary and Conclusion
6. Structural Inspection
7. Cost-Benefit Analysis of Proposed Bridge Inspection Framework
  7.1 Cost-Benefit Analysis of Vision-Based Bridge Inspection
  7.2 Cost-Benefit Analysis of NDE Bridge Inspection
8. Conclusion and Future Work
9. Appendix I: Methodology for Single-Class Crack Classifier
10. Appendix II: Methodology for Multi-Class Classifier for Crack and Spall Detection
11. Appendix III: Methodology for Subsurface Defects Detection
12. Appendix IV: Multi-Resource NDE Data Fusion Method
  12.1 Discrete Wavelet Transform Based Image Fusion
  12.2 Improved D-S Theory
13. Appendix V: Methodology of 3D Model Development
  13.1 Aerial Image Processing
  13.2 BEAST Facility Digital Models
14. Appendix VI: UAV Data Collection Plans
  14.1 Equipment Specifications
  14.2 Drone Altitude
  14.3 General Framework
  14.4 Drone Data Collection Plan for IR Images
15. References
1. Introduction

1.1 Background

According to the American Society of Civil Engineers (ASCE) report card [1], over 56,000 bridges in the United States are in "poor condition," and more than 180,000 concrete bridges have deteriorating surfaces. Economical and effective management of aging bridges has become a challenging task for state departments of transportation and other government agencies, and it requires a sufficient understanding of how a bridge's performance is conceived, assessed, and preserved.

Bridge performance, in general, is not well understood or defined. In practice, bridge engineers and decision-makers rely heavily on expert opinion and generalizations to define and forecast performance. Although these approaches have served the profession and research community well, they are no longer sufficient given the move toward more advanced asset management strategies to reduce life-cycle costs. Modern, quantitative management systems demand more reliable performance models and more accurate and objective estimates of the effectiveness of various interventions (i.e., maintenance, preservation, and rehabilitation).

To date, bridge deterioration has been studied almost exclusively using either (a) direct observations of the performance of operating bridges (using visual inspection, non-destructive evaluation (NDE), sensing, etc.) or (b) material-level tests on small-scale specimens. The information collected by visual inspection, physical inspection via hammer sounding and chain dragging, NDE, and material sampling can be used to make informed decisions and predictions of a bridge's remaining service life. These data are also key to making effective maintenance and rehabilitation decisions for bridge owners. However, collecting such data from an entire bridge network on a periodic basis is nearly impossible given financial and operational limitations. The financial constraints are mainly associated with the high unit cost of data collection and processing, which can range from $10 to $40 per square foot depending on the type of visual inspection, NDE, or material assessment. The operational limitations include the time required for data collection, maintenance of traffic (MOT), safety, data processing, and reporting.

To overcome the challenges in assessing bridge performance, there have recently been numerous investigations into merging fast-growing cyber-physical structural health monitoring (SHM) technologies, advanced visual inspection, and NDE techniques into infrastructure monitoring. These efforts include 1) automated data collection and processing systems that expedite field and office operations, 2) fast-scanning systems that eliminate lane closures by collecting data at full highway speed using instrumented vehicles, and 3) unmanned aerial vehicles that collect bridge data from a distance. These unmanned aerial vehicles (UAVs) are usually equipped with non-contact NDE instruments (such as infrared thermography (IR)) to gather information about the surface and subsurface of bridge elements. Despite the obvious advantages of these techniques for facilitating data collection, there is a huge gap in the
establishment of effective approaches to fuse the data acquired from all of these paradigms to make informed decisions related to assessment, management, preservation, and renewal. In essence, there is currently no comprehensive framework to integrate the results obtained from NDE, SHM, and visual inspection; they are mostly conducted and analyzed in a fragmented and piecemeal manner.

1.2 Problem Statement

Bridge condition evaluation has become a crucial reference for any decision on repair or prevention. To that end, long-term research programs at the Federal level, such as the second Strategic Highway Research Program (SHRP2) and the Long-Term Bridge Performance (LTBP) program, aim to utilize a variety of advanced visual inspection, NDE, and SHM technologies for bridge condition assessment. The main focus of these programs is the condition of the bridge deck due to its vulnerability to deterioration. The long time scales over which deterioration occurs directly challenge the relevance of the data obtained using these emerging techniques and prevent the ability to address new systems and materials as they emerge. Material-level tests, although they can be carried out relatively rapidly (over several months), fail to address the multiple and compounding factors that bridges experience in operation (live load, environment, winter maintenance, etc.). Further, due to the reduced scale of these specimens, there are many open questions about the extent to which such tests simulate the actual deterioration mechanisms as they occur within an integral bridge system.

Another limitation of current bridge deck assessment practice is the lack of a robust data fusion framework. Currently, the analysis of data from various sources during inspection or in-depth field assessment is conducted in a fragmented and piecemeal manner. In other words, each technique is used individually to evaluate the condition of the bridge rather than contributing to a complete, integrated assessment. Thus, there is a huge gap in the establishment of effective approaches to fuse the data acquired from all of these paradigms to make informed decisions related to assessment, management, preservation, and renewal.

The primary purpose of this study is to establish a framework capable of leveraging emerging bridge assessment techniques, including advanced visual inspection using UAV-based techniques and NDE, to provide improved performance assessment of bridges. In particular, the proposed framework focuses on addressing the principal challenges associated with studying the service life of bridge structures: (a) the long time scales (which require accelerated aging) and (b) the diverse outputs related to bridge condition (in terms of data collected through NDE and advanced visual inspection). The primary focus is to identify the synergies between bridge degradation, remaining service life, and the results obtained from multimodal sensing technologies (such as NDE and UAVs).

1.3 Research Plan

To tackle the issues listed above, we propose to establish a framework capable of leveraging emerging advanced visual inspection and NDE techniques to provide improved performance
assessment of bridge decks. In particular, the proposed framework is designed to achieve a more efficient and accurate inspection strategy. Through a collaboration with the Center for Advanced Infrastructure and Transportation (CAIT) at Rutgers University and Wiss, Janney, Elstner (WJE) Inc. (industry partner), this research leveraged access to the unique dataset generated by the Bridge Evaluation and Accelerated Structural Testing (BEAST) facility to identify the potential synergies between NDE and visual inspection and improve current bridge condition assessment practice. As will be discussed in the next section, the BEAST facility is mainly designed to evaluate the bridge deck and superstructure. Given the vulnerability of concrete bridge decks to deterioration, the focus of this study has been placed on the concrete deck. As shown in Figure 1, in the proposed framework, bridge decks can be inspected at two levels, as follows:

1) Preliminary Fast Survey: In this stage, a fast-scanning survey of a bridge population (e.g., bridges over a certain interstate highway between two stations) can be conducted by vision-based evaluation. The primary objectives are to a) cover the entire network using non-contact vision-based techniques, b) use the fastest possible technique for cost- and time-saving data collection, and c) avoid lane closures and safety concerns. The ultimate goal is to survey a few bridges every day within a network of bridges and flag some of the bridges for further in-depth assessment. To meet objective (a) above, two main techniques are already available. One is the vision-based technique, mainly deploying UAV systems equipped with high-definition (HD), infrared (IR), or light detection and ranging (LiDAR) cameras, alone or in combination. The other relies mainly on non-contact ground penetrating radar (GPR) systems loaded on trucks that pass over the bridge at full highway speed. Given the scope of this project, we concentrate on the application of vision-based techniques. For this stage, high-definition and infrared cameras can be mounted on UAVs to collect high-resolution image data: HD images are used to detect surface defects, and IR images are used to detect subsurface anomalies. In Pennsylvania, the condition rating code used for concrete bridge decks is PennDOT Pub #100A [2]. Based on the detected surface and subsurface defects, a condition index is determined based on Table 1 [2]. If the rating belongs to category #3, the condition is defined as "good" with light deterioration, and there is no need to conduct further inspections. Otherwise, the inspection moves to a secondary in-depth scan using various NDE technologies.

2) In-depth Deck Survey: In this stage, various NDE techniques can be applied to the bridge deck. The data collected from multiple sources are processed individually using statistical methods based on [9]. Then, results from each NDE technique are fused to develop a better understanding of bridge conditions. The vision-based results and the fused NDE results can be combined to make a final condition determination based on Table 1 [2]. If the rating belongs to category #2, the condition is defined as "fair" with moderate deterioration. If the rating belongs to category #1, the condition is defined as
"poor" with extensive deterioration. If the condition is even worse, it is defined as "serious". For the primary scanning stage, we propose using UAVs for data collection, which can significantly reduce time and labor costs. To achieve fast and objective data interpretation, machine learning-based algorithms are applied to the collected data. For the secondary scanning stage, the data collected from each NDE technique are analyzed using statistical methods. The correlated NDE results are then fused by statistics-based algorithms to provide a more reasonable and reliable result, and the vision-based results are used to assist with index assignment based on the fused NDE results. As previously discussed, the in-depth assessment of a bridge using any type of NDE/SHM can be very expensive, time-consuming, and operationally limited. Therefore, it is highly desirable to select only the bridges that require an in-depth assessment. To do so, the data collected from the BEAST facility will be investigated to develop efficient criteria for deciding which bridges move from the Fast Survey to the In-depth Survey, as sketched below.
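As a minimal illustration of the two-stage triage logic described above, the following sketch maps a deck condition rating to the categories of Table 1 and decides whether an in-depth NDE survey is needed; the function names are hypothetical placeholders, not part of the PennDOT procedure:

```python
def assign_category(rating: int) -> str:
    """Map a PennDOT deck condition rating (0-9) to the categories in Table 1."""
    if rating >= 6:
        return "Category #3 (light deterioration, 'good')"
    if rating == 5:
        return "Category #2 (moderate deterioration, 'fair')"
    if rating in (3, 4):
        return "Category #1 (extensive deterioration, 'poor')"
    return "Structurally inadequate deck ('serious' or worse)"

def needs_in_depth_survey(rating: int) -> bool:
    """Stage 1 decision: only Category #3 decks skip the in-depth NDE survey."""
    return rating < 6

# Example: a deck rated 5 in the fast survey is flagged for NDE follow-up.
print(assign_category(5), needs_in_depth_survey(5))
```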
Figure 1. Proposed two-stage inspection framework for bridge decks Table 1. Rating Code for Concrete Bridge Deck Evaluation [2]
| Category Classification | Deck Condition Rating | Visible Spalls | Delamination | Electrical Potential | Deck Area | Chloride Content (#/CY) |
|---|---|---|---|---|---|---|
| Category #3: Light Deterioration | 9 | none | none | 0 | none | 0 |
| | 8 | none | none | 0.0<E.P.<0.35 | none | 0<C.C.<1 |
| | 7 | none | <2% | 0.35<E.P.<0.45 | ≤5% | 1<C.C.<2 |
| | 6 | <2% spalls, or sum of all deterioration and/or contaminated deck concrete (≥2 #/C.Y. Cl) covering <20% of deck area | | | | |
| Category #2: Moderate Deterioration | 5 | <5% spalls, or sum of all deterioration and/or contaminated deck concrete covering 20% to 40% of deck area | | | | |
| Category #1: Extensive Deterioration | 4 | >5% spalls, or sum of all deterioration and/or contaminated deck concrete covering 40% to 60% of deck area | | | | |
| | 3 | >5% spalls, or sum of all deterioration and/or contaminated deck concrete covering >60% of deck area | | | | |
| Structurally Inadequate Deck | 2 | Deck structural capacity grossly inadequate | | | | |
| | 1 | Deck has failed completely; repairable by replacement only | | | | |
| | 0 | Holes in deck; danger of other sections of deck failing | | | | |
Notes: Rating 9: no deck cracking exists. Rating 8: some minor deck cracking is evident.

This report consists of eight sections. In Section 2, an introduction to the BEAST facility and its database is provided. In Section 3, the development of the proposed inspection framework is described. In Section 4, the strategy for UAV-based imagery data collection is investigated. In Section 5, the dissemination, integration, and interpretation of BEAST data are provided. In Section 6, structural inspections of the BEAST are presented. In Section 7, a cost-benefit analysis of the proposed framework and UAV-based data collection is conducted. Finally, in Section 8, conclusions and future work are discussed.

2. Introduction of Bridge Evaluation and Accelerated Structural Testing (BEAST)

The BEAST is a full-scale bridge testing facility located in Piscataway, New Jersey (Figure 2). It was built and commissioned by the Center for Advanced Infrastructure and Transportation at Rutgers, The State University of New Jersey, and is the first facility nationwide capable of applying controlled and accelerated live load, environmental, and maintenance demands on full-scale bridge superstructures. The specimen is a multi-girder steel composite bridge (30 by 50 ft) with an 8-inch bare concrete deck and black rebar reinforcement. It is subjected to rapid-cycling environmental changes and extreme traffic loading to accelerate deterioration by as much as 30 times, simulating 15-20 years of wear-and-tear in just a few months. The deck is supported by four I-beams as the main girders, with one fixed and one open joint. Per the initial design, the specimen is to be exposed to over 8 million cycles of live loading (60 kips), 400 freeze-thaw and hot-dry cycles, and applications of deicing agents (6% brine solution) to simulate common winter maintenance practices. The primary target is to validate performance models by measuring stresses and deterioration caused by live, environmental, and maintenance loading in an extremely compressed time frame. To that end, the BEAST experiment utilizes accelerated testing of full-scale bridge deck and superstructure systems subjected to cyclic moving wheel loads and freeze-thaw environmental conditions. It is envisioned that BEAST experiments will complement both field observations and material-level tests and fill an important gap in our current understanding of bridge performance and deterioration.
During the accelerated aging process, vision-based data and NDE data have been collected periodically. For the vision-based data, hand-held HD cameras, stationary IR cameras, and drone-mounted HD and IR cameras are used to collect images. For the NDE data collection, electrical resistivity (ER), half-cell potential (HCP), ground penetrating radar (GPR), ultrasonic surface wave (USW), and impact echo (IE) are deployed.
Figure 2. Overview of the BEAST facility

3. Proposed Two-Stage Bridge Inspection Framework

3.1 Data-Driven Vision-Based Bridge Inspection

For the first stage of inspection, we propose to use vision-based fast scanning. In this stage, UAVs equipped with HD and IR cameras are deployed to collect data, and vision-based methods are applied to the collected HD and IR images to detect surface and subsurface defects.

3.1.1 Surface Defects Detection

In recent years, vision-based methods have gained increasing attention for detecting civil infrastructure damage [3] [4]. A number of studies have been conducted to detect superficial defects such as cracks and corrosion. For example, segmentation [5], filtering [6] [7], and stereovision-based methods [8] have been used to detect cracks and crack-like features in structural systems. Vision-based methods typically follow two steps to detect cracks [9]. In the first step, the image is filtered using a statistical filter, and crack features are locally extracted to fuse the image. The second step involves cleaning and linking the image segments to define the crack. Shadow removal algorithms have also been developed to remove shadows from the images and
pinpoint the crack [10] [11]. However, bridge inspection data are collected under various conditions and therefore vary extensively. Issues such as noise caused by lighting conditions and distortion, dependence on prior knowledge, and varying image quality still challenge reliable crack detection. A viable solution to cope with these issues is to deploy machine learning (ML) methods, which have been widely used in the SHM and NDE areas [12] [13] [14]. These approaches are generally used to interpret signal data collected from testing systems and, as a result, to provide useful information about the condition of structural systems. More recent efforts in this area have focused on integrating image feature extraction and machine learning techniques to develop novel SHM and NDE systems [15] [16]. However, using over-extracted or falsely extracted features often adds significant complexity to model development and undermines accuracy. To tackle these issues, deep learning-based methods are used in this study. Two types of surface damage detection models are developed using a convolutional neural network with a Long Short-Term Memory (LSTM) hybrid feature layer. One is a single-class crack detection classifier; the other is a multi-class classifier for crack and spall detection. Details regarding the developed classifiers are provided in the following sections.

3.1.1.1 Single-Class Classifier for Crack Detection on Smooth Concrete Surfaces

Figure 3 presents the framework of the proposed method for the real-time detection of concrete cracks on bridges. As shown in this figure, the first step is to create a database that includes thousands of images of cracked and non-cracked concrete bridge decks. The database is then divided into training, validation, and testing subsets. The training and validation datasets are passed to the preprocessing stage, where the images are transformed into the frequency domain. Arguably, the edge shapes of surface cracks correspond to high frequencies. Therefore, a high-pass filter (HPF) is applied to filter out the low frequencies corresponding to the background. After filtering, the image frequency matrices are flattened into frequency vector signals. These vectors are used to train the proposed 1D-CNN-LSTM algorithm. The developed method is applied to test images by sliding a window through the whole image; local windows containing cracks are kept in the output image.
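A minimal sketch of this frequency-domain preprocessing step follows, assuming a square cutoff mask for the high-pass filter; the cutoff value and mask shape are illustrative assumptions, not specifications from the report:

```python
import numpy as np

def preprocess_patch(patch: np.ndarray, cutoff: int = 8) -> np.ndarray:
    """Transform a 64x64 grayscale patch to the frequency domain,
    suppress low frequencies (background), and flatten to a 1D vector."""
    freq = np.fft.fftshift(np.fft.fft2(patch))    # center the zero frequency
    h, w = freq.shape
    cy, cx = h // 2, w // 2
    freq[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0  # high-pass filter
    return np.abs(freq).ravel()                   # 1D frequency signal for the CNN

patch = np.random.rand(64, 64)                    # stand-in for a cropped deck image
vector = preprocess_patch(patch)                  # shape: (4096,)
```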
Figure 3. The framework of the proposed method. FFT: fast Fourier transform

The 1D-CNN-LSTM models are developed using a database containing 4800 images of manually labeled cracked and non-cracked concrete bridge decks [17]. The database includes cracks as narrow as 0.06 mm and as wide as 25 mm. The images are 256 × 256 pixels; to improve detection accuracy, they are broken into sub-images of 64 × 64 pixels. Out of the 4800 available images, 4300 are cropped into 17,200 small images. Images that are blurry or that include corner cracks are eliminated, leaving 16,789 images as the dataset for this study. The remaining 500 bridge deck images are randomly stitched into 20 images of 1280 × 1280 pixels for testing the generalization capacity of the developed classifier. In many studies, images are processed in the spatial domain, that is, as they are, without further preprocessing; in the spatial domain, the values of the pixels change with respect to the scene, and image processing is based on the pixel values. An arguably more efficient approach is to transform the images into the frequency domain [18], which can significantly speed up CNN training. The optimal 1D-CNN-LSTM architecture is selected via an extensive trial-and-error approach and developed using TensorFlow modules [19]. The ratio of crack to non-crack images in the database is 1:2. The preprocessed database of 16,789 manually labeled images is randomly divided into training, validation, and testing sets with respective percentages of 70%, 15%, and 15%. Figure 4 shows the loss and accuracy of the training and validation procedure. The proposed method yields high accuracy on the training and validation data: the maximum training and validation accuracies are 99.05% and 98.9%, respectively, and the testing accuracy is 99.25%. The best model is saved to be applied to the unseen testing dataset. The simulations are conducted on a desktop computer with CPU: Intel® Xeon® CPU E5-1650 v4 @ 3.60 GHz, GPU: NVIDIA Quadro K420, and RAM: 31.9 GB. The total training time is 1 h 12 min 4 s.
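The report does not list the layer sizes of the architecture selected by trial and error; the following Keras sketch only illustrates the general 1D-CNN-LSTM pattern for binary crack classification, with all hyperparameters assumed:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_1dcnn_lstm(input_len: int = 4096) -> tf.keras.Model:
    """Illustrative 1D-CNN + LSTM binary classifier for flattened
    frequency vectors (crack vs. non-crack)."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(32, 5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.LSTM(64),                  # sequence/feature aggregation layer
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_1dcnn_lstm()
model.summary()
```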
An implementation code is developed in Python 3.7 for concrete bridge crack detection. Large-scale images are broken into small image groups with a size of 64 × 64 pixels, and each image group is transformed into the frequency domain. The developed model is then applied as a local window sliding through each image group to classify the small images as crack or non-crack. In order to obtain more continuous cracks, an overlapped local sliding window is applied; Figure 5 illustrates the sliding process. As shown in Figure 6, the detected cracks are more continuous after using the overlapped sliding window. Small images with cracks are kept as the output and restored to their original locations in the large-scale image; all other small images are eliminated. Figure 7 shows the crack-detection implementation framework. As discussed before, 500 images sized 256 × 256 pixels were stitched into 20 large images sized 1280 × 1280 pixels to test the generalization of the trained model. For each image, the output is generated in merely about 5-7 s. Figure 8 presents the crack-detection results for two of the tested images. The implementation accuracy is calculated as follows:

ACC = (TP + TN) / (TP + TN + FP + FN) × 100%  (1)

ER = 100% − ACC  (2)

where ACC and ER are the accuracy percentage and error rate, respectively, and TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives. Referring to Figure 8, the implementation accuracies are 98.5% and 97.75%; accordingly, the error rates are 1.5% and 2.25%.
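A sketch of the overlapped sliding-window implementation described above, assuming a 50% overlap (the actual stride used in the report is not stated); `classify` stands in for the trained 1D-CNN-LSTM:

```python
import numpy as np

def sliding_window_detect(image: np.ndarray, classify, win: int = 64,
                          stride: int = 32) -> np.ndarray:
    """Scan a large image with overlapping windows; keep windows
    classified as cracked, blank out the rest."""
    out = np.zeros_like(image)
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            window = image[y:y + win, x:x + win]
            if classify(window):                    # 1 = crack, 0 = non-crack
                out[y:y + win, x:x + win] = window  # restore to original location
    return out
```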
Figure 4. (a) Loss and (b) accuracy of the developed 1D-CNN-LSTM model
Figure 5. A schematic representation of the overlapped sliding window.
Figure 6. Detected cracks using the overlapped and non-overlapped windows.
Figure 7. The 1D-CNN-LSTM implementation process for concrete crack detection.
Figure 8. Crack-detection results for two of the tested images

One of the significant advantages of the proposed method over other studied CNN algorithms is its significantly faster training and implementation time. This is critical for real-time concrete crack detection, especially for autonomous bridge inspection using unmanned aerial vehicles. Details on the methodology can be found in our publication [20] and in Appendix I.

3.1.1.2 Multi-Class Classifier for Crack and Spall Detection on Rough Concrete Surfaces

The method proposed in the previous section has significant advantages in computational cost and accuracy. However, while it works well for smooth concrete surfaces, it is not suitable for rough or textured concrete surfaces, which are common on real-world bridge decks. In addition, spalling is another critical type of surface damage that needs to be evaluated. Thus, a multi-class classifier is needed for crack and spall detection on rough concrete surfaces. In this section, a hybrid vision-based multi-class crack and spall detector that uses a 2D convolutional neural network (CNN) combined with long short-term memory (LSTM) is presented.
To demonstrate the advantages of the proposed method for detecting defects on rough concrete surfaces, images from a rough concrete pavement located in Pittsburgh were collected for this study. 156 images of size 4600 × 3400 pixels were collected and broken into small images of size 256 × 256 pixels. The small images were manually classified into one of three classes: intact, crack, or spall. The model training results are highly dependent on image quality; thus, images were classified and screened according to the following rules: 1) only images with more than 50% spalling area are classified as the spall class; 2) images with corner cracks are eliminated; and 3) blurry images are eliminated. Figure 9 shows examples of the intact, crack, and spall classes. Finally, 7200 images were selected for use in this study, including 2400 intact images, 2400 crack images, and 2400 spall images.
Figure 9. Examples of intact, crack and spall images

The dataset is randomly separated into training (70%), validation (15%), and testing (15%) sets. Figure 10 shows the loss and accuracy over the training and validation procedure. The maximum training and validation accuracies are 99.7% and 98.64%, respectively. The best model is saved and applied to the unseen testing set; the testing accuracy is 97.5%.
Figure 10. Training and validation performance

To further quantify the crack length within each detected crack region, an algorithm has been developed; Figure 11 shows its flowchart. First, non-local means denoising [21] is applied to the crack region for initial denoising. Then binary thresholding is applied, with the optimal threshold values automatically selected based on Singh's study [22]. After binary thresholding, morphological erosion, dilation, and thinning are applied in turn to obtain the skeleton of the cracks. However, the skeleton is not continuous. Since the crack region is small, the cracks are assumed to be continuous, and a modification based on the nearest neighbor method [23] is used to conduct a one-directional nearest neighbor search from the top-left corner to the bottom-right corner. Each point is connected to its nearest neighbor to obtain the complete crack skeleton. Finally, the total pixel length of the skeleton is taken as the crack length.
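A sketch of this crack-length pipeline follows, with two substitutions hedged explicitly: Otsu's method stands in for the automatic threshold selection of [22], scikit-image skeletonization stands in for the erosion/dilation/thinning sequence, and a morphological closing approximates the nearest-neighbor gap-closing step. Input is assumed to be an 8-bit grayscale crack window with dark cracks on a lighter background:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def crack_length_pixels(crack_region: np.ndarray) -> int:
    """Estimate total crack length (in pixels) within a detected crack window."""
    denoised = cv2.fastNlMeansDenoising(crack_region, h=10)   # initial denoising [21]
    _, binary = cv2.threshold(denoised, 0, 255,               # dark cracks -> white mask
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,        # bridge small skeleton gaps
                              np.ones((5, 5), np.uint8))
    skeleton = skeletonize(closed > 0)                        # one-pixel-wide crack skeleton
    return int(skeleton.sum())                                # pixel count = pixel length
```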
Figure 11. Crack length quantification algorithm

As shown in Figure 12, the well-trained 2D CNN-LSTM model is applied to input images as a sliding window, classifying each window as intact, crack, or spall. The crack windows are then passed through the crack quantification algorithm to determine the total crack length in each window. Using a desktop computer with CPU: Intel® Xeon® CPU E5-1650 v4 @ 3.60 GHz, GPU: NVIDIA Quadro K420, and RAM: 31.9 GB, the crack and spall region detection speed is about 30 seconds per image of size 3840 × 5632 pixels. The crack length quantification takes about 50 seconds, which may vary depending on the number of crack regions. Figure 13 shows an example output using the proposed method. Detailed methodologies can be found in our publication [24] and in Appendix II.
Figure 12. The framework of the proposed method
(a) Input Image
(b) Output Image

Figure 13. Output example of the proposed method

3.1.2 Subsurface Defects Detection

Subsurface defects typically result from corrosion-induced deterioration of reinforcement, and their continued development can cause a loss of structural integrity [25]. Therefore, delamination is one of the most critical defects assessed during bridge deck inspections. Traditional delamination detection methods, such as hammer sounding and chain dragging, are labor-intensive, time-consuming, and require maintenance and protection of traffic (MPT). Infrared thermography (IR) has been developed to detect delamination, providing fast and effective inspections with reasonable accuracy [26] [27] [28]. Internal delamination acts like an insulating layer that causes different heating and cooling rates relative to the surrounding concrete, so thermal contrast in infrared (IR) images can be identified as delamination [29]. This section proposes semantic image segmentation using a deep learning architecture to detect the locations of subsurface defects and quantify the defected areas from IR images. The proposed method associates each pixel in an infrared image with a class label (defected or sound), so that defected areas are segmented from sound concrete areas and can thus be localized and measured. To achieve this, pixel-level separation is required. Semantic pixel-wise segmentation is a state-of-the-art technique that has been used in many studies [30] [31] for image segmentation. Since infrared images are relatively simple in content, semantic pixel-wise segmentation promises acceptable outcomes. The DeepLabV3+ network with an Xception backbone is selected for use in this study. In the following section, details of the proposed method are provided.
To test the proposed method, data collected by McLaughlin et al. [32] and data collected from the BEAST are used in this study. Unmanned ground and aerial vehicles with thermal cameras were used to collect 700 infrared images (512 × 640 pixels) from four reinforced concrete bridges. The dataset contains 361 images with subsurface defects and 339 without. Images are resized to 256 × 256 pixels to train the network, and a data augmentation technique is used to generate more data. Figure 14 shows examples of the images. Since the proposed method detects subsurface defects at the pixel level, each training image is labeled pixel by pixel as one of two classes, defected or sound, under expert guidance. A pixel-wise image labeling toolbox in MATLAB [33] was used to label the images, and the labeling process was supervised by experts as shown in Figure 15.
Figure 14. Top row: images with subsurface defects, bottom: images without subsurface defects
Figure 15. Pixel-wise Labeling for Data The network is developed in python 3.7 with TensorFlow modules [34]. The datasets are split into training (75%), validation (15%), and testing (15%) datasets. The results are evaluated by loss, accuracy and mean intersection over union (mIoU). IoU is a commonly used metric to evaluate segmentation accuracy and is defined by the following equation: TP
IoU = TP+FP+FN
(3)
Where TP represents true positives, FP and FN are false positives and false negatives. Loss, accuracy and mIoU are calculated and tracked for each epoch for training and validation. Figure 16 shows the training and validation metrics. As shown in Figure 16(d), the best accuracy and mIoU for training are 99.36% and 0.98. The best accuracy and mIoU for validation are 97.96% and 0.96, respectively. The best model is saved to be applied to the unseen testing dataset. The test accuracy and mIoU are 97.83% and 0.95. Figure 17 shows examples of detected subsurface defect areas in the image. Images in the top row are input IR images with defected areas. Images in the bottom row are the output from the proposed network, where white pixels are detected defected areas.
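A minimal sketch of how the IoU of equation (3) and the mean IoU over the two classes (defected and sound) can be computed from binary pixel masks; this is a generic computation, not the report's exact evaluation script:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Equation (3): IoU = TP / (TP + FP + FN) for boolean masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fp + fn)

def mean_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """mIoU averaged over the 'defected' and 'sound' classes."""
    return 0.5 * (iou(pred, truth) + iou(~pred, ~truth))
```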
Figure 16. (a) Loss value, (b) accuracy, (c) mean intersection over union, (d) best accuracy and mIoU achieved for training, validation, and testing
Figure 17. Top: input IR images; bottom: detected and quantified areas with subsurface defects (white pixels)

To further implement the developed model, a sliding window can be applied to large-scale IR images: the delaminated areas in each local window are detected, and the total delaminated area in the large image is calculated. As shown in Figure 19, small images are stitched together into a large image of size 1536 × 2560 pixels. The trained model is applied using a sliding window technique to scan through the large image, and the output image shows the detected and quantified delamination areas. Details of the methodology can be found in our publication [35] and in Appendix III.
Figure 19. Implementation of the Proposed Method
3.1.3 Software for Automated Vision-Based Bridge Inspection

For future practical use of the proposed methods, a software interface was created using the Tkinter GUI package [36] in Python 3.7. A screenshot of the developed software is shown in Figure 20a. The developed GUI has two sub-windows, one for surface defect detection (crack and spall) and another for subsurface defect detection (delamination). As seen in Figure 20b, the input section of the crack and spall evaluation window requires the directory of the well-trained deep learning model, the test images, and the desired output location. The break size is the size of the images used in the model training procedure. On clicking the run button, the test images are loaded one by one, and the model is applied as a sliding window to classify each local window. The output image, containing only the crack and spall local windows, and the crack length are saved to the output directory. Figure 20c presents the subsurface defect detection window, whose input section likewise requires the well-trained model, the directory of test images, and the output folder; the break size again denotes the sliding window size, namely the size of the training images. After clicking the run button, the test images are loaded into the software and the model is applied by sliding window. The delamination area in each local window is detected and quantified, and the output image with the detected delamination areas is automatically saved to the output directory.

This research is sponsored by the IRISE public/private research consortium, which is financially supported by PennDOT. One of the main parameters in the current PennDOT rating system for concrete bridge decks is the spalling density, and the crack density is also most useful for predicting the remaining useful life of bridge decks. Therefore, the region-based densities of the cracked and spalled areas, as well as the crack length density on the surface of the concrete bridge deck, are calculated using the following equations:

Crack Region Density (%) = (Number of local windows with crack / Total number of local windows) × 100  (4)

Spall Region Density (%) = (Number of local windows with spall / Total number of local windows) × 100  (5)

Crack Length Density (%) = (Total crack length / Total inspection area) × 100  (6)

The subsurface defect density is calculated as:

Subsurface Defects Density (%) = (Number of detected pixels / Total number of pixels) × 100  (7)
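Equations (4) through (7) reduce to simple ratio computations over the sliding-window labels and pixel masks produced by the classifiers above; a sketch, with illustrative variable names:

```python
def region_density(n_defect_windows: int, n_total_windows: int) -> float:
    """Equations (4) and (5): percentage of local windows with crack or spall."""
    return 100.0 * n_defect_windows / n_total_windows

def crack_length_density(total_crack_length_px: float,
                         inspection_area_px: float) -> float:
    """Equation (6): total crack length over total inspection area, in percent."""
    return 100.0 * total_crack_length_px / inspection_area_px

def subsurface_density(n_defect_pixels: int, n_total_pixels: int) -> float:
    """Equation (7): percentage of pixels flagged as delaminated."""
    return 100.0 * n_defect_pixels / n_total_pixels
```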
(a) Developed GUI homepage
(b) Surface defects detection window
(c) Subsurface defects detection window

Figure 20. The developed software for automated detection and quantification of cracks and spalls in concrete bridge decks

The developed GUI has been tested on the HD and IR images collected from the BEAST. Figure 21a is the drone-collected HD image of the whole BEAST bridge deck surface. This image was fed into the surface defect detection system described above; the output is shown in Figure 21b. The detected spall region density and crack region density are 16.83% and 0.26%, respectively. Figure 21c is the drone-collected IR image of the whole BEAST bridge deck surface. This image was fed into the subsurface defect detection system, and the output is shown in Figure 21d. The calculated subsurface defect density is 16.93%.
(a) HD image of BEAST bridge deck
(b) Detected surface defects of BEAST bridge deck by developed GUI
(c) IR image of BEAST bridge deck
(d) Detected subsurface defects of BEAST bridge deck by developed GUI Figure 21. Evaluate the developed GUI by using BEAST data
3.1.4 Discussion and Future Work of Automated Vision-based Inspection

In Section 3.1, the methodologies for the "Primary Scanning" stage were introduced, and the GUI for automated vision-based inspection was developed and tested on the BEAST. Here, we discuss the challenges and future plans for the primary scanning system. One of the challenges in the developed vision-based inspection is correlating image pixels to real-world dimensions and selecting the proper sliding window size. According to camera theory, the ratio between image pixels and true area must be obtained to convert pixel-based areas to engineering units. It is common to use a physical target as a marker and a calibration standard in vision-based methods: by measuring the distance between two preselected points on the target in both the image coordinates and the world coordinates, a conversion ratio between pixels and engineering units can easily be determined. However, it can be difficult to measure distances on real-world targets due to access issues. In this case, camera theory is utilized as an alternative approach by establishing the conversion ratio R from the distance Z between the camera and the measurement location, using the camera calibration. As shown in Figure 22, the pixel dimension d of an object can be calculated by equation (8):

d = h / p  (pixel)  (8)

where h is the object's image dimension in engineering units and p is the unit length of the camera sensor (millimeters per pixel), provided by the camera manufacturer. By triangle similarity, equation (9) holds:

h / D = f / Z  (9)

where f is the focal length of the camera, D is the real-world dimension of the object, and Z is the distance from the camera to the object. Substituting equation (8) into (9), the conversion ratio R is shown to be inversely proportional to Z:

R = d / D = f / (p × Z)  (pixel/mm)  (10)
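A sketch of equations (8) through (10), useful for choosing a sliding-window size at a given flight altitude; the example numbers below are illustrative, not the specifications of the cameras used in this project:

```python
def conversion_ratio(focal_mm: float, pixel_pitch_mm: float,
                     distance_mm: float) -> float:
    """Equation (10): R = f / (p * Z), in pixels per millimeter of real-world size."""
    return focal_mm / (pixel_pitch_mm * distance_mm)

# Example: f = 8 mm lens, 0.004 mm pixel pitch, camera 10 m from the deck.
R = conversion_ratio(8.0, 0.004, 10_000.0)   # 0.2 pixel/mm
window_mm = 64 / R                           # real-world span of a 64-pixel window
print(f"R = {R:.3f} pixel/mm; a 64-px window spans {window_mm:.0f} mm")
```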
Figure 22. Camera theory

As a result, the distance from the camera to the object is important for determining the conversion ratio R, and the sliding window size should be selected based on R. In the future, more research is needed to develop a working prototype that ties image collection to the corresponding sliding window sizes. In addition, the current models are trained with limited data; more data need to be collected from different bridges to train the models and ensure they generalize to all bridges.

3.2 Nondestructive Evaluation and Multi-sensor Data Fusion

Various NDE techniques have been used to assess bridge deck conditions, including impact echo (IE) [37], ground penetrating radar (GPR) [38], half-cell potential (HCP) [39], electrical resistivity (ER) [40], ultrasonic surface waves (USW), infrared thermography (IR) [41], digital images [42], etc. Most existing research focuses on evaluating bridge deck condition using individual NDE methods. However, some NDE techniques are sensitive to environmental factors such as humidity, temperature, and surface condition [43]; as a result, individual NDE inspection results may not be reliable. To address this concern, researchers have assessed bridge deck conditions using multiple NDE techniques, which produces massive amounts of data and makes multi-sensor data interpretation and fusion new challenges [44]. In this section, fusion of multi-source NDE data is discussed. First, the statistical analysis of individual NDE results is introduced; then a novel multi-source NDE data fusion method is described.

3.2.1 BEAST Data Collection

During the entire accelerated aging process of the BEAST, extensive high-resolution NDE data have been collected to investigate the paramount factors influencing bridge deck performance. Despite the obvious advantages of high-resolution NDE data, the variance of the collected data
remains a huge contributor to the variance of performance predictions from the BEAST. Therefore, the key challenge is to ensure the collected data are consistent, accurate, and reliable. The following NDE techniques were used to periodically collect data from the BEAST deck:

▪ Impact Echo (IE)
▪ Ultrasonic Surface Wave (USW)
▪ Ground Penetrating Radar (GPR)
▪ Electrical Resistivity (ER)
▪ Half Cell Potential (HCP)
▪ HD Surface Images
▪ Infrared Thermography (IR)
Table 2 summarizes the testing configuration and the NDE data collected from the BEAST over two years, from November 2019 until November 2021. During this period, the BEAST deck comprised Ordinary Portland Cement (OPC) Class A, normal-strength concrete (NSC) with a design f'c = 4000 psi, 8-inch thickness, 3/4-inch coarse aggregate, 6% entrained air, no pozzolan or superplasticizer, and a nominal 2-inch top cover. The deck reinforcement was uncoated black bar. The deck was built on top of four steel girders (fixed, open joint), each covered in a different type of coating, forming a composite section through two shear studs per cross-section. The live load on the deck was simulated using a two-axle frame rolling back and forth with a total weight of 60 kips.

Table 2. BEAST Testing and Data Collection Configuration

| Data Collection Date | Cumulative Live Load Cycles | Cumulative Freeze-thaw Cycles |
|---|---|---|
| 11/2019 | 185,000 | 8 |
| 01/2020 | 385,000 | 24 |
| 02/2020 | 572,000 | 35 |
| 06/2020 | 717,000 | 39 |
| 11/2020 | 914,000 | 48 |
| 12/2020 | 1,114,000 | 56 |
| 03/2021 | 1,323,270 | 70 |
| 04/2021 | 1,374,876 | 73 |
| 06/2021 | 1,671,506 | 85 |
| 07/2021 | 1,866,006 | 85 |
| 10/2021 | 2,000,000 | NA |

The table also records, for each collection date, the deck condition rating from visual inspection (ratings of 6, 5, and then 4 (unofficial) were assigned over the course of the program, with one entry pending) and whether NDE data (IE/USW/ER/GPR/HCP) and drone-based HD and IR images were collected on that date.
3.2.2 BEAST Data Processing

The NDE outputs needed to be processed to meet minimum quality standards. To accomplish this, each set of collected NDE data was statistically studied to investigate outliers and anomalies. A data assessment approach similar to that proposed by Babanajad and Jalinoos was adopted to prepare the data for further analysis [45] [46]. Frequency histograms were developed for each individual dataset, and the important statistical features of each set, including the minimum, maximum, mean, and standard deviation, were calculated. Depending on the NDE technique and the level of observed anomalies and outliers, some data were eliminated from the datasets; for example, any USW reading with an elastic modulus less than 500 ksi or more than 8000 ksi was removed. Similar thresholds were established for the other NDE datasets. Upon completion of data cleaning, all of the data within the fitted curve were retrieved, and the points within the set boundaries were selected as the final dataset for plotting. The data were then imported into the DPlot software to develop color contour maps. The color scale for all NDE tests followed a single combination of colors, ranging from the minimum to the maximum of a given dataset. Furthermore, the defect maps developed by Babanajad and Jalinoos were adopted to create defect contour maps; as they discuss, the primary intention of such maps is to inform decisions on preservation, maintenance, or major rehabilitation. To generate the defect maps, the last quartile of each dataset was flagged as defect (unhealthy), and the remainder was considered healthy (see the sketch below). Figure 23 shows the defect maps for the ER dataset collected in November 2019. Based on the above statistical analysis, 10 rounds of NDE data collected at various life stages of the BEAST have been processed, and defect maps have been plotted.
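A sketch of the statistical screening and quartile-based defect flagging described above, assuming the data arrive as a flat array of point measurements; the USW modulus bounds are the ones quoted in the text, and everything else is illustrative:

```python
import numpy as np

def clean_and_flag(values: np.ndarray, lo: float = 500.0, hi: float = 8000.0):
    """Drop out-of-range readings (e.g., USW moduli outside 500-8000 ksi),
    then flag the last quartile of the remaining data as 'defect'."""
    kept = values[(values >= lo) & (values <= hi)]
    q3 = np.percentile(kept, 75)             # last-quartile boundary
    defect_mask = kept >= q3                 # True = flagged as defect (unhealthy)
    return kept, defect_mask

readings = np.random.uniform(0, 9000, 500)   # stand-in for a USW dataset
kept, flags = clean_and_flag(readings)
print(f"{flags.mean():.0%} of retained points flagged as defect")
```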
Figure 23. Colored contour map and damage map for electrical resistivity (ER) (based on the technique proposed by Babanajad and Jalinoos)

3.2.3 Multi-sensor Data Fusion for NDE Results

NDE techniques are sensitive to environmental factors such as humidity, temperature, and surface condition [43]; as a result, individual NDE inspection results may not be reliable. To address this concern, researchers have investigated bridge deck condition assessment using multiple NDE techniques, since observations from multiple techniques provide a better estimate than a
single technique [47]. However, using multiple NDE techniques produces massive amounts of data, making multi-sensor data interpretation and fusion new challenges [44]. In this study, a multi-sensor NDE data fusion method based on the discrete wavelet transform (DWT) and Dempster-Shafer (D-S) evidence theory is proposed for bridge deck condition assessment. In the proposed method, the deterioration maps from various NDE techniques are decomposed using the DWT, the extracted features are fused by applying different fusion rules, and the fused features are reconstructed into new fused deterioration maps. The fused deterioration maps generated by different fusion rules differ from one another; each could be regarded as a candidate for the ideal result, but there are both similarities and conflicts among them, so it would be hard to trust any individual fused map. D-S evidence theory has been widely used to achieve more reasonable results from systems with uncertainty or conflicts [48] [49] [50] [51] [52], and it is therefore applied here to further fuse the DWT-fused deterioration maps. To evaluate the proposed method, NDE data collected from the BEAST are utilized; the techniques used are HCP, ER, GPR, and USW. The color scales in the NDE condition maps represent different probability levels of defects. However, the condition maps of the same bridge deck differ across NDE techniques, with both similarities and conflicts among them. As a result, fusion of the multi-source NDE condition maps is needed to provide more comprehensive results than the individual maps. In this study, the four condition maps from different NDE techniques shown in Figure 24 (ER, HCP, GPR, and USW) are fused. First, the discrete wavelet transform is applied to decompose each map into sub-images. Then the sub-images from each map are fused using fusion rules. Since a colored image consists of three channels (RGB), the wavelet decomposition and fusion are conducted in each channel separately. Four color series are used in the original condition maps: blue, green, yellow, and red, with color codes [0,0,1], [0,1,0], [1,1,0], and [1,0,0], respectively. According to a previous study, different fusion rules can result in very different fused maps [53]. During the wavelet fusion process, the fusion rule combinations listed in Table 3 are applied to the approximation sub-image I_LL(x, y) and the detail sub-images I_LH(x, y), I_HL(x, y), and I_HH(x, y). The maximum operation keeps the maximum value at each pixel among the sub-images from the different NDE condition maps; similarly, the minimum and mean operations keep the minimum and average values at each pixel, respectively. The "MaxMax" fusion rule keeps the maximum values in all channels, so its upper limit is the color code [1,1,1] (white); the "MinMin" rule keeps the minimum values in all channels, so its lower limit is [0,0,0] (black). All other fusion rules result in values between "MaxMax" and "MinMin". As a result, these nine rule combinations cover the full range of possible fusion results. The fused maps have similarities and conflicts among them, so it is hard to determine the reliability of each fused map.
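A sketch of one DWT fusion pass for a single RGB channel using PyWavelets, with the "MaxMin" rule as an example (maximum on the approximation sub-image, minimum on the detail sub-images); the wavelet family is an assumption, and fusing a full map repeats this per channel:

```python
import numpy as np
import pywt

def dwt_fuse_channel(a: np.ndarray, b: np.ndarray,
                     wavelet: str = "db2") -> np.ndarray:
    """Fuse one color channel of two condition maps: decompose with the DWT,
    apply a fusion rule per sub-image, and reconstruct."""
    aLL, (aLH, aHL, aHH) = pywt.dwt2(a, wavelet)
    bLL, (bLH, bHL, bHH) = pywt.dwt2(b, wavelet)
    fused = (np.maximum(aLL, bLL),            # 'Max' on approximation sub-image
             (np.minimum(aLH, bLH),           # 'Min' on detail sub-images
              np.minimum(aHL, bHL),
              np.minimum(aHH, bHH)))
    return pywt.idwt2(fused, wavelet)
```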
To address this issue, D-S evidence theory is applied in this study. The nine fused condition maps are regarded as candidates, and D-S evidence theory is applied to them to obtain the final condition map. The final step is to detect the defective parts from the fused condition map. Red indicates a high probability of defects, which is critical for bridge deck condition evaluation; thus, the red parts are segmented from the fused condition map and regarded as the potential defective areas. Figure 25 shows the workflow of the proposed method.
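For reference, a compact sketch of the classical Dempster rule of combination is given below, with mass functions represented as dictionaries keyed by frozensets of hypotheses; the two-hypothesis frame {defect, sound} is an assumed example, and the improved combination rule actually used in this study (see [54] and Appendix IV) modifies how highly conflicting evidence is redistributed.

from functools import reduce
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:                         # consistent evidence
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:                             # conflicting evidence
            conflict += mb * mc
    # Normalize by the non-conflicting mass (1 - K)
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Example: evidence about one pixel from two fused condition maps
D, S = frozenset({"defect"}), frozenset({"sound"})
m1 = {D: 0.6, S: 0.1, D | S: 0.3}          # D | S encodes "uncertain"
m2 = {D: 0.5, S: 0.2, D | S: 0.3}
final = reduce(dempster_combine, [m1, m2])  # extendable to all nine maps
print(final)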
Figure 24. Condition Contour Maps for Individual NDE Techniques
Figure 25. Proposed Framework for Multisource NDE Condition Map Fusion
Table 3. Fusion Rules for Wavelet Fusion
To evaluate the proposed method, the four types of NDE data shown in the previous section were tested. Figure 26 shows the fused condition maps produced by wavelet fusion with the various fusion rules. As shown, the nine fused condition maps are significantly different. Roughly, all maps indicate that the center of the bridge deck has a higher probability of defects than other parts, but the details of the defect distributions and levels differ. Figure 27 shows the final fused condition map obtained after applying D-S theory to the condition maps in Figure 26. As shown in the results, the parts with high probabilities of defects (red) are clearly distinguished from other parts. Figure 28 shows the segmented parts with high probabilities of defects; the calculated percentage of potentially defective area is 14.4%. The detailed methodology can be found in our publication [54] or Appendix IV.
Figure 26. Wavelet Fusion for Condition Maps with Various Rules
Figure 27. Final Fused Condition Map
Figure 28. Detected Potential Defective Parts

3.3 Comparison Between Vision-Based and NDE-Based Inspection Results
For further verification of the vision-based inspection, the collected IR and HD images were compared with the NDE results. In Figure 29, the IR image is compared with the HCP results, and the HD image is compared with the GPR-detected concrete cover thickness maps. As seen, the UAV results match the NDE results well. In addition, as shown in Figure 30, the developed automated damage detection methods can detect the damaged areas with acceptable accuracy.
Figure 29. A comparison of the results obtained using the UAV-mounted IR and HD cameras and the HCP and GPR techniques
Figure 30. Results showing the automated deep learning-based detection and quantification of surface defects and subsurface defects on the BEAST specimen.
3.4 3D Visualization of Bridge Inspection Results
To provide better visualization of bridge inspection results, we developed a 3D model of the BEAST to assist in the inspection process. In this study, aerial images were collected from the BEAST using a drone mounted with a high-resolution HD camera, and 3D reconstruction by photogrammetry was used to develop a 3D meshed model of the BEAST. Figure 31 shows the developed 3D model, and Figure 32 shows the IR condition map overlaid on the 3D model. Details of the methodology can be found in Appendix V.
Figure 31. Completed Digital Surface Model of the BEAST Facility
Figure 32. 3D Model with IR images
4. UAV Data Collection Strategy
In recent years, unmanned aerial vehicles (UAVs) have been deployed to perform automated inspections in various engineering areas [55] [56]. Several researchers have investigated the feasibility of mounting sensors on UAVs to conduct bridge inspections [57]. However, the accuracy of UAV-mounted sensors is affected by several factors, such as the ambient environment and surface conditions. In particular, the data collection speed, data collection time, and camera specifications can significantly affect the accuracy of infrared thermography (IRT) systems [58]. Another crucial aspect is to benchmark the performance of UAVs against standard NDE methods during the lifespan of bridges. To address these challenges, we study the performance of UAV-mounted IRT and HD cameras for bridge defect detection.

4.1 Data Collection Plan
During the accelerated aging process, UAV-mounted HD and IR cameras were used to collect images from the BEAST. Table 4 shows the specifications of the dispatched UAV and cameras, and Figure 33 shows the UAV setup for aerial data collection. Since the quality of IR images can be significantly affected by several factors, a UAV data collection plan with various scenarios was developed (see Table 5); the detailed UAV data collection plans are attached in Appendix VI. In this study, we present, compare, and discuss part of the data collected on April 28th (early summer) and July 27th (late summer), 2021. During the testing, the IR and HD cameras were mounted on the UAV platform as shown in Figure 33. Data collection was conducted according to the plan listed in Table 5; as seen, data were collected in three rounds from morning to evening. According to Washer et al. [59], the necessary temperature change for IRT under passive conditions is at least 8.2°C. The temperature change recorded at the climatological substation closest to the BEAST facility was 11.1°C on April 28th, 2021 and 14.2°C on July 27th, 2021.
Table 4. Specifications of the dispatched UAV and mounted cameras
Drone Platform: DJI Matrice 600 Pro
Camera | Resolution | Focal Length (mm) | FOV | Frequency (Hz)
FLIR Vue Pro R | 640 pixel | 19 | 45° | 30
Zenmuse X5 w/ Olympus Digital 45mm F1.8 lens | 16 Megapixel | 45 | 72° | -
Table 5. Details of the proposed data collection plan
IR images:
Collection | Time | Distance from Deck (feet)* | Camera angles | Overlap | Deck condition
Collection #1 | Morning (10-12) | 30/40/50/60/70/80 | Vertical / Oblique | 75% | Dry / Wet (if possible)
Collection #2 | Afternoon (3-5 pm) | 30/40/50/60/70/80 | Vertical / Oblique | 75% | Dry / Wet (if possible)
Collection #3 | Evening | 30/40/50/60/70/80 | Vertical / Oblique | 75% | Dry / Wet (if possible)
HD images: Same settings as the IR images (only before sunset)
Figure 33. UAV setup for aerial data collection

4.2 Results and Discussion
The efficiency of IRT in detecting delamination depends on solar radiation heating the concrete surface. Delaminated parts within a concrete structure are filled with air, which acts as a thermal insulator (thermal conductivity of air: 0.0241 W/m/°C) and prevents heat from penetrating into the concrete (thermal conductivity: 1.6 W/m/°C) beneath the delaminated area. Consequently, the concrete surface above a delaminated area becomes warmer than the surrounding sound areas during the daytime, and this thermal contrast can be used to identify delaminated sections. However, it is not always possible to detect concrete delamination from the color variation of the raw IR imagery alone, mainly because some images contain other objects that change the temperature range of the image. Thus, in this study, the collected IR images were color normalized based on the temperature range of the bridge deck. We discuss the effects of data collection time, UAV flying altitude, and data capturing angle on delamination detectability.

First, the data collected on both days are discussed. Figure 33 shows the IR images collected from the bridge deck at different times using the same flight altitude and angles. As seen, the thermal contrast is low in the morning and becomes significant in the late afternoon (after 4 pm). This reveals that bridge decks need time to heat up; as one would expect, the afternoon is a better time to collect IR images during the summer. Figure 34 shows the IR images collected from different altitudes at the same time and angle. As shown, the thermal contrast does not change significantly with flight altitude within certain ranges. There are major benefits to flying a drone at a high altitude without losing the ability to detect damage: for instance, if detectability does not change with flight altitude, the drone can be deployed to capture data without distracting drivers or blocking traffic. Figure 35 shows that the thermal contrast in images collected from oblique angles decreases significantly compared to images collected from vertical angles. As a result, it was found that dispatching the UAV in the late afternoon, with vertical angles and an adjustable altitude, results in more reliable IR data collection. Details can be found in our publication [60].
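As a minimal sketch of the color normalization step described above (assuming a radiometric camera output of per-pixel temperatures and a precomputed deck mask, both of which are illustrative assumptions), each frame can be rescaled to the deck's own temperature range so that foreign objects do not distort the color scale:

import numpy as np

def normalize_ir_frame(temps, deck_mask):
    """Rescale an IR frame to the temperature range of the deck pixels only.

    temps: 2D array of per-pixel temperatures (deg C).
    deck_mask: boolean 2D array marking bridge-deck pixels.
    Returns an 8-bit image suitable for consistent color mapping across frames.
    """
    t_min = temps[deck_mask].min()
    t_max = temps[deck_mask].max()
    scaled = np.clip((temps - t_min) / (t_max - t_min), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)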
(a) April 28th, 2021
(b) July 27th, 2021
Figure 33. Normalized IR images for the entire bridge deck captured 70 ft above the deck with a vertical angle
(a) 4:00 pm on April 28th, 2021
(b) 5:30 pm on July 27th, 2021
Figure 34. Normalized IR images captured from different altitudes with vertical angle to the deck
(a) 45 feet altitude at 4:00 pm on April 28th, 2021
(b) 80 feet altitude at 5:30 pm on July 27th, 2021
Figure 35. Normalized IR images captured from different angles

5. Dissemination, Integration, and Interpretation of BEAST Data
One of the primary objectives of the BEAST experiment was to accelerate bridge component deterioration by as much as 30 times in order to simulate 15-20 years of wear-and-tear in just a few months. At the BEAST, this was successfully accomplished by applying controlled and accelerated live-load, environmental, and maintenance demands to a full-scale bridge superstructure. Several steps are required to capture appropriate information from the BEAST experiment and characterize the long-term performance of bridge components. To achieve this objective, the following sections outline the step-by-step analysis of the BEAST data. In the first step, the time scaling of the BEAST accelerated experiment is compared to deterioration at a real-world pace. In the second step, the average performance of a typical bridge (similar to the BEAST) in Pennsylvania is calculated based on the analysis of historical data available at InfoBridge [61]; the performance of this bridge (as the representative of PA bridges) is then refined based on the visual inspection of the BEAST specimen. In the third step, performance indexes are developed to define bridge component performance based on the variety of NDE techniques. These indexes are then compared against the representative performance of the bridge, and thresholds for each NDE technique are derived accordingly. Ultimately, the NDE data are fused using the proposed fusion methodology and compared against the performance of the bridge. Final recommendations are made at the conclusion of this section based on the results observed from the analysis of the NDE data.

5.1 Timing Scale of the BEAST Experiment
To better understand the performance of bridge components over long time scales, it is essential to find metrics that relate the timing at the BEAST facility to the timing at which a typical bridge deteriorates in the real world. Multiple factors can be used as the scaling system to normalize the accelerated timing of the BEAST against real-world timing. Among all factors, truck traffic and environmental characteristics are likely the most important and influential external parameters to consider in this scaling. Other factors, such as the internal characteristics of bridge components (including the speed of the chemical reactions required for rebar corrosion or chloride ion penetration), could also be used as the scaling system. Due to the complexity of such scaling and the extensive time and effort required, this project concentrated only on scaling based on external parameters. At the same time, it is worth mentioning that external parameters, such as live and environmental loading, are the most influential factors in bridge deterioration.

Truck Traffic Live-Load – In comparison with regular car traffic, truck traffic induces the most damage to concrete decks. Therefore, the average daily truck traffic (ADTT) is scaled up in the BEAST experiment based on the level of traffic that a bridge is subjected to in the real world. To that end, the ADTT information for all bridges in the state of Pennsylvania was derived from InfoBridge. Figure 36 shows the frequency histogram of ADTT recorded in 2020 for Pennsylvania bridges. Given the wide distribution of recorded ADTT, the 50th percentile (median) of the recorded ADTT (equal to 137) is taken as the representative level of truck traffic that a regular highway bridge would be subjected to during most days of the year. As shown in Table 2, the BEAST facility has already been subjected to over 2 million cycles of live-load traffic. Therefore, it is possible to establish a correlation between BEAST timing and real-world timing based on the live load. Table 6 proposes such a scaling by comparing the live-load demand on the BEAST with that of a representative bridge with an ADTT of 137. Figure 37 plots the conversion of BEAST live-load cycles into the actual years to which a bridge with an ADTT of 137 would typically be exposed.
Figure 36. Frequency histogram of ADTT for bridges in Pennsylvania

Table 6. Correlation between applied traffic cycles on BEAST and actual years
BEAST Traffic Cycle* | Actual Year**
185000 | 3.7***
385000 | 7.7
572000 | 11.4
717000 | 14.3
914000 | 18.3
1114000 | 22.3
1323270 | 26.5
1374876 | 27.5
1671506 | 33.4
1866006 | 37.3
2000000 | 40.0
* Reflects the times at which the BEAST was stopped and NDE data were collected
** Number of years calculated based on ADTT = 137
*** Example: 185000/(365 × 137) = 3.7 yrs
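The scaling in Table 6 is straightforward arithmetic; the short sketch below reproduces it for any cycle count.

ADTT = 137  # median ADTT of PA bridges (Figure 36)

def cycles_to_years(cycles, adtt=ADTT):
    """Convert BEAST live-load cycles to equivalent in-service years."""
    return cycles / (365 * adtt)

print(round(cycles_to_years(185000), 1))    # 3.7, as in Table 6
print(round(cycles_to_years(2000000), 1))   # 40.0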
Figure 37. Conversion of BEAST live-load cycles into actual years
Environmental Loading – Similar to the live load, the numbers of freeze-thaw cycles were derived from InfoBridge for the bridges in Pennsylvania. Reviewing Table 2 and comparing it to the annual freeze-thaw cycles shown in Figure 38, the limited number of freeze-thaw cycles to which the BEAST was exposed (85 cycles) did not correspond to real-world conditions.
Figure 38. Frequency histogram of Annual Freeze-thaw Cycles for bridges in Pennsylvania
5.2 Development of a Performance Measure for a Representative PA Bridge
Accurately predicting the future performance of bridges is key to effective maintenance and rehabilitation decision making. One way to predict the future performance of bridges and determine their remaining service life is to utilize deterioration models. In essence, relatively few indicators (such as the NBI Condition Rating and Elemental Condition States) have been adopted to determine the performance of bridge components. To that end, the average performance of a typical bridge (similar to the BEAST) in Pennsylvania is calculated based on the analysis of historical data available on the InfoBridge website (see Figure 39). To derive information for this representative bridge, the historical NBI deck condition ratings were analyzed for steel multi-girder bridges in PA (located in the Mid-Atlantic cluster) to mimic the structural characteristics of the BEAST specimen. As shown in Table 7, the average time-in-state (TIS) for each NBI condition rating (9 to 3) was estimated separately after proper data cleaning. Two curves, one using the mean TIS and one using the upper bound of TIS (mean + standard deviation) for each condition rating, were developed to define the region within which the bridge performance will most probably progress in the coming years. Figure 40 presents the performance of the deck component based on the deck condition rating. The two light green curves (one thick solid and one dashed) refer to the mean and upper predicted performance curves based on the historical TIS data collected from PA bridges. The figure also shows the timing of the BEAST data collection periods in terms of bridge age. It has two vertical axes: the left axis denotes the Deck Condition Rating, and the right axis is associated with the live-load traffic (ADTT). The dashed vertical lines refer to the scaled timing of the data collection periods from the BEAST, and the skewed blue line depicts the increase of live load over the data collection periods.
Figure 39. Distribution of multi-girder Steel Bridges in PA (color-coded based on deck condition)
Table 7. Average time-in-state (TIS) for each NBI condition rating
Condition Rating | 9 | 8 | 7 | 6 | 5 | 4 | Total Life
TIS – Average | 3.4 | 7.7 | 10.7 | 8.0 | 6.0 | 4.4 | –
TIS – Standard Deviation | 2.0 | 6.4 | 7.1 | 5.8 | 4.3 | 3.1 | –
Performance Curve (TIS) – Mean | 3.4 | 7.7 | 10.7 | 8.0 | 6.0 | 4.4 | 40
Performance Curve (TIS) – Upper Boundary | 5.4 | 14.0 | 17.8 | 13.8 | 10.3 | 7.5 | 69
Figure 40. Derivation of the refined curve for the representative bridge based on BEAST visual inspection
Given that the space between the mean and upper performance curves is relatively large, there is potential for very high uncertainty in the reliability and accuracy of remaining-service-life predictions for the BEAST experiment. Therefore, the condition ratings acquired from the visual inspection reports for the BEAST specimen were later used to refine the performance curve for a more realistic assessment. It should be noted that the skewed blue line, which relates the ADTT to the scaled time of data collection from the BEAST, was used to determine the scaled timing of the visual inspection periods. The three datapoints shown with black (+) signs depict the scaled deck condition ratings derived from the two rounds of visual inspection of the BEAST specimen. Reviewing Figure 40 reveals that the BEAST specimen has outperformed the representative PA bridge (with similar structural characteristics) while remaining below the upper boundary of the performance curve. To some degree, this observation supports the reliability of the performance assessment and scaling of the BEAST experiment. The final performance curve (solid dark green line) was subsequently refined by passing through the three datapoints acquired from the visual inspections.
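To make the construction of Figure 40 reproducible, the following sketch builds the mean and upper step curves directly from the TIS values in Table 7 (a minimal interpretation in which each rating is held constant for its TIS duration and then drops one level):

ratings   = [9, 8, 7, 6, 5, 4]
tis_mean  = [3.4, 7.7, 10.7, 8.0, 6.0, 4.4]     # Table 7, mean TIS
tis_upper = [5.4, 14.0, 17.8, 13.8, 10.3, 7.5]  # mean + standard deviation

def step_curve(tis_values):
    """Return (age, rating) breakpoints of a deterioration step curve."""
    age, points = 0.0, []
    for rating, tis in zip(ratings, tis_values):
        points.append((age, rating))
        age += tis
        points.append((age, rating))   # rating held until its TIS elapses
    return points

mean_curve  = step_curve(tis_mean)    # reaches rating 4 at age ~40 (total life)
upper_curve = step_curve(tis_upper)   # reaches rating 4 at age ~69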
5.3 Development of NDE Performance Indicators
As discussed earlier, relatively few indicators (such as the NBI Condition Rating and NBE Condition States) have been adopted to determine the performance of bridge components. Given that these indicators are mostly defined by visual inspection, data-driven models that estimate the remaining service life of the components from them often carry certain levels of error. For example, corrosion or delamination of a concrete bridge deck is only distinguishable once the deck surface spalls and shows physical changes; this subsurface issue cannot be seen by regular visual inspection. Therefore, the service life of the deck estimated solely from visual inspection data will be inaccurate. During the past two decades, several NDE techniques have been established that aim to characterize local material properties and/or identify material-level forms of deterioration or damage in bridge decks. NDE provides the ability to detect and characterize deterioration not visible from the surface (i.e., deterioration that would likely be missed through common visual inspection approaches). To quantitatively capture the results of NDE tests, Gucunski et al. [62] defined five different condition indexes for IE, ER, GPR, USW, and HCP on a scale of 0 (worst) to 100 (best). In essence, the established conversion builds a unified and scaleless determination of deck condition for each of the NDE test types. In the current project, the NDE data collected through several rounds of data collection are analyzed. That being said, the datapoints from the first few rounds of data collection are not fully included in defining the trends of certain behaviors (indicated by less highlighted color), given that the bridge specimen might not have been fully cured and the levels of brine solution and moisture content were not properly controlled in the initial phase of specimen testing. The following subsections document the relationships used to determine the condition indexes.

5.3.1 Impact Echo (IE)
IE is the primary NDE technique used to detect and characterize deck delamination. The test is conducted using an impact source and a single, nearby receiver. Depending on the signal spectrum and energy perturbation, the condition is described as Good, Fair, Poor, or Serious [62]. The condition index with respect to delamination is calculated according to:

IE delamination index = (A_Good × 100 + A_Fair × 50 + A_Poor × 50 + A_Serious × 0) / A_Total    (11)

where A_Good, A_Fair, A_Poor, and A_Serious are the areas in good, fair, poor, and serious condition, and A_Total is the total surveyed area. Figure 41 demonstrates the IE delamination index calculated from the periodic data collected from the BEAST and scales it against the performance curve developed for the representative PA bridge in the actual time domain. Figure 41 has two vertical axes: the left axis denotes the NBI condition rating (green line), and the right axis denotes the condition index calculated from the periodic IE datasets collected from the BEAST and scaled to the actual time domain experienced by an in-service highway bridge (blue datapoints). The trend of the blue datapoints confirms that the condition of the deck decreases with the age of the BEAST specimen.
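All five condition indexes (Eqs. 11-15) share the same weighted-area form, so a single helper suffices; the surveyed areas below are hypothetical values used only to illustrate the computation.

def condition_index(areas, weights):
    """Weighted-area NDE condition index on a 0 (worst) to 100 (best) scale.

    areas: dict mapping condition class -> surveyed area.
    weights: dict mapping condition class -> weight (per Eqs. 11-15).
    """
    total = sum(areas.values())
    return sum(areas[c] * weights[c] for c in areas) / total

# IE delamination index (Eq. 11) with hypothetical areas in sqft
ie_weights = {"good": 100, "fair": 50, "poor": 50, "serious": 0}
areas = {"good": 6000, "fair": 2000, "poor": 1500, "serious": 500}
print(condition_index(areas, ie_weights))   # 77.5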
Figure 41. Condition index calculated based on IE delamination index
In addition, IE results are presented as a map of dominant peak frequency, a metric that describes the position of reflectors (internal defects, etc.). In this approach, the time-history response (Figure 42a) for each testing point is first passed through a fast Fourier transform (FFT) analysis, followed by a band-pass filter (2-20 kHz) to remove noise from the response (Figure 42b). The frequency corresponding to the highest amplitude is then selected as the dominant peak frequency. Depending on the condition of the deck, the dominant peak frequency can correspond to the bottom of the deck (i.e., no delamination) or be indicative of delaminated zones. In this project, no condition index was defined for the IE dominant peak frequency response.
(a)
(b)
Figure 42. Schematic view of an IE test: a) time-history spectrum, b) frequency response
As discussed previously, defect maps can objectively assess the condition of bridge decks and identify the main causes of deterioration. Therefore, it is vital to develop approaches capable of quantifying the condition of the bridge deck in terms of condition maps and indices. To that end, the last quartile of each dataset (A_Serious) is calculated and reported as the defect index. Figures 43 and 44 plot the estimated values of the defect index using the IE delamination index and the IE dominant peak frequency, respectively. It is clear from both figures that the defect index increases with the age of the specimen, indicating that live load and freeze-thaw cycles cause progressively more physical deck damage.
Figure 43. Defect index calculated based on IE delamination index
Figure 44. Defect index calculated based on IE dominant peak frequency
5.3.2 Ground-Penetrating Radar (GPR)
GPR technology has been implemented to assess concrete quality. GPR can provide a qualitative assessment of concrete condition and indicate potential concrete deterioration, delamination, and a corrosive environment [63]. The GPR assessment is based on measuring the attenuation of electromagnetic waves as they propagate through the concrete layers. The condition index from the GPR surveys is calculated according to:

GPR index = (A_Good × 100 + A_Fair × 70 + A_Poor × 40 + A_Serious × 0) / A_Total    (12)

where A_Good, A_Fair, A_Poor, and A_Serious are the areas with GPR signal attenuation (normalized dB) in the ranges of > -15, -15 to -17, -17 to -20, and < -20, respectively [64], and A_Total is the total surveyed area. Figure 45 plots the variation of the condition index over the different rounds of GPR data collection from the BEAST and compares it against the performance curve of the representative bridge. While the condition index decreases with the age of the bridge, it should be noted that the variation is not significant, changing only between 99% and 100%. In addition, Figure 46 plots the estimated defect index, discussed thoroughly in the previous section. The defect index varies between roughly 16% and 21%, and the trend in the data does not support an increase of the defect index with deck age. This outcome was expected given that the level of moisture in the deck was not controlled due to the exposed brine solution; the GPR results might therefore exhibit significant variability.
Figure 45. Condition index calculated based on GPR depth-corrected amplitude
Figure 46. Defect index calculated based on GPR depth-corrected amplitude
Alternatively, GPR technology is able to detect the location of the rebar, yielding the cover depth and its variation along the bridge length. Such measurements were also made during the GPR data collection from the BEAST. Figure 47 plots the estimated defect index for the GPR cover measurements. The results indicate that the increase in the defect index is well correlated with the age of the deck. Figure 48 additionally plots the variation of the average cover depth based on the measurements conducted in the different rounds of data collection. Except for the first few datapoints (which may carry some uncertainty due to the level of moisture in the specimen), the results indicate a slight decrease in the estimated cover depth. This phenomenon could be associated with the loss of cover due to deterioration induced by live load. No condition index was defined for the GPR cover maps.
Figure 47. Defect index calculated based on GPR cover depth measurements
Figure 48. Average GPR cover depth collected from multiple rounds of data collection periods

5.3.3 Electrical Resistivity (ER)
ER technology has long been used to assess the corrosive environment inside concrete. Low concrete resistivity is often a consequence of high moisture and chloride concentration. High electrical resistivity allows only a low corrosion current to pass between the anodic and cathodic areas of the reinforcing steel, and thus does not facilitate corrosive processes. To quantify the overall deck condition with respect to the corrosive environment and anticipated corrosion rate, the following expression has been introduced:

ER index = (A_VeryLow × 100 + A_Low × 75 + A_Moderate × 50 + A_High × 25 + A_VeryHigh × 0) / A_Total    (13)

where A_VeryHigh, A_High, A_Moderate, A_Low, and A_VeryLow are the areas with ER in the ranges of < 5 kΩ·cm, 5-10 kΩ·cm, 10-20 kΩ·cm, 20-30 kΩ·cm, and > 30 kΩ·cm, respectively, and A_Total is the total surveyed area. Figure 49 plots the variation of the condition index over the different rounds of ER data collection from the BEAST and compares it against the performance curve of the representative bridge. The condition index varies between roughly 0% and 30%, and the trend in the data does not support the expected decrease of the condition index with deck age. As with the GPR amplitude, this outcome was expected given that the level of moisture in the deck was not controlled due to the exposed brine solution; the ER results might therefore exhibit significant variability. Figure 50 subsequently plots the defect index estimated using the approach discussed previously. As is clear from this figure, the defect index increases with the age of the specimen, indicating that a more corrosive environment is developing in the deck as a result of chloride penetration and rebar corrosion.
Figure 49. Condition index calculated based on ER
Figure 50. Defect index calculated based on ER
5.3.4 Ultrasonic Surface Wave (USW)
The USW method is an offshoot of the spectral analysis of surface waves (SASW) method used to evaluate material properties (elastic moduli) in the near-surface zone. SASW utilizes the dispersive properties of surface waves (i.e., the velocity of propagation as a function of wavelength) in layered systems to obtain information about layer thickness and elastic moduli. To quantify the overall deck condition with respect to the concrete elastic modulus, the following expression has been introduced:

USW index = (A_Low × 0 + A_Moderate × 50 + A_High × 100) / A_Total    (14)

where A_High, A_Moderate, and A_Low are the areas with elastic moduli in the ranges of > 4400 ksi (30 GPa), 2900-4400 ksi (20-30 GPa), and < 2900 ksi (20 GPa), respectively, and A_Total is the total surveyed area. Changes in concrete quality described by the USW elastic modulus measurements are depicted in Figure 51, which plots the average elastic modulus over the entire BEAST deck for multiple rounds of data collection. It is evident that the overall deck condition, in terms of average elastic modulus, decreases as the deck ages. Similarly, Figure 52 plots the variation of the condition index over the different rounds of USW data collection from the BEAST and compares it against the performance curve of the representative bridge. Alternatively, as shown in Figure 53, the defect indexes indicate the locations of the bridge that are rated deficient and need immediate attention. A review of all three plots reveals the expansion of low-modulus zones from BEAST commissioning up to 2 million cycles of live-load passage. In fact, many of the low-modulus zones are more an indication of delamination than an actual measure of concrete modulus.
Figure 51. Average elastic modulus calculated based on USW for the entire BEAST deck area
Figure 52. Condition index calculated based on USW – elastic modulus
Figure 53. Defect index calculated based on USW – elastic modulus
5.3.5 Half-Cell Potential (HCP)
Similar to ER, HCP technology has long been used as an indicator of corrosion progression inside concrete. Correspondingly, the condition index with respect to corrosion activity was defined as:

HCP index = (A_90%Sound × 100 + A_Transition × 50 + A_90%Corrosion × 0) / A_Total    (15)

where A_90%Sound, A_Transition, and A_90%Corrosion are the areas with HCP in the ranges of > -200 mV, -350 to -200 mV, and < -350 mV, respectively, and A_Total is the total surveyed area. Figures 54 and 55 plot the variation of the condition index and defect index over the different rounds of HCP data collection from the BEAST and compare them against the performance curve of the representative bridge. As is clear from both figures, the HCP measurements strongly indicate that corrosion progresses within the deck as the BEAST specimen is subjected to increasing levels of live load, brine solution, and freeze-thaw cycles.
Figure 54. Condition index calculated based on HCP measurements
Figure 55. Defect index calculated based on HCP measurements

5.3.6 Multi-sensor Data Fusion for NDE
As discussed in the previous section, the NDE test results from a variety of technologies can be integrated and fused using the proposed data fusion methodology. To that end, the results from the multiple NDE tests discussed above were fused, and the corresponding damage density index was determined. Figure 56 plots the fused damage index for the periodic NDE data collections from the BEAST and shows the trend of this index against the predicted deck condition. The results show the higher sensitivity of the fused damage index with respect to the remaining service life of the bridge; it can therefore be successfully employed for the identification of deck deterioration.
Figure 56. Plot of Fused Damage Density Index
5.4 Drone-Based HD and IR Image Integration
As shown in Table 2, a drone-based platform was deployed to collect data in the later stages of the BEAST experiment. The procedures for the configuration, data collection, and post-processing of the HD and IR images were thoroughly discussed in the previous sections. In this section, the defect indexes calculated from the HD and IR images are plotted in Figure 57 and compared against the predicted performance of the representative bridge. A quick review of the results indicates the superiority of IR over HD images for tracking the progression of damage in the surface and subsurface layers. In other words, once damage appears at the surface of the concrete deck, regardless of its depth and width, the HD image will only quantify the extent of the damaged area; with IR images, however, the damaged area can be tracked as the deterioration progresses. It is therefore more robust to employ the IR technique to quantify surface and subsurface concrete damage.
Figure 57. Defect index calculated from HD and IR images
5.5 NDE Data Analysis Summary and Conclusion
In the previous subsections, the periodic data collected using multiple NDE techniques were thoroughly analyzed and assessed against the life cycle of a representative bridge. The deployed NDE techniques included a variety of contact and non-contact methods, and multiple NDE performance indicators were developed and compared. After a careful review of the results and performance indicators, the criteria shown in Table 8 were set. Table 8 was developed based on the review of the NDE performance indexes discussed in the previous subsections: the performance indexes were compared against the predicted performance of the bridge deck (under real-world in-service conditions), and the frequency and timing of NDE assessments were identified. It should be noted that the proposed criteria were set based on a very preliminary collection and analysis of data from the BEAST, with its unique structural characteristics and testing scheme; the extension of these criteria to actual bridges under different loading circumstances shall be further evaluated. The table consists of six main columns, each setting certain thresholds for a given NDE technology. The first and second columns refer to the proposed timing of the NDE assessment in terms of the age and condition rating of the deck, respectively. The third column proposes the time interval between consecutive NDE assessments, determined based on the sensitivity to change of a given NDE technology over the age of the concrete deck, as observed in the performance plots above. The fourth column specifies changes in a given NDE measurement that would trigger an additional NDE assessment. The last two columns correspond to the defect and condition indexes, with the expected percentage ranges listed based on the observations made during the BEAST experiment; any values abnormally outside these ranges should trigger serious concern about the functionality of the NDE technology or the performance of the deck. It is worth mentioning that while the condition rating of the BEAST's deck is somewhere in the range of 4-5, it is still in operational condition. Any further decisions on the extent of the proposed thresholds toward developing recommendations for effective maintenance and preservation actions require further investigation and data collection.

Table 8. Proposed Criteria for Quantification of Bridge Performance using NDE Indexes (recommended frequency of NDE assessment)
NDE Technology | Timing of NDE after Deck Construction | Timing of NDE by Deck Condition Rating | Interval for Repeating NDE | Trigger for Additional NDE | Range of Defect Index | Range of Condition Index
ER | > 15 years | <= 7 | 5-10 yrs | - | > 15% | < 30%
HCP | > 15 years | <= 7 | 5 yrs | - | > 20% | < 90%
GPR (Amplitude) | > 20 years | <= 6 | 5 yrs | - | > 17% | -
GPR (Cover) | > 20 years | <= 6 | 5 yrs | Drop in average > 0.5 inch | > 0% | -
USW | > 15 years | <= 7 | 5 yrs | Drop in average E > 250 ksi | > 5% | < 95%
IE (Dominant Frequency) | > 20 years | <= 6 | 2-5 yrs | - | > 15% | -
IE (Delamination Index) | > 20 years | <= 6 | 2-5 yrs | - | > 2% | < 90%
Multiple NDE | > 15 years | <= 7 | 5 yrs | - | > 7.5% | -
HD Image | > 20 years | <= 6 | 5 yrs | - | > 15% | -
IR Image | > 20 years | <= 6 | 5 yrs | - | > 15% | -
6. Structural Inspection During the accelerated aging period, structural inspections for the BEAST superstructure were conducted on February 26th, 2021, and July 27th, 2021. The inspection condition rating for the superstructure was a 7. Some photos are shown in Figure 58. The changes in the continuously collected SHM data are not notable. As a result, the structural behavior of the BEAST is still in acceptable condition. In order to observe structural degradation, more cycles need to be applied to the BEAST.
(a) Superstructure, looking north
(b) Underside of Deck at Bay 3, looking south
Figure 58. Structural inspection of BEAST

7. Cost-Benefit Analysis of Proposed Bridge Inspection Framework
In this section, a cost-benefit analysis of the proposed two-stage bridge inspection framework is provided. As described in Section 1, the primary scanning (vision-based) is conducted first. A rating of category #3 is "good" with light deterioration, and no further inspection is needed. Otherwise, the inspection proceeds to secondary scanning using various NDE technologies, and the vision-based results and fused NDE results are combined to make the final decision based on Table 1. A rating of category #2 is defined as "fair" with moderate deterioration, and a rating of category #1 is defined as "poor" with extensive deterioration; beyond #1, the condition is defined as "serious". With the proposed bridge inspection framework, bridge inspections can be more efficient and cost-effective. Cost-benefit analyses of vision-based and NDE-based bridge inspection are provided below.

7.1 Cost-Benefit Analysis of Vision-Based Bridge Inspection
In recent years, bridge monitoring using unmanned aerial vehicles (UAVs) has gained significant attention. Bridge inspection can be improved using UAVs integrated with a number of non-contact remote sensors. Even where drones are not allowed to fly over moving traffic and maintenance and protection of traffic (MPT) is still required, the duration of MPT is greatly reduced. In the previous sections, we described our UAV-based bridge deck inspection framework, including data collection strategies and data interpretation methods, and assessed its feasibility by comparing the results with NDE methods. UAV-based bridge inspection is recommended not only because it is rapid, but also because it is safe and cost-efficient, as demonstrated by several researchers. Scott [65] reported that UAV-based structural inspection can increase safety, increase efficiency, save time, and save money; they conducted UAV-based inspections of high-mast lighting around Liberty International Airport. Table 9 shows the cost comparison between the UAV-based approach and a traditional approach. As shown, the UAV approach is much more cost-efficient than the traditional approach.
Table 9. Cost comparison between UAV-based approach and traditional approach for high mast lighting inspection
Perry et al. [66] compared the UAV-based approach and traditional human-based techniques for bridge inspection. Table 10 shows the cost comparison. Table 10. Cost comparison between UAV approach and traditional approach for inspection of the same bridge
In addition to a high cost-benefit ratio, a UAV-based inspection provides additional benefits such as high-quality photographs for analysis and documentation, fewer safety risks, lower vehicle emissions, etc.
The comparison of the traditional and the existing UAV-based inspection techniques highlights the unique advantages of UAV-based techniques in terms of low cost, increased safety, and efficiency. The developed algorithms can perform in-depth data analytics to quantify and visualize damage automatically.

7.2 Cost-Benefit Analysis of NDE Bridge Inspection
Table 11 summarizes the NDE cost analysis. The NDE unit costs were derived from a preliminary analysis of past projects on a limited basis; the unit costs can be highly variable depending on the extent of testing, grid size, and project location. As shown in Table 11, traditional manual inspection techniques operate at low speed, while ground robotic inspection systems operate at high speed.
Table 11. NDE cost analysis (assumed area: 10,000 sqft)
Speed | NDE Technique | Grid Spacing | Equipment Rental Fee/Day ($) | Base Crew Size | Staff Rate ($/hr) | Base Crew Rate ($/day) | Production Rate (sqft/day) | Field Days | Field Cost ($) | Data Reduction Cost (75% of Field Costs) ($) | Reporting Cost (25% of Field Costs) ($) | Total Cost ($) | Unit Cost ($/sqft)
Low speed | Sounding | N/A | 0 | 2 | 150 | 2400 | 16000 | 1 | 2400 | 1800 | 600 | 4800 | 0.48
Low speed | USW | 5' x 5' | 600 | 2 | 150 | 2400 | 500 | 20 | 48600 | 36450 | 12150 | 97200 | 9.72
Low speed | IE | 5' x 5' | 375 | 2 | 150 | 2400 | 1000 | 10 | 24375 | 18281 | 6093 | 48750 | 4.87
Low speed | IR | N/A | 200 | 2 | 150 | 2400 | 12000 | 1 | 2600 | 1950 | 650 | 5200 | 0.52
Low speed | ER | 5' x 5' | 100 | 2 | 150 | 2400 | 2000 | 5 | 12100 | 9075 | 3025 | 24200 | 2.42
Low speed | HCP | 1' x 1' | 150 | 2 | 150 | 2400 | 1000 | 10 | 24150 | 18112 | 6037 | 48300 | 4.83
Low speed | GPR | N/A | 425 | 2 | 150 | 2400 | 2000 | 5 | 12425 | 9318 | 3106 | 24850 | 2.48
High speed | Manufacturing 1 (VI, USW, IR, IE, ER, GPR) | N/A | 1000 | 2 | 150 | 2400 | 16000 | 1 | 3400 | 3400 | 3400 | 10200 | 1.02
High speed | Manufacturing 2 (VI, GPR, IR) | N/A | 1000 | 2 | 150 | 1920 | 200000 | 1 | 2920 | 292 | 36.5 | 3248 | 0.33
High speed | Manufacturing 3 (VI, GPR, IR, ER, HCP, Sounding) | N/A | 1000 | 2 | 150 | 2400 | 20000 | 1 | 3400 | 2550 | 850 | 6800 | 0.68
8. Conclusion and Future Work
In this project, a novel two-stage bridge deck inspection framework has been proposed. In the first stage, a fast vision-based inspection using a UAV is proposed; deep learning-based methods have been applied to develop an automated vision-based data interpretation system. In the second stage, multiple NDE techniques are proposed to conduct in-depth inspections. Statistical analyses of the individual NDE data have been provided, and a novel multi-source NDE data fusion method has been developed. The feasibility of the proposed methods has been evaluated using data collected from the BEAST full-scale bridge specimen. In addition, a UAV data collection strategy has been studied in this project. More importantly, the dissemination, integration, and interpretation of the BEAST data have been conducted to correlate data from the BEAST with real bridges. Finally, a cost-benefit analysis of the proposed framework has been discussed.

This study was conducted based on data collected only from the BEAST; therefore, more work is needed to verify the feasibility of the proposed methods. Several future work directions are recommended:
1. For the vision-based methods, more data need to be collected from different bridges to ensure the generalization of the developed models.
2. For the drone data collection strategy, more rounds of data collection need to be conducted to draw more reliable conclusions.
3. For multi-source data fusion, the proposed methods need to be applied to more data collected from different bridges to further verify their feasibility.
4. It is important to develop vision-based methods capable of crack width determination. The proposed deep learning and image processing methods can later be tuned to extract crack pixels and a crack midline (i.e., crack skeleton). This can provide additional benefit for the state if the results can be translated into element condition states.
In addition, the BEAST was built with black rebar and normal-strength concrete, so the results obtained in this project serve only as references for bridges similar to the BEAST. In the future, more work should be done to verify the feasibility of the proposed methods on other types of bridges. Several directions are recommended:
1. Retrofit the BEAST with epoxy-coated rebar and high-strength concrete and collect more data.
2. Build a new specimen similar to the BEAST using current design standards to collect more data.
3. Deploy the proposed methods on a number of actual bridges to collect more data.
Acknowledgement
The research reported in this paper was conducted under a project sponsored by the IRISE public/private research consortium. At the time of publication, the consortium included the Pennsylvania Department of Transportation, the Federal Highway Administration (ex officio), Allegheny County, the Pennsylvania Turnpike Commission, Golden Triangle Construction, and Michael Baker International. IRISE was established in the Civil and Environmental Engineering Department of the University of Pittsburgh's Swanson School of Engineering to study problems related to transportation infrastructure durability and resiliency. More information on IRISE can be found at: https://www.engineering.pitt.edu/irise/. Special thanks to Sun Ho Ro, a graduate student at Rutgers University, for helping with data collection and processing.

Disclaimer
The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of any member of the IRISE research consortium at the time of publication. This report does not constitute a standard, specification, or regulation.
9. Appendix I: Methodology for single-class crack classifier
9.1 Convolutional Neural Networks
Deep learning is a subset of ML capable of extracting higher-level features from raw data using a multi-layered artificial neural network structure; it is also known as deep neural learning or deep neural networks. Among deep learning techniques, the CNN is the most well-known architecture. CNNs are inspired by the biological processes of the animal visual cortex [67]: in visual processing, individual cortical neurons respond to specific receptive fields, and the receptive fields of different neurons partially overlap to cover the entire visual field. CNNs employ the mathematical operation of convolution in place of general matrix multiplication in at least one of their layers [67]. A CNN usually consists of an input layer, convolutional layers, pooling layers, fully-connected layers, and an output layer. Usually, the input is presented as a tensor of shape (number of images × image width × image height × image channels). Convolutional layers apply a number of filters to local regions of the inputs to extract feature maps of the images, with shape (number of images × feature map width × feature map height × number of filters). For example, if the input consists of A images of size M × N pixels with C color channels, the shape of the input tensor will be A × M × N × C. Suppose the number of filters is k, W_i denotes the weights of filter i, b_i is the bias of filter i, x_s denotes the filter window patch, and a is the activation function (such as the rectified linear unit (ReLU), sigmoid, or tanh); the convolution of x_s given filter i for each image is then defined as:

Z_{i,s} = a(sum(W_i · x_s) + b_i)    (1)

By sliding the filter window through each image with patch window size f × f × C and stride size s (the filter's step in each direction), the convolutional output has size

A × (⌊(M − f)/s⌋ + 1) × (⌊(N − f)/s⌋ + 1) × k,

where ⌊·⌋ is the round-down (floor) function. Figure 1 illustrates the convolution processes of the convolutional layers.
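A short check of this output-size formula, with illustrative dimensions:

from math import floor

def conv_output_shape(A, M, N, C, k, f, s):
    """Shape after applying k filters of size f x f x C with stride s (no padding)."""
    return (A, floor((M - f) / s) + 1, floor((N - f) / s) + 1, k)

print(conv_output_shape(A=32, M=128, N=128, C=3, k=16, f=5, s=2))  # (32, 62, 62, 16)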
(a) Overall convolution process for each image through convolutional layer
(b) Detailed convolution process for each individual filter
Figure 1. Details of the convolutional layers
Convolutional layers are usually followed by a pooling layer. Pooling layers reduce the dimension of the data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer [68]. The pooling operation can compute the maximum or the average, depending on the expected outputs: max pooling passes the maximum value from each local cluster of the previous layer to the next layer, while average pooling passes the mean value. For example, max pooling of a patch x_s can be denoted as:

pool_s = max(x_s)    (2)

According to previous studies [69], max pooling performs better than average pooling on image datasets. The last layers of a CNN are fully-connected layers, which compute the class scores. Fully-connected layers connect every neuron in one layer to every neuron in the next; the flattened feature matrix passes through a fully-connected layer to classify the images. Widely used CNN structures normally consist of an input layer followed by several convolutional layers, pooling layers, fully-connected layers, and an output layer [70] [71]. Figure 2 shows an example of a CNN architecture.
Figure 2. An example of a CNN architecture
Depending on the convolution dimension and direction, there are 1D, 2D, and 3D CNNs. 1D-CNNs conduct the convolution calculation in one direction (along one axis), while 2D- and 3D-CNNs calculate convolutional values in two and three directions, respectively. In this study, a 1D-CNN is applied to extract and learn features from the flattened image frequency signals. Figure 3 shows an example of 1D convolution.
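A minimal Keras sketch of a 1D-CNN of this kind is shown below; the input length, filter counts, and layer sizes are illustrative assumptions, not the network reported here.

import tensorflow as tf

# Illustrative 1D-CNN for flattened frequency-amplitude vectors
# (binary crack / non-crack classification).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4096, 1)),            # flattened amplitude vector
    tf.keras.layers.Conv1D(16, 5, strides=2, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # crack probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])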
Figure 3. 1D convolution for N examples
9.2 Long Short-Term Memory
RNNs are a class of deep learning models suitable for sequential data. General RNNs have short-term memory issues: if a sequence is very long, it is difficult for an RNN to carry information from earlier steps to later steps, so for very long signals an RNN may lose important information from the beginning. Moreover, RNNs suffer from the vanishing gradient problem during back-propagation: if the gradient values become extremely small, learning does not improve significantly, and in typical RNNs small gradients can stop the layers' learning process. To solve these issues, an LSTM approach [72] can be used. LSTM is a variant RNN architecture with internal gates that regulate the flow of information; it consists of three gating structures that filter out empty inputs and redundant information and fuse similar information. Figure 4 shows the working mechanism of an LSTM. C_{t−1} and C_t are the cell states in the sequence, which act as conveyor belts to pass on information. Three gates are designed to add information to, or remove it from, the cell state. Gates are composed of a sigmoid neural-net layer and a pointwise multiplication operation; the output of the sigmoid layer is between 0 and 1, describing how much of each component should be passed through. The first gate, the "forget gate," decides what information should be removed: for example, if its output equals 0, the component should be completely removed [72]; Eq. (3) shows the calculation. The second gate decides what information should be added to the cell state; it consists of the output of a sigmoid layer and a new candidate created by a tanh layer, whose combination updates the state, as shown in Eqs. (4)-(6). The last step decides what information to output: the cell state is passed through a sigmoid layer and a tanh layer to generate the final output, as presented in Eqs. (7) and (8).
f_t = σ(W_f · [h_{t−1}, x_t] + b_f)    (3)
i_t = σ(W_i · [h_{t−1}, x_t] + b_i)    (4)
C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)    (5)
C_t = f_t ∗ C_{t−1} + i_t ∗ C̃_t    (6)
o_t = σ(W_o · [h_{t−1}, x_t] + b_o)    (7)
h_t = o_t ∗ tanh(C_t)    (8)

where f_t and i_t are the sigmoid layer outputs; C̃_t is the candidate state to be added; σ(·) is the sigmoid function; C_t is the current cell state; o_t is the sigmoid output applied to the cell state; h_t is the output of the current time step; W_f, W_i, W_C, and W_o are the layer weights; h_{t−1} and x_t are the output of the previous time step and the input of the current step, respectively; and b_f, b_i, b_C, and b_o are the bias terms [72]. The LSTM algorithm has been applied to long sequential signals such as natural language [73], heart rate signals [74], and speech signals [75].
Figure 4. Mechanism of LSTM
9.3 Discrete Fourier Transform for Images
Images can be transformed from the spatial domain to the frequency domain by a discrete Fourier transform (DFT). In the frequency domain, value and location are represented by sinusoidal relationships that depend upon the frequency of a pixel occurring within an image: a pixel location is represented by its x- and y-frequencies, and its value is represented by an amplitude. Images can be transformed to the frequency domain to determine which pixels contain the more important information and whether repeating patterns occur. In other words, the frequency domain describes the rate at which pixel values change in the spatial domain. Since image frequencies relate to the rate of pixel-value change, an image's frequency components are divided into two parts: high-frequency components, which correspond to edges, and low-frequency components, which correspond to smooth regions. Many researchers have applied this property to image filtering, compression, and reconstruction [76] [77] [78]. The DFT is a sampled Fourier transform; it therefore does not contain all the frequencies forming an image, but only a set of samples large enough to fully describe the spatial-domain image. The number of frequencies corresponds to the number of pixels in the spatial-domain image, so the images in the spatial and Fourier domains are the same size. For an image of size M × N, the 2D DFT can be presented as follows:
F(k, l) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) exp(−i2π(kx/M + ly/N))    (9)

where f(x, y) is the image pixel in the spatial domain, and exp(−i2π(kx/M + ly/N)) is the basis function corresponding to each point F(k, l) in the frequency domain. The equation can be interpreted as follows: the value of each point F(k, l) is obtained by multiplying the spatial image with the corresponding basis function and summing the result. Conversely, images in the frequency domain can be transformed back into the spatial domain; the inverse Fourier transform is given by:

f(x, y) = (1/MN) Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} F(k, l) exp(i2π(kx/M + ly/N))    (10)
For large images, a fast Fourier transform (FFT) is usually used to reduce the computational complexity and computation time. The FFT produces complex values, which comprise a real and an imaginary part, or equivalently a magnitude and a phase. In image processing, usually only the magnitude is displayed, because it contains most of the geometric structure information of the spatial-domain image. The magnitude and phase can be presented as follows:
|F(k, l)| = √(R(k, l)² + I(k, l)²)    (11)

φ(k, l) = tan⁻¹(I(k, l) / R(k, l))    (12)
where |F(k, l)| is the magnitude, φ(k, l) is the phase, and R(k, l) and I(k, l) are the real and imaginary parts of the FFT output, respectively. F(k, l) has low frequencies at the corners of the image and high-frequency areas in the center, which is inconvenient to interpret; thus, the zero frequency is usually shifted to the center. Figure 5 shows the shift of the frequency image center [79].
Figure 5. A 2D image frequency center shift, where u and v are spatial frequencies.
Figure 6 demonstrates the significant difference between the frequency distributions of images with and without cracks. As shown in Figure 6, the frequency spectrum has line-shaped sparks for the image with cracks, but no such sparks for the image without cracks. Low frequencies corresponding to smooth background regions are eliminated using a high-pass filter (HPF); evidently, the main difference between crack and non-crack images lies in the high-frequency domain. A frequency filter ratio of 0.5 means that only the top 50% of frequencies are kept. Ratios from 0.2-0.7 were tested to obtain the best performance, and 0.5 was finally selected to define the threshold: any frequency above the threshold is kept, and all lower frequencies are replaced with 0. After filtering, a frequency amplitude matrix is calculated for each image and flattened to a frequency amplitude vector. Figure 7 shows the original and filtered amplitudes of crack and non-crack images.
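One plausible numpy realization of this filtering step is sketched below; the radial-distance interpretation of "top 50% of frequencies" is an assumption for illustration.

import numpy as np

def highpass_amplitude(img, keep_ratio=0.5):
    """Flattened FFT amplitude of a grayscale image after high-pass filtering.

    The lowest (1 - keep_ratio) fraction of spatial frequencies, measured by
    radial distance from the shifted zero frequency, is replaced with 0.
    """
    F = np.fft.fftshift(np.fft.fft2(img))               # zero frequency to center
    M, N = img.shape
    u, v = np.meshgrid(np.arange(M) - M / 2,
                       np.arange(N) - N / 2, indexing="ij")
    radius = np.hypot(u, v)
    cutoff = np.quantile(radius, 1.0 - keep_ratio)      # low/high threshold
    F[radius < cutoff] = 0.0                            # drop smooth background
    return np.abs(F).ravel()                            # amplitude vector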
Figure 6. (a) Crack image; (b) frequency spectrum for crack image; (c) non-crack image; (d) frequency spectrum for non-crack image.
Figure 7. (a) Amplitude for crack image; (b) filtered amplitude for crack image; (c) amplitude for non-crack image; (d) filtered amplitude of non-crack image.
10. Appendix II: Methodology for multi-class classifier for crack and spall detection
Similar to the methods described in Appendix I, a 2D CNN is used here to extract features from images, and an LSTM is then applied at the feature level for fusion. During the feature extraction stage, the filter window size is always larger than the stride size, so the windows at consecutive steps overlap; this makes the extracted feature blocks strongly dependent on each other. The extracted feature maps can be reshaped into a 2D matrix, each row of which can be regarded as one input; in this sense, all rows of the feature map can be treated as sequential input data and passed to the LSTM for feature fusion. Figure 8 shows the feature fusion mechanism of the LSTM. The optimal CNN-LSTM architecture was selected via an extensive trial-and-error approach and developed using TensorFlow modules in Python 3.7. An overview of the developed network is shown in Figure 9. As shown in the figure, the feature extraction stage contains two convolutional layers, each followed by a pooling layer; kernel sizes of 6 and 5 with a stride of 2 are applied to the first and second convolutional layers, respectively, and a rectified linear unit (ReLU) activation function is applied to each convolutional layer. In the feature fusion stage, the extracted features are flattened and fused to the desired size with a fully connected (FC) layer, and the resulting feature vector is reshaped into a 2D feature matrix that serves as the sequential input to the LSTM. Finally, a softmax layer classifies the image as an intact, cracked, or spalled concrete surface. For the training process, stochastic gradient descent (SGD) is chosen as the optimizer with a mini-batch size of 32 out of 7200 images. An adaptive, logarithmically decreasing learning rate is adopted to speed up convergence; the initial learning rate and weight decay are 0.1 and 0.0001, respectively. A momentum value of 0.9 is applied, and a dropout ratio of 0.5 is applied to the FC and LSTM layers to avoid overfitting.
Figure 8. Feature fusion mechanism of LSTM
Figure 9. Architecture of 2D CNN-LSTM Network
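A minimal Keras sketch of a network in the spirit of Figure 9 is shown below. The kernel sizes, strides, dropout, and SGD settings follow the text; the input resolution, filter counts, and FC width are illustrative assumptions, and the logarithmically decreasing learning-rate schedule is reduced to a comment.

```python
# Illustrative sketch of the 2D CNN-LSTM classifier; layer widths and the
# 128x128 input are assumptions, not the exact architecture of the study.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(128, 128, 1), num_classes=3):
    model = models.Sequential([
        # Feature extraction: two conv + pooling stages (kernels 6 and 5, stride 2)
        layers.Conv2D(16, kernel_size=6, strides=2, activation="relu",
                      input_shape=input_shape),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(32, kernel_size=5, strides=2, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        # Feature fusion: flatten, fuse with a dropout-regularized FC layer,
        # then reshape rows of the fused vector as a sequence for the LSTM
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Reshape((16, 16)),            # 2D feature matrix: 16 steps x 16 features
        layers.LSTM(64, dropout=0.5),
        # Classification into intact / crack / spall
        layers.Dense(num_classes, activation="softmax"),
    ])
    # A logarithmically decreasing schedule (e.g., ExponentialDecay) could
    # replace the fixed rate; weight decay of 1e-4 is omitted for brevity.
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```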
11. Appendix III: Methodology for subsurface defects detection
11.1 Xception
Convolutional neural networks (CNNs) have been widely used for vision-based tasks. Different networks may share similar sets of feature extraction layers, referred to as the backbone. Frequently used backbones include AlexNet [80] and VGG16/19 [81]. In this study, Xception [82] is selected as the backbone. Xception is a building block for deep networks developed by Google that improves the efficiency of convolutional neural networks by introducing depth-wise separable convolutions, in which the convolution kernel is applied to each channel independently to extract spatial information and features. It consists of two steps: point-wise convolution and depth-wise convolution. As shown in Figure 10, 1x1 convolutions are first applied to the input to reduce the dimension, and n x n convolutions are then applied to each channel to perform the depth-wise convolution. The extracted features are stacked and passed to the next layer.
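As a brief illustration, the two steps can be sketched in Keras as follows; the filter count and kernel size are assumptions.

```python
# Sketch of a depth-wise separable convolution in the Xception style:
# a 1x1 point-wise convolution followed by a per-channel (depth-wise)
# n x n convolution. Filter count and kernel size are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, kernel_size=1)(inputs)                   # point-wise: mix channels, set depth
x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(x)   # depth-wise: per-channel spatial filtering
# Keras also offers layers.SeparableConv2D, which fuses both steps
# (in the usual depth-wise-then-point-wise order).
model = tf.keras.Model(inputs, x)
```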
Figure 10. Depth-wise separable convolution
11.2 DeepLabV3+
DeepLabV3+ is a powerful semantic segmentation module developed by Google [83]. It utilizes an encoder-decoder architecture with Atrous spatial pyramid pooling (ASPP), which encodes multi-scale contextual information by applying spatial pyramid pooling to Atrous convolutions. The top part of Figure 11 shows the Atrous convolution process, which can be expressed as

y[j] = \sum_{n} x[j + r \cdot n] \, w[n]   (13)

where j is the location, n indexes the filter weights, w is the filter weight, and r is the Atrous rate corresponding to the stride used to sample the input.
By combining the advantages of Atrous convolution and spatial pyramid pooling, ASPP robustly segments objects at multiple scales, as shown in Figure 12 (an illustrative sketch follows Figure 12). The encoder-decoder architecture of DeepLabV3+ enables location/spatial information to be recovered. In the encoder stage, ASPP is used to extract local features, and the encoder output has a much smaller spatial resolution than the input image. In the decoder stage, the encoder features are upsampled and then concatenated with the corresponding low-level features extracted at the beginning of the network. Figure 12 shows an example of the encoder-decoder architecture.
Figure 11. Atrous Spatial Pyramid Pooling
Figure 12. Encoder-Decoder Architecture
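The following is a minimal sketch of an ASPP-style block built from atrous (dilated) convolutions, assuming TensorFlow/Keras. The branch rates follow common DeepLab settings, and the image-pooling branch is omitted, so this is not the exact configuration used in the study.

```python
# Simplified ASPP block: parallel atrous convolutions at several rates,
# concatenated. Rates (6, 12, 18) and 256 filters are common DeepLab
# defaults, assumed here for illustration.
import tensorflow as tf
from tensorflow.keras import layers

def aspp_block(x, filters=256, rates=(6, 12, 18)):
    """Apply parallel atrous convolutions at multiple rates and concatenate."""
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for r in rates:
        # dilation_rate=r spaces the filter taps r pixels apart, enlarging the
        # receptive field without extra parameters (eq. (13) with atrous rate r)
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=r, activation="relu")(x))
    return layers.Concatenate()(branches)

inputs = tf.keras.Input(shape=(None, None, 64))
outputs = aspp_block(inputs)
model = tf.keras.Model(inputs, outputs)
```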
12. Appendix IV: Multi-resource NDE Data Fusion Method
12.1 Discrete Wavelet Transform Based Image Fusion
Discrete wavelet transform (DWT) based methods have been widely used for image fusion due to their capability to extract low- and high-frequency content [84] [85] [86]. In the discrete wavelet transform, multiresolution wavelet decomposition is extended to 2D images by applying scaling functions and wavelet functions along the rows and columns [87], as shown in equations (14)-(17):

\phi_{LL}(x, y) = \phi(x)\phi(y)   (14)

\psi_{LH}(x, y) = \phi(x)\psi(y)   (15)

\psi_{HL}(x, y) = \psi(x)\phi(y)   (16)

\psi_{HH}(x, y) = \psi(x)\psi(y)   (17)
where ɸ(∙) is the scaling function and ψ(∙) is the wavelet function. The 2D discrete wavelet analysis operation consists of horizontal and vertical filtering and down-sampling. In the horizontal operation, a 1D low-pass filter L and high-pass filter H are applied to each row of the image I(x, y) to create the coefficient matrices I_L(x, y) and I_H(x, y). Vertical down-sampling and filtering follow, applying filters L and H to each column of I_L(x, y) and I_H(x, y). As a result, the original image is decomposed into four sub-images: I_LL(x, y), I_LH(x, y), I_HL(x, y), and I_HH(x, y). I_LL(x, y) can be considered a smoothed and down-sampled version of the original image and represents the approximation of I(x, y). I_LH(x, y), I_HL(x, y), and I_HH(x, y) are detail sub-images representing the horizontal, vertical, and diagonal details of I(x, y), respectively. Figure 13 shows the decomposition workflow.
Figure 13. Discrete wavelet decomposition
Wavelet fusion is used to combine multiresolution decomposition images [88]. The main concept of wavelet fusion is to generate new coefficients for a fused image based on the decomposed coefficients of the source images. In the fusion process, the coefficients of each decomposed sub-image are combined separately using different fusion rules. As shown in Figure 14, images I1(x, y) and I2(x, y) are each decomposed into four sub-images. Sub-image I1_LL(x, y) from image 1 is combined with I2_LL(x, y) from image 2 using the fusion rule ɸ, and the fused sub-image IF_LL(x, y) is obtained by equation (18); the other sub-images are fused in the same way. The fused sub-images are then used to construct the new image.

I_{F,LL}(x, y) = \phi\{I_{1,LL}(x, y), I_{2,LL}(x, y)\}   (18)
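As an illustration of this decompose-fuse-reconstruct workflow, the following sketch uses the PyWavelets library. The specific fusion rules (averaged approximations, maximum-magnitude details) are common choices, not necessarily the rules adopted in this study.

```python
# DWT-based fusion of two co-registered images using PyWavelets. The "db2"
# wavelet and the fusion rules below are illustrative assumptions.
import numpy as np
import pywt

def dwt_fuse(img1: np.ndarray, img2: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    # Decompose each image into approximation (LL) and detail (LH, HL, HH) sub-images
    LL1, (LH1, HL1, HH1) = pywt.dwt2(img1, wavelet)
    LL2, (LH2, HL2, HH2) = pywt.dwt2(img2, wavelet)

    fused_LL = 0.5 * (LL1 + LL2)   # fusion rule for the approximation sub-images
    fuse_detail = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # keep the stronger detail
    fused = (fused_LL, (fuse_detail(LH1, LH2),
                        fuse_detail(HL1, HL2),
                        fuse_detail(HH1, HH2)))

    return pywt.idwt2(fused, wavelet)  # reconstruct the fused image
```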
Figure 14. Wavelet Image Fusion
12.2 Improved D-S Theory
Dempster-Shafer (D-S) evidence theory is a framework for reasoning under uncertainty, often considered an inexact generalization of probability theory and Bayesian reasoning [89]. D-S theory was first introduced by Dempster [90] in the context of statistical inference and further developed by Shafer [91] [92]. In this study, the improved D-S theory proposed by Ye et al. [93] is used. D-S theory is built on a finite, non-empty frame of discernment containing M mutually exclusive and exhaustive hypotheses:

\Theta = \{H_1, H_2, \ldots, H_M\}   (19)

where M is the number of hypotheses and H_i (i = 1, 2, \ldots, M) is the i-th hypothesis. Based on the definition of the frame, the power set 2^\Theta can be derived as

2^\Theta = \{\emptyset, \{H_1\}, \{H_2\}, \ldots, \{H_M\}, \{H_1, H_2\}, \ldots, \{H_1, H_M\}, \ldots, \{H_1, H_2, \ldots, H_M\}\}   (20)

where H \subseteq \Theta and H \in 2^\Theta. A mass function, also called a basic probability assignment, is introduced on the 2^\Theta frame to describe the support degree of each hypothesis. A mass function is a function m: 2^\Theta \to [0, 1] that satisfies

m(\emptyset) = 0   (21)

\sum_{H \subseteq \Theta} m(H) = 1   (22)
where m(H) is the basic support degree of evidence m for hypothesis H. Assuming the system frame is \Theta = \{H_1, H_2, \ldots, H_M\}, the Lance distance between evidence m_i and m_j is

d_{ij}(L) = \frac{1}{M} \sum_{t=1}^{M} \frac{|m_{it} - m_{jt}|}{m_{it} + m_{jt}}   (23)

Then, the reliability degree of evidence m_i can be calculated as

rel(m_i) = \frac{D_i}{\sum_{i=1}^{N} D_i}   (24)

where D_i is

D_i = \sqrt{\sum_{j=1, j \neq i}^{N} (1 - d_{ij})^2}   (25)
D_i is a decreasing function of the distance between evidence m_i and the other evidence, and rel(m_i) reflects the reliability of evidence m_i. The original evidence can be modified as

M_1(H_j) = \sum_{i=1}^{N} m_i(H_j) \times rel(m_i)   (26)
Similarity is another way to measure the correlation between two pieces of evidence. In the improved D-S theory, the spectral angle cosine function is used to measure the similarity degree between evidence m_i and m_j:

S_{ij} = sim(m_i, m_j) = \frac{\sum_{t=1}^{M} m_{it} \times m_{jt}}{\sqrt{\sum_{t=1}^{M} m_{it}^2} \times \sqrt{\sum_{t=1}^{M} m_{jt}^2}}   (27)

The support degree and credibility of evidence m_i can then be calculated as

sup(m_i) = \sum_{j=1, j \neq i}^{N} S_{ij}   (28)

crd(m_i) = \frac{sup(m_i)}{\sum_{i=1}^{N} sup(m_i)}   (29)
The support degree reveals that if one piece of evidence is close to the other evidence, it receives a high level of support from them, and its credibility is correspondingly higher. Since crd(m_i) satisfies the normalization condition \sum_{i=1}^{N} crd(m_i) = 1, the revised evidence M_2(H_j) based on the spectral angle cosine function is

M_2(H_j) = \sum_{i=1}^{N} m_i(H_j) \times crd(m_i)   (30)
Once the original evidence has been corrected, the D-S fusion rule is applied. The improved D-S theory adopts an improved conflict redistribution strategy [94] to avoid the counterintuitive results caused by the normalization step:

M(H) = \sum_{H_i \cap H_j = H} M_1(H_i) M_2(H_j) + \Delta(H)   (31)

where \Delta(H) is the conflict redistribution indicator:

\Delta(H) = \begin{cases} \sum_{H \cap C = \emptyset} M_1(H) M_2(C), & M_1(H) > M_2(C) + \gamma \\ \sum_{C \cap H = \emptyset} M_1(C) M_2(H), & M_1(C) < M_2(H) - \gamma \\ \sum_{H \cap C = \emptyset} M_1(H) M_2(C)/2, & |M_1(H) - M_2(C)| < \gamma \end{cases}   (32)

where \gamma \in [0, 1] is the threshold of conflict redistribution. The value of \gamma can be set from prior knowledge or, in general, selected in [0.1, 0.5].
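A simplified numerical sketch of the evidence-correction stage (equations (23)-(30)) is given below, assuming each piece of evidence assigns mass only to the M singleton hypotheses; the final conflict-redistribution combination of equations (31)-(32) is omitted for brevity. Function and variable names are illustrative.

```python
# Evidence correction from the improved D-S theory, restricted to singleton
# hypotheses: m is an N x M array, m[i, t] = mass that evidence i assigns
# to hypothesis H_t, with each row summing to 1.
import numpy as np

def corrected_evidence(m: np.ndarray):
    N, M = m.shape

    # Lance distance between every pair of evidence (eq. 23);
    # a small epsilon guards against zero masses in the denominator
    d = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            d[i, j] = np.mean(np.abs(m[i] - m[j]) / (m[i] + m[j] + 1e-12))

    # Reliability degree (eqs. 24-25)
    D = np.sqrt(np.array([np.sum((1 - np.delete(d[i], i)) ** 2) for i in range(N)]))
    rel = D / D.sum()
    M1 = rel @ m                      # distance-corrected evidence (eq. 26)

    # Spectral angle cosine similarity, support, and credibility (eqs. 27-29)
    norms = np.linalg.norm(m, axis=1)
    S = (m @ m.T) / (norms[:, None] * norms[None, :])
    sup = S.sum(axis=1) - np.diag(S)  # exclude self-similarity
    crd = sup / sup.sum()
    M2 = crd @ m                      # similarity-corrected evidence (eq. 30)

    return M1, M2

# Example: three mass vectors over {H1, H2, H3}; the third conflicts with
# the first two and is therefore down-weighted in both corrections.
m = np.array([[0.60, 0.30, 0.10],
              [0.55, 0.35, 0.10],
              [0.10, 0.20, 0.70]])
M1, M2 = corrected_evidence(m)
```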
13. Appendix V: Methodology of 3D model development
13.1 Aerial Image Processing
Typical aerial image processing steps for aerial triangulation and digital model reconstruction are well explained in [95] [96]. Most UAV image files now contain Exchangeable Image File Format (EXIF) data, standard metadata describing the image and camera (e.g., camera status, exposure, altitude, date, GPS). In some cases, commercial UAV control software automatically generates a log file that separately records the position and orientation of each image. The symmetrical structure of the BEAST facility, especially the identical logos printed on both sides, can produce high RMS errors during the aerial triangulation process; these errors occur when the algorithm mismatches the orientation and angle of the images. Therefore, ten 1 ft x 1 ft Ringed Automatically Detected (RAD) targets were printed and attached to the structure to serve as manual survey points and enhance the 3D estimation process. Details of the RAD target and image tie process are found in [97]. Figure 15 shows the RAD targets attached to the BEAST, and Figure 16 shows the manual tie point setting process. The EXIF data and log files of 38 oblique images and 22 ground-level images, with at least 70% overlap, were used for the aerial triangulation process (a mathematical protocol for stereo model densification control that re-establishes the true position and orientation of the images) in the commercial software Bentley ContextCapture, shown in Figure 17. Aerial triangulation produces numerous automatic tie points to optimize and correlate the images. The model created 6,073 automatic tie points with an average RMS reprojection error of 0.6 px and an RMS distance to rays of 0.034 u, as shown in Figure 18.
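For illustration, EXIF metadata of the kind used here can be read with the Pillow library as sketched below; this is an assumed workflow, not the software actually used in the study.

```python
# Read EXIF metadata (e.g., DateTime, GPSInfo) from a UAV image using Pillow.
# Tag contents vary by drone and camera, so downstream parsing is
# sensor-specific.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    with Image.open(path) as img:
        raw = img.getexif()
    # Map numeric EXIF tag IDs to readable names
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
```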
Figure 15. RAD Targets on the BEAST Facility
(a) RAD Target; (b) Target Tie Point Generation
Figure 16. RAD Target and Manual Tie Point
Figure 17. Aerial Triangulation Process
Figure 18. Generated Automatic Tie Points
13.2 BEAST Facility Digital Models
Following aerial triangulation, the oriented images were used to generate a Digital Surface Model (DSM), which provides a detailed surface representation of the structure, as shown in Figure 19. Unnecessary meshes were trimmed and polished using the commercial software Unity and Blender, as shown in Figure 20. To overlay the thermal imagery of the concrete deck, the top 1 in. of the concrete deck in the DSM was designated as a separate group; a separate block created from the thermal images of the deck was then merged into the DSM, as shown in Figure 21.
Figure 19. Completed Digital Surface Model of the BEAST Facility with Camera Positions
Figure 20. Completed Digital Surface Model of the BEAST Facility
Figure 21. 3D Model with thermal images
14. Appendix VI: UAV data collection plans
14.1 Equipment Specifications
Platform: DJI Matrice 600 Pro
Camera system 1: Zenmuse X5 camera, Olympus M.Zuiko Digital 45mm F1.8 lens
Camera system 2: FLIR Vue Pro R (640, 45 degrees FOV, 13mm, 30Hz), Gremsy S1V2 Gimbal
Software: DJI control app for drone control; for camera and gimbal control, the DJI Channel Expansion Kit is used to operate the camera shutter through a PWM connection.
14.2 Drone Altitude
The Instantaneous Field of View (IFOV) is calculated from the Field of View (FOV) as

IFOV = (FOV / number of pixels) x [(3.14/180)(1000)]

where the camera's horizontal pixel resolution is used as the number of pixels and the IFOV is in milliradians. Applying the equation to the FLIR Vue Pro R (640 x 480):
IFOV = (45 / 640) x [(3.14/180)(1000)] = 1.2266 mrad

The IFOV is then converted to inches:

IFOV (inches) = (IFOV_mrad / 1000) x (distance from the target in inches)

For example, if the drone altitude is 40 ft, then IFOV (inches) = (1.2266/1000) x (40 x 12) = 0.589. This gives a Spot Size Ratio (SSR) of 0.589:480, the measurable size of a single pixel (1 x 1); in other words, the camera can measure a 0.589-inch spot from 40 feet away. However, FLIR recommends that the requested spot value cover at least 3 x 3 pixels, since a 1 x 1 pixel measurement is often inaccurate. The SSR values for 1 x 1 and 3 x 3 pixel spots at various distances are:

Distance (ft)   SSR_1 (in., 1x1 pixel)   SSR_3 (in., 3x3 pixels)
10              0.147192                 0.441576
20              0.294384                 0.883152
30              0.441576                 1.324728
40              0.588768                 1.766304
50              0.73596                  2.20788
60              0.883152                 2.649456
70              1.030344                 3.091032
80              1.177536                 3.532608
90              1.324728                 3.974184
100             1.47192                  4.41576
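The calculation above can be reproduced with a few lines of Python; note that math.pi is used here instead of the report's 3.14 approximation, so values differ slightly in the fourth decimal.

```python
# Reproduce the IFOV / spot-size calculation for the FLIR Vue Pro R
# (45-degree FOV, 640-pixel horizontal resolution).
import math

def ifov_mrad(fov_deg: float, h_pixels: int) -> float:
    """Instantaneous field of view in milliradians per pixel."""
    return (fov_deg / h_pixels) * (math.pi / 180) * 1000

def spot_size_inches(ifov: float, distance_ft: float, pixels: int = 1) -> float:
    """Measurable spot size (inches) for a pixels x pixels window at a given altitude."""
    return pixels * (ifov / 1000) * (distance_ft * 12)

ifov = ifov_mrad(45, 640)  # ~1.227 mrad (the report uses 3.14, giving 1.2266)
for d in range(10, 110, 10):
    print(d, round(spot_size_inches(ifov, d), 3), round(spot_size_inches(ifov, d, 3), 3))
```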
14.3 General Framework
A. Use 16-bit TIFF for the best result.
B. Conversion: all images should share the same dynamic scale range so that a given temperature value corresponds to the same digital number (DN) value in every image (see the sketch after this list).
C. Update/combine EXIF data: positioning information derived from the drone GPS log files.
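A hedged sketch of item B is given below: each 16-bit radiometric TIFF is mapped to a shared temperature-to-DN scale so that equal DN values mean equal temperatures across frames. The linear gain/offset and the temperature range are illustrative assumptions that depend on the sensor calibration.

```python
# Rescale raw radiometric DNs to a common 16-bit scale. The 0.04 K/DN gain
# and -273.15 offset are assumed calibration constants, not values from
# the report; substitute the sensor's actual radiometric calibration.
import numpy as np

def to_common_scale(dn: np.ndarray, t_min: float = -20.0, t_max: float = 60.0,
                    gain: float = 0.04, offset: float = -273.15) -> np.ndarray:
    """Map raw DNs -> temperature (deg C) -> shared 16-bit dynamic range."""
    temp_c = dn * gain + offset                  # sensor-specific radiometric calibration
    norm = (temp_c - t_min) / (t_max - t_min)    # fixed dynamic range shared by all images
    return np.clip(norm * 65535, 0, 65535).astype(np.uint16)
```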
14.4 Drone Data Collection Plan for IR Images

                             Collection #1           Collection #2           Collection #3
Time                         Morning (10-12)         Afternoon (3-5 pm)      Evening
Distance from deck (feet)*   30/40/50/60/70/80       30/40/50/60/70/80       30/40/50/60/70/80
Camera angles                Vertical / Oblique      Vertical / Oblique      Vertical / Oblique
Overlap                      75%                     75%                     75%
Deck condition               Dry; Wet (if possible)  Dry; Wet (if possible)  Dry/Wet (if possible)

HD images: same settings as the IR images (only before sunset).

*Distance from the deck: 20 feet covers an extreme local area; 40 feet a local area; 80 feet the entire deck.
Note on camera angles:
Vertical: the drone flies over the bridge deck with the camera pointing vertically downward (0-degree gimbal setting).
Oblique: the drone flies over the bridge deck with the camera angle varying from 20 to 60 degrees.
15. References
[1] "ASCE'S 2017 infrastructure report card: bridges," 2017. [Online]. Available: https://www.infrasturecturereportcard.org/cat-item/bridges/. [2] J. Fleming, "Bridge Management System (BMS2) Coding Manual (Publication 100A)," 2019. [3] A. Ellenberg, A. Kontsos, I. Bartoli and A. Pradhan, "Masonry Crack Detection Application of an Unmanned Aerial Vehicle," in International Conference on Computing in Civil and Building Engineering, 2014. [4] H. Kim, S. Sim and S. Cho, "Unmanned aerial vehicle (UAV)-powered concrete crack detection based on digital image processing," in 6th International Conference on Advances in Experimental Structural Engineering, Chicago, 2015. [5] S. Iyer and S. K. Sinha, "A robust approach for automatic detection and segmentation of cracks in underground pipeline images," Image Vis. Comput., vol. 23, no. 10, pp. 921–933, 2005. [6] M. Salman, S. Mathavan, K. Kamal and M. Rahman, "Pavement crack detection using the Gabor filter," in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), 2013. [7] A. M. Talab, Z. Huang, F. Xi and L. Ming, "Detection crack in image using Otsu method and multiple filtering in image processing techniques," Optik, vol. 127, no. 3, pp. 1030–1033, 2016. [8] B. Shan, S. Zheng and J. Ou, "A stereovision-based crack width detection approach for concrete surface assessment," KSCE J. Civ. Eng., vol. 20, no. 2, pp. 803–812, 2016. [9] S. K. Sinha and P. W. Fieguth, "Automated detection of cracks in buried concrete pipe images," Autom. Constr., vol. 15, no. 1, pp. 58–72, 2006. [10] Q. Zou, Y. Cao, Q. Li, Q. Mao and S. Wang, "CrackTree: Automatic crack detection from pavement images," Pattern Recognit. Lett., vol. 33, no. 3, pp. 227–238, 2012. [11] Y. Fujita and Y. Hamamoto, "A robust automatic crack detection method from noisy concrete surfaces," Mach. Vis. Appl., vol. 22, no. 2, pp. 245–254, 2011. [12] J. B. Butcher, C. R. Day, J. C. Austin, P. W. Haycock, D. Verstraeten and B. Schrauwen, "Defect detection in reinforced concrete using random neural architectures," Comput. Aided Civ. Infrastruct. Eng., vol. 29, no. 3, pp. 191–207, 2014. [13] X. Jiang and H. Adeli, "Pseudospectra, MUSIC, and dynamic wavelet neural network for damage detection of highrise buildings," Int. J. Numer. Methods Eng., vol. 71, no. 5, pp. 606–629, 2007.
[14] S. W. Liu, J. H. Huang, J. C. Sung and C. C. Lee, "Detection of cracks using neural networks and computational mechanics," Comput. Methods Appl. Mech. Eng., vol. 191, no. 25-26, pp. 2831–2845, 2002. [15] H. Moon and J. Kim, "Intelligent crack detecting algorithm on the concrete crack image using neural network," in 28th ISARC, 2011. [16] M. O'Byrne, B. Ghosh, F. Schoefs and V. Pakrashi, "Regionally enhanced multiphase segmentation technique for damaged surfaces," Comput.-Aided Civ. Infrastruct. Eng., vol. 29, no. 9, pp. 644–658, 2014. [17] M. Maguire, S. Dorafshan and S. Thomas, "SDNET2018: a concrete crack image dataset for machine learning applications," Logan: Utah State University, 2018. [18] A. Das, "Interpretation and processing of image in frequency domain," in Guide to Signals and Patterns in Image Processing, Berlin, Springer, 2015, pp. 93–147. [19] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen and C. Citro, "TensorFlow: large-scale machine learning on heterogeneous distributed systems," arXiv:1603.04467, 2016. [20] Q. Zhang, K. Barri, S. K. Babanajad and A. H. Alavi, "Real-Time Detection of Cracks on Concrete Bridge Decks Using Deep Learning in the Frequency Domain," Engineering, DOI: 10.1016/j.eng.2020.07.026, 2020. [21] A. Buades, B. Coll and J. M. Morel, "Non-Local Means Denoising," Image Processing On Line, pp. 208–212, 2011. DOI: 10.5201/ipol.2011.bcm_nlm. [22] T. R. Singh, S. Roy, O. I. Singh, T. Sinam and K. M. Singh, "A New Local Adaptive Thresholding Technique in Binarization," International Journal of Computer Science, vol. 8, no. 6, pp. 1694-0814, 2011. [23] P. J. Clark and F. C. Evans, "Distance to Nearest Neighbor as a Measure of Spatial Relationships in Populations," Ecology, vol. 35, no. 4, pp. 445-453, 1954. [24] Q. Zhang and A. Alavi, "Automated two-stage approach for detection and quantification of surface defects in concrete bridge decks," in Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XV, Long Beach, CA, USA, 2021. [25] N. Gucunski and N. R. Council, "Nondestructive testing to identify concrete bridge deck deterioration," Transportation Research Board, 2013. [26] S. A. Dabous, S. Yaghi, S. Alkass and O. Moselhi, "Concrete bridge deck condition assessment using IR Thermography and Ground Penetrating Radar technologies," Automation in Construction, vol. 81, pp. 340-354, 2017. [27] T. Omar and M. L. Nehdi, "Clustering-Based Threshold Model for Condition Assessment of Concrete Bridge Decks Using Infrared Thermography," in Proc., International Congress and Exhibition "Sustainable Civil Infrastructures: Innovative Infrastructure Geotechnology", Springer, 2017. [28] G. Washer, R. Fenwick, S. Nelson and R. Rumbayan, "Guidelines for thermographic inspection of concrete bridge components in shaded conditions," Transportation Research Record: Journal of the Transportation Research Board, pp. 13-20, 2013. [29] G. G. Clemena and W. T. McKeel, "Detection of Delamination in Bridge Decks with Infrared," Transportation Research Record, vol. 1, pp. 180-182, 1978. [30] M. Everingham, L. Van Gool, C. K. Williams, J. Winn and A. Zisserman, "The PASCAL Visual Object Classes (VOC) Challenge 2012," International Journal of Computer Vision, vol. 2, no. 88, pp. 303-338, 2010. [31] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016. [32] E. McLaughlin, N. Charron and S. Narasimhan, "Combining Deep Learning and Robotics for Automated Concrete Delamination Assessment," ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 36, pp. 485-492, 2019. [33]
MATLAB, "version 7.10.0 (2019a)," in The MathWorks Inc, 2019.
[34] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro et al., "TensorFlow: large-scale machine learning on heterogeneous distributed systems," arXiv:1603.04467, 2016. [35] Q. Zhang, K. Barri and Z. Wan, "A Deep Learning-based Autonomous System for Detection and Quantification of Delamination on Concrete Bridge Decks," in International Bridge Conference, Pittsburgh, PA, USA, 2021. [36] F. Lundh, "An introduction to tkinter," URL: www.pythonware.com/library/tkinter/introduction/index.htm, 1999.
[37] M. J. Sansalone and W. B. Streett, "Impact-echo," Nondestructive Evaluation of Concrete and Masonry, 1997. [38] J. J. Daniels, "Ground Penetrating Radar Fundamentals," prepared as an appendix to a report to the U.S. EPA, Region V, 2000. [39] B. Elsener, C. Andrade, J. Gulikers, R. Polder and M. Raupach, "Half-cell potential measurements: potential mapping on reinforced concrete structures," Materials and Structures, pp. 461-471, 2003. [40] S. Lee, N. Kalos and D. H. Shin, "Non-Destructive Testing Methods in the U.S. for Bridge Inspection and Maintenance," KSCE Journal of Civil Engineering, vol. 18, no. 5, pp. 1322-1331, 2014.
[41] M. R. Clark, D. M. McCann and M. C. Forde, "Application of infrared thermography to the non-destructive testing of concrete and masonry bridges," NDT & E International, vol. 36, no. 4, pp. 265–275, 2003. [42] Q. Zhang and A. Alavi, "Automated two-stage approach for detection and quantification of surface defects in concrete bridge decks," in Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XV, Long Beach, 2021. [43] A. A. Hesse, R. A. Atadero and M. E. Ozbek, "Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XV," Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XV, vol. 20, no. 11, 2015. [44] M. Ahmed, O. Moselhi and A. Bhowmick, "Two-tier data fusion method for bridge condition assessment," Canadian Journal of Civil Engineering, vol. 45, no. 3, 2018. [45] S. Babanajad and F. Jalinoos, "A Framework for Assessing Corrosion and Damage in Concrete Bridge Decks Using Multi-Sensor NDE Data," in preparation for peer-reviewed journal submission, 2021. [46] F. Jalinoos, S. Babanajad and F. Moon, "QC/QA Procedures for NDE Contour Maps Collected through the LTBP Program," FHWA Internal Technical Report, Office of Infrastructure Research and Development, FHWA, Washington, DC, 2020. [47] S. Yaghi, "Integrated remote sensing technologies for condition assessment of concrete bridges," M.Sc. dissertation, Department of Building, Civil and Environmental Engineering, Concordia University, 2014. [48] Y. Deng, "A threat assessment model under uncertain environment," Math. Probl. Eng., vol. 201, pp. 1-12, 2015. [49] C. R. Parikh, M. J. Pont and N. B. Jones, "Application of Dempster–Shafer theory in condition monitoring applications: A case study," Pattern Recognit. Lett., vol. 22, pp. 777-785, 2001. [50] X. F. Fan and M. J. Zuo, "Fault diagnosis of machines based on D-S evidence theory. Part 2: Application of the improved D-S evidence theory in gearbox fault diagnosis," Pattern Recognit. Lett., vol. 27, pp. 377-385, 2006. [51] G. Dong and G. Kuang, "Target Recognition via Information Aggregation Through Dempster–Shafer's Evidence Theory," IEEE Geosci. Remote Sens. Lett., vol. 12, pp. 1247-1251, 2015. [52] J. Hu, Z. Yu, X. Zhai and J. Peng, "Research of decision fusion diagnosis of aero-engine rotor fault based on improved D-S theory," Acta Aeronautica et Astronautica Sinica, vol. 35, pp. 436-443, 2014.
[53] B. Zhang, "Study on image fusion based on different fusion rules of wavelet transform," in 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), pp. V3-649-V3-653, 2010. [54] Q. Zhang and A. Alavi, "Improving Bridge Assessment via Fusion of Multi-resource Nondestructive Evaluation," in Structures Congress, Atlanta, GA, USA, 2022. [55] H. Shakhatreh, A. H. Sawalmeh, A. Al-Fuqaha, Z. Dou, E. Almaita, I. Khalil, N. S. Othman, A. Khreishah and M. Guizani, "Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges," IEEE Access, vol. 7, pp. 48572-48634, 2019. [56] P. S. Ramesh and J. V. Muruga Lal Jeyan, "Comparative analysis of the impact of operating parameters on military and civil applications of mini unmanned aerial vehicle (UAV)," in AIP Conference Proceedings, 2020. [57] E. Jeong, J. Seo and J. Wacker, "Literature Review and Technical Survey on Bridge Inspection Using Unmanned Aerial Vehicles," Journal of Performance of Constructed Facilities, vol. 34, no. 6, 2020. [58] S. Hiasa, R. Birgul and F. N. Catbas, "Infrared thermography for civil structural assessment: demonstrations with laboratory and field studies," J. Civ. Struct. Health Monit., vol. 6, no. 3, pp. 619-636, 2016. [59] G. Washer, R. Fenwick, S. Nelson and R. Rumbayan, "Guidelines for the thermographic inspection of concrete bridge components in shaded conditions," Transportation Research Record, pp. 12-20, 2013. [60] Q. Zhang, S. H. Ro, J. Gong, F. Moon and A. Alavi, "Recent Advances in Bridge Condition Assessment Using Unmanned Aerial Vehicles," in International Workshop of Structural Health Monitoring, Stanford, CA, USA, 2022. [61] LTBP InfoBridge, "Long-term Bridge Performance Program (LTBP) online webpage," Federal Highway Administration. [Online]. Available: https://infobridge.fhwa.dot.gov/Home. [62] N. Gucunski, G. R. Consolazio and A. Maher, "Concrete Bridge Deck Delamination Detection by Integrated Ultrasonic Methods," International Journal of Materials and Product Technology, vol. 26, no. 1-2, pp. 19-34, 2006. [63] K. R. Maser and A. Rawson, "Network Bridge Deck Surveys Using High Speed Radar: Case Studies of 44 Decks (Abridgement)," Transportation Research Record, vol. 1347, pp. 25-28, 1992. [64] N. Gucunski, B. Pailes, J. Kim, H. Azari and K. Dinh, "Capture and Quantification of Deterioration Progression in Concrete Bridge Decks through Periodical NDE Surveys," Journal of Infrastructure Systems, vol. 23, no. 1, pp. 1-11, 2016. [65] G. Stott, "Structural Inspections for High Mast Lighting," Advanced Infrastructure Design, Inc., 2021.
[66] B. J. Perry, Y. Guo, R. Atadero and J. W. van de Lindt, "Streamlined bridge inspection system utilizing unmanned aerial vehicles (UAVs) and machine learning," Measurement, vol. 164, p. 108048, 2020. [67] Y. LeCun, Y. Bengio and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015. [68] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella and J. Schmidhuber, "Flexible, high performance convolutional neural networks for image classification," in Twenty-Second International Joint Conference on Artificial Intelligence, 2011. [69] D. Scherer, A. Muller and S. Behnke, "Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition," in 20th International Conference on Artificial Neural Networks (ICANN), 2010. [70] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Netw., vol. 61, pp. 85–117, 2015. [71] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus and Y. LeCun, "Overfeat: Integrated recognition, localization and detection using convolutional networks," arXiv:1312.6229, 2013. [72] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735–80, 1997. [73] F. A. Gers and E. Schmidhuber, "LSTM recurrent networks learn simple context-free and context-sensitive languages," IEEE Trans. Neural Netw., vol. 12, no. 6, pp. 1333–40, 2001. [74] G. Swapna, K. Soman and R. Vinayakumar, "Automated detection of diabetes using CNN and CNN-LSTM network and heart rate signals," Procedia Comput. Sci., vol. 132, pp. 1253–62, 2018. [75] J. Zhao, X. Mao and L. Chen, "Speech emotion recognition using deep 1D & 2D CNN LSTM networks," Biomed. Signal Process. Control, vol. 47, pp. 312–23, 2019. [76] S. Chang, B. Yu and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression," IEEE Trans. Image Process., vol. 9, no. 9, pp. 1532–46, 2000. [77] M. Zhang and B. K. Gunturk, "Multiresolution bilateral filtering for image denoising," IEEE Trans. Image Process., vol. 17, no. 12, pp. 2324–33, 2008. [78] J. Portilla, V. Strela, M. J. Wainwright and E. P. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Trans. Image Process., vol. 12, no. 11, pp. 1338–51, 2003. [79] R. C. Gonzalez, R. E. Woods and B. Masters, Digital Image Processing, 3rd ed., Upper Saddle River: Prentice Hall, 2008.
[80] A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems 25 (NIPS 2012), 2012. [81] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv:1409.1556, 2015. [82] F. Chollet, "Xception: Deep Learning With Depthwise Separable Convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [83] L. Chen, Y. Zhu, G. Papandreou, F. Schroff and H. Adam, "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation," in European Conference on Computer Vision (ECCV), 2018. [84] T. Corpetti and O. Planchon, "Front detection on satellite images based on wavelet and evidence theory: Application to the sea breeze fronts," Remote Sensing of Environment, vol. 115, pp. 306–324, 2011. [85] Q. Jiang, X. Jin, S. J. Lee and S. Yao, "A Novel Multi-Focus Image Fusion Method Based on Stationary Wavelet Transform and Local Features of Fuzzy Sets," IEEE Access, vol. 5, pp. 20286-20302, 2017. [86] L. Guo, X. Cao and L. Liu, "Dual-tree biquaternion wavelet transform and its application to color image fusion," Signal Processing, vol. 171, p. 107513, 2020. [87] E. J. Stollnitz, T. D. DeRose and D. H. Salesin, "Wavelets for computer graphics: a primer, part 1," IEEE Comput. Graph. Appl., vol. 15, no. 3, pp. 76–84, 1995. [88] G. Pajares and J. M. Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004. [89]
Y. Deng, "Deng entropy," Chaos Solitons Fractals, vol. 91, pp. 549-553, 2016.
[90] A. P. Dempster, "Upper and Lower Probabilities Induced by a Multivalued Mapping," The Annals of Mathematical Statistics, vol. 38, no. 2, pp. 325-339, 1967. [91]
G. Shafer, "A Mathematical Theory of Evidence," Princeton University Press, 1976.
[92] T. L. Fine, "Review: Glenn Shafer, A mathematical theory of evidence," The Bulletin of the American Mathematical Society, vol. 83, no. 4, pp. 667-672, 1977. [93] F. Ye, J. Chen and Y. Li, "Improvement of DS Evidence Theory for Multi-Sensor Conflicting Information," Symmetry, vol. 9, no. 5, p. 69, 2017. [94] S. Zhang, Q. Pan and H. Zhang, "A New Kind of Combination Rule of Evidence Theory," Control Decis., vol. 15, pp. 540-544, 2000. [95] C. H. Hugenholtz, K. Whitehead, O. W. Brown, T. E. Barchyn, B. J. Moorman, A. LeClair, K. Riddell and T. Hamilton, "Geomorphological mapping with a small unmanned aircraft system (sUAS): Feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model," Geomorphology, vol. 194, pp. 16-24, 2013. [96] F. Nex, "UAV photogrammetry for mapping and 3D modeling: current status and future perspectives," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2010. [97] K. Whitehead, B. J. Moorman and C. H. Hugenholtz, "Low-cost, on-demand aerial photogrammetry for glaciological measurement," in Cryosphere Discussions, 7, 2013.
Swanson School of Engineering Department of Civil and Environmental Engineering IRISE Consortium 742 Benedum Hall 3700 O’Hara Street Pittsburgh, PA 15261
The information printed in this document was accurate to the best of our knowledge at the time of printing and is subject to change at any time at the University’s sole discretion. The University of Pittsburgh is an affirmative action, equal opportunity institution.