Blossomrust
Studio II, MRAC Master in Robotics and Advanced Construction
IaaC, Institute for Advanced Architecture of Catalonia
Tutors: Aldo Sollazzo, Daniel Serrano
Students: Abdelrahman Koura, Alexandros Varvantakis, Cedric Droogmans, Luis Jayme Buerba
Concept
Industry + Rust + Drone
1. Motivation
2. Existing Market
3. Strategy
4. Drone Navigation
5. Detection
6. Results
Abstract
Recognizing the need for, and the difficulty of, detecting rust in complex heavy-industry environments, our program proposes a solution for automated rust detection and mapping. This project demonstrates the possibilities for detecting rust inside factories using drones and machine learning.
COST OF CORROSION: INFRASTRUCTURE AND INDUSTRY
Chevron oil refinery, California: 2012 oil spill due to rusty pipes.
Some of the effects of corrosion include significant deterioration of structural elements as well as the risk of catastrophic equipment failures. $276 billion, about 3% of U.S. GDP, is spent on corrosion-related issues in the commercial, residential, and transportation sectors.
TYPES AND GRADES OF RUST
Corrosion is a chemical or electrochemical reaction. Common types include:
● Uniform attack
● Crevice corrosion
● Sulphide stress corrosion (hydrogen sulphide environments)
● Stress corrosion cracking
● Contact corrosion (contact between two metal surfaces)
Surface condition is commonly classified as Rust Grade A, B, C, or D.
STEEL LIFE CYCLE: EARLY RUST DETECTION
Fields of opportunity: infrastructure detection.
Structural effects of rust:
1. Loss of strength
2. Fatigue
3. Reduced bond strength
4. Limited ductility
5. Reduced shear capacity
Goal: early vs. late detection. Detected today, rust is merely aesthetically unpleasant; detected later, it threatens structural stability.
Market survey
Platforms surveyed: rovers, drones, fixed-wing drones (including U.S. Navy systems).
Sensors: RGB camera, thermal camera, infrared camera, LIDAR; modular payloads.
OVERVIEW
Field: heavy industries.
Inspection method: autonomous drones.
Detection method: computer vision and neural networks.
Deliverables: rust position, rust expansion, and a user interface.
DETECTION STEPS: PHYSICAL OPERATION (ON SITE)
1. Mapping flight
2. Rust hot-spot identification
3. Scanning the rest of the building
4. Rust detection
5. 3D interface for the user
DATA PROCESSING STEPS: DIGITAL
● Mapping flight: point cloud generation
● Point cloud segmentation: rust hot-spot mapping
● Mission planner
● Rust detection mapping
● Rust expansion
Workflow: the five steps, from the mapping flight to the 3D user interface.
RUST HOT-SPOT IDENTIFICATION
Typical hot-spots: metal roofing, exposed fasteners, exposed steel rods, ground connectors, and vertical bond connections between two different metals.
Atmospheric corrosion factors: sulfur dioxide, chlorides (high salt content in the air), ambient temperature, humidity, pollutants.
Soil corrosion factors: humidity, aeration, concentration of hydrogen ions (pH), dissolved salts, electrical conductivity.
Contact corrosion factors: difference in electrode potential of the materials' chemical elements, duration of contact, load on the structures.
MAPPING FLIGHT (PHYSICAL)
Point cloud generation from a manually navigated mapping flight.
From the raw point cloud generated by the mapping flight, planes and cylinders are segmented.
GEOMETRICAL SEGMENTATION: RUST HOT-SPOTS
The construction-site point cloud is segmented into a 1st plane, a 2nd plane, and cylinders; the segmented point cloud yields the rust hot-spots.
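Plane segmentation of the kind used above can be sketched with a minimal RANSAC loop. This is an illustrative NumPy-only stand-in, not the tooling we actually used on site:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Find the dominant plane in a point cloud with RANSAC:
    repeatedly fit a plane to 3 random points and keep the
    candidate with the most inliers within `tol` of it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic cloud: 100 points on the z=0 plane plus 20 outliers above it.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 1, (100, 2)), np.zeros(100)]
noise = rng.uniform(0, 1, (20, 3)) + np.array([0, 0, 0.5])
cloud = np.vstack([plane, noise])
mask = ransac_plane(cloud)
print(int(mask.sum()))  # inlier count for the dominant plane
```

Cylinder segmentation follows the same sample-and-score idea with a cylinder model instead of a plane.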
NAVIGATION STRATEGY
Mission Planner
Drone Control
Structural Inspection Path Planning
Bebop_autonomy
ETHZ, Dr. Kostas Alexis. For the autonomous inspection of the "hotspots", a mission planner is needed to generate a path that lets the drone take pictures of every surface of the object in question, and a controller is needed to send continuous commands so the drone follows this path. The mission planner we implemented is the "Structural Inspection Path Planner" developed by Dr. Kostas Alexis and his team at ETHZ. The controller was developed by the Autonomy Lab of Simon Fraser University, with additional scripts by Soroush Garivani of IaaC and our own research.
Autonomy Lab of Simon Fraser University by Mani Monajjemi
Inputs: an octomap of the environment (from the point cloud) and a JSON file of hotspots (location, type, size, mesh).
Structural Inspection Path Planning (ETHZ - Dr. Kostas Alexis):
● Load the mesh model
● Viewpoint sampler: create a set of points from which all faces of the mesh are covered by the camera's field of view (Art Gallery Problem)
● Create collision-free paths through the viewpoints
● Test different paths (Travelling Salesman Problem)
● Loop through the options to find the best, or a near-best, configuration
Pre-processing: a JSON file reader sorts the hotspot locations, the mesh is downsampled to a uniform mesh, and the hotspots are linked together; the output is a waypoint file for navigation.
This algorithm is split into two parts: first, points are generated from which each face of the mesh is visible; second, these points are connected in an efficient way. At the core of this two-step process are two classic math problems: the "Art Gallery Problem" and the "Travelling Salesman Problem".
STRUCTURAL INSPECTION PATH PLANNING: ART GALLERY PROBLEM
On the left, the subdivision of 3D space: each voxel is segmented into 8 smaller voxels, increasing the resolution of the representation.
To take into account the existing environment in which the inspection is going to take place, the point cloud from chapter (X.x) needs to be processed into an octomap. An octomap is a 3D mapping framework based on octrees; an octree is a data structure for representing 3D space. Converting the point cloud first into a mesh and then into an octomap gives us a voxelized representation of the environment in which each voxel is either occupied or empty. If a voxel is occupied, there is some sort of structure inside that space; if it is empty, the drone is free to fly through it. For this process we used Agisoft (https://www.agisoft.com) and Binvox (https://www.patrickmin.com/binvox/).
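The occupancy idea behind the octomap can be sketched in a few lines. This is a minimal flat-grid stand-in for illustration, not the octree data structure Binvox or Octomap actually build:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map 3D points to a set of occupied voxel indices.

    A minimal stand-in for the point-cloud-to-octomap step:
    a voxel is 'occupied' if at least one point falls inside it;
    every other voxel is free space for the drone.
    """
    idx = np.floor(points / voxel_size).astype(int)
    return {tuple(v) for v in idx}

# Two points share one 0.5 m voxel; a third lands in another voxel.
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.3, 0.4], [1.0, 1.0, 1.0]])
occupied = voxelize(pts, 0.5)
print(len(occupied))  # 2 occupied voxels
```

An octree stores the same occupancy information hierarchically, so large empty regions collapse into a single coarse voxel.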
Environment
ART GALLERY PROBLEM
Observable space
Triangulate
The Art Gallery Problem is a visibility problem originating from the need of galleries and museums to minimize the number of guards. To solve it, one first triangulates the space, then assigns one of three colors to each corner so that the same color never appears twice in a triangle. One then counts the occurrences of each color and chooses the color with the fewest points; guards placed at those corners see the entire space. There are different approaches to triangulation and tri-coloring that would be interesting for further study.
Tri-Coloring
Solution
Sources: https://en.wikipedia.org/wiki/Art_gallery_problem, https://www.youtube.com/watch?v=2iBR8v2i0pM, https://pjdelta.wordpress.com/, https://discuss.inventables.com/t/traveling-salesman-optimization/66124
Mission Planner
Example of the simplest triangulation and tri-coloring; more advanced algorithms are used by the path planner, which also takes the camera's field of view into account.
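The triangulate-then-tricolor idea can be illustrated for the easiest case, a convex polygon with a fan triangulation. This is a sketch of the counting argument only, not the planner's actual viewpoint sampler:

```python
def guard_count(n):
    """Art Gallery sketch for a convex polygon with n vertices.

    Fan-triangulate from vertex 0 into triangles (0, i, i+1).
    Give vertex 0 color 0 and alternate colors 1 and 2 around
    the rest; every triangle then carries all three colors.
    The least-used color is a valid guard placement.
    """
    colors = [0] + [1 if i % 2 == 1 else 2 for i in range(1, n)]
    counts = [colors.count(c) for c in (0, 1, 2)]
    # Chvátal's theorem guarantees floor(n/3) guards always suffice.
    return min(counts)

print(guard_count(6))  # 1 guard covers a convex hexagon
```

For non-convex galleries the triangulation step is harder, but the tri-coloring count works the same way.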
STRUCTURAL INSPECTION PATH PLANNING TRAVELING SALESMAN PROBLEM
Every connection has a cost, e.g. 42 + 12 + 34 + 20 = 108 versus 35 + 12 + 30 + 20 = 97.
"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" The simplest way to solve the Travelling Salesman Problem is to assign a cost to each connection between points, then iterate through all possible paths connecting all the points, keeping the path with the lowest cost. This becomes increasingly difficult the more points one wants to connect.
An interesting further study for our project would be assigning a cost to each transition between path segments: a drone has specific flight properties, so sharp turns cost more time and energy, whereas smooth curves are more efficient. The search is always for the route with the minimum total cost.
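The brute-force approach described above can be sketched directly with `itertools`. The cost matrix below is made up for illustration; the ETHZ planner uses far more efficient heuristics than this factorial-time search:

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """Exhaustive TSP: try every tour starting and ending at city 0.

    dist is a symmetric matrix of pairwise costs. Exact but
    factorial-time, so only usable for a handful of waypoints.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Four points with illustrative pairwise costs:
d = [[0, 12, 10, 20],
     [12, 0, 34, 35],
     [10, 34, 0, 30],
     [20, 35, 30, 0]]
cost, tour = tsp_bruteforce(d)
print(cost)  # minimum tour cost
```

Replacing the per-edge cost with a per-transition cost (penalizing sharp turns) only changes the `cost` expression, which is what makes the further study above attractive.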
Sources: https://en.wikipedia.org/wiki/Travelling_salesman_problem, https://en.wikipedia.org/wiki/NP-hardness
POINT CLOUD - MESH - OCTOMAP / OBSTACLE
Pipeline: point cloud to mesh to octomap, feeding the mission planner the environment as an obstacle map.
ELEMENT - POINT CLOUD
Structural Inspection Path Planning on the uniform mesh yields the final inspection path.
UNIFORM MESH - PATH
To speed up the inspection, we already segmented the environment into geometries that are more likely to have rust (see chapter (x.x)). These geometries, for example beams, columns, and connections, we call "hotspots". From the segmentation we get a point cloud of the hotspot, which we then need to process into a mesh. For the algorithm to work efficiently, the mesh must meet some requirements: the triangular faces need sides of similar length, and each face must be inspectable in its entirety from multiple points. This gives the algorithm more options to iterate through. To achieve this we tried different processes, such as ACVD (https://www.creatis.insa-lyon.fr/site7/en/acvd) and Kangaroo in Grasshopper. Both gave good results, but Grasshopper was easier for us to use.
Segmented point cloud, to mesh, to downsampled uniform mesh.
DRONE CONTROL: BEBOP AUTONOMY
bebop_autonomy
Keyboard commands: T take off, N navigate, L land, S stop, ESC terminate script.
Bebop_autonomy is a ROS driver for the Parrot Bebop drones, developed by the Autonomy Lab of Simon Fraser University. It enables the user to connect directly to the drone to send and receive data. On top of the driver, Soroush Garivani added a Position Control script to send the drone commands that change its position, and in our research we extended its functionality. The script mainly receives two inputs: a waypoint file, which is the path generated by the mission planner, and the position of the drone, which we read from the drone's odometry. At the heart of the control is the position controller, which compares the drone's current position to the target position. From the error, velocity commands are calculated using a PID controller and sent to the drone.
State manager. Inputs: waypoint file, odometry. Output states: landed, taking off, hovering, navigating, landing.
Waypoint reader: reads waypoints one by one; tolerance 0.2.
Odometry update: reads the drone's current position.
Position control: a PID (proportional-integral-derivative) controller compares the current position to the set position and outputs cmd_vel speed commands for the Bebop; a key_reader handles manual overrides.
PID CONTROLLER
Loop: the target position comes from the waypoint file and the current position from the drone's odometry; the error between them feeds the PID controller, which calculates velocity commands (X, Y, Z, yaw) and sends them to the Bebop via bebop_autonomy.
A PID (proportional-integral-derivative) controller is a control loop commonly used in many industrial applications. As explained before, it calculates the difference between the current position and the target position; this error is translated into velocity commands using a proportional, an integral, and a derivative parameter. These parameters need tuning, for which it is best to have an outside reference; in the case of a drone, a 3D tracking system like the HTC Vive is helpful. Experiments showing the influence of these parameters are shown on the next page.
Tested gain sets: P 1.5 / I 0.0 / D 0.0; P 1.5 / I 0.0 / D 0.05; P 1.5 / I 0.01 / D 0.05.
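A minimal discrete PID controller along one axis can be sketched as follows. This illustrates the control law only, not the actual Position Control script; the point-mass model and time step are assumptions:

```python
class PID:
    """Discrete PID controller: u = P*e + I*sum(e*dt) + D*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, current, dt):
        error = target - current
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a 1D position toward a waypoint with gains from our tests:
pid = PID(kp=1.5, ki=0.01, kd=0.05)
pos, dt = 0.0, 0.1
for _ in range(100):
    vel = pid.update(target=1.0, current=pos, dt=dt)
    pos += vel * dt              # simple point-mass model
print(abs(1.0 - pos) < 0.2)      # True: within the waypoint tolerance
```

In the real controller the same update runs per axis (X, Y, Z, yaw) and the output is published as a cmd_vel message.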
NEURAL NETWORK COMPUTER VISION - DEEP LEARNING
Detecting corrosion and rust manually can be extremely time- and effort-intensive, and in some cases dangerous. There are also consistency problems: the defects identified vary with the skill of the inspector. Across the infrastructure spectrum there are common scenarios where corrosion detection is of critical importance. The manual process of inspection is partly eliminated by having robotic arms or drones take pictures of components from various angles, and then having an inspector go through these images to determine which rusted components need repair. Even this process can be quite tedious and costly, as engineers have to go through many such images for hours to determine the condition of a component. While computer vision techniques have in the past been used with limited success in detecting corrosion from images, the advent of deep learning has opened up a whole new possibility, which could lead to accurate detection of corrosion with little or no manual intervention. To be more accurate, a more complex feature-based model is needed, one that also considers texture features. Applying this to rust detection is quite challenging, since rust does not have a well-defined shape or color; to succeed with classical computer vision techniques, one needs complex segmentation, classification, and feature measures. This is where deep learning comes in. The fundamental aspect of deep learning is that it learns complex features on its own, without anyone specifying the features explicitly. This means that using a deep learning model enables us to extract the features of rust automatically as we train the model with rusted components.
https://www.youtube.com/watch?v=rZQImBvZoHk
Neural Network
SEMANTIC SEGMENTATION
Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). The goal of segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, it is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. In our case, semantic segmentation was used to accurately identify the position of rust in space. Mask R-CNN was the tool: using a large dataset of rust images and corrosion textures, we trained a model able to accurately identify and isolate rust in images.
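Once the segmentation model has labeled the pixels, locating the rust reduces to simple mask arithmetic. The sketch below assumes a binary mask as the network's output; the statistics computed here are illustrative post-processing, not part of Mask R-CNN itself:

```python
import numpy as np

def rust_stats(mask):
    """From a binary segmentation mask (1 = rust pixel), compute
    the rust fraction and an axis-aligned bounding box."""
    frac = float(mask.mean())
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return frac, None
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return frac, bbox

# A toy 4x4 mask with a 2x2 rust patch in the top-left corner:
m = np.zeros((4, 4), dtype=int)
m[0:2, 0:2] = 1
frac, bbox = rust_stats(m)
print(frac, bbox)  # 0.25 (0, 0, 1, 1)
```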
NEURAL NETWORK: RUST DETECTION - BINARY CNN
Network: VGG16 Keras model, pre-trained convolutional base, fine-tuned as a binary classifier.
Environment: Anaconda; libraries: Keras, TensorFlow, scikit-learn, NumPy, Matplotlib, Seaborn, itertools, os.
Training: RMSProp optimizer and binary cross-entropy loss; learning-rate weights; the model was trained for 1300 epochs, logging how each parameter changes per evolution; image classification with scaling and image augmentation via the Keras ImageDataGenerator API.
Dataset: 600 images of rust / 150 no-rust images to test the system.
Evaluation: confusion matrix and classification report.

              precision  recall  f1-score  support
no rust          1.00     0.89     0.94        9
rust             0.86     1.00     0.92        6
accuracy                           0.93       15
macro avg        0.93     0.94     0.93       15
weighted avg     0.94     0.93     0.93       15
NEURAL NETWORK: RUST DETECTION - LOCALISATION
Rust localization model: TensorFlow Model Zoo, Python, labeling, Keras, Anaconda; libraries: itertools, Seaborn, scikit-learn, Matplotlib, os, NumPy, TensorFlow.
Training: learning-rate weights; the model was trained for 1200 epochs, logging how each parameter changes per evolution; the localization loss was monitored in TensorBoard.
Accuracy of the model (COCO metrics):
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.068
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.183
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.028
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.075
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.169
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.100
REGION-BASED SEGMENTATION: PIXEL VALUES
Region-based segmentation uses a threshold pixel value. A global threshold gives the area of rust: 54,643 black pixels out of 124,416 pixels in total. Applying a local threshold classifies the rust into four different segments.
Edge-detection segmentation works on differences in grayscale (pixel) values, identifying horizontal edges, vertical edges, or both.
When there is no significant grayscale difference, or there is an overlap of the grayscale pixel values, it becomes very difficult to get accurate segments.
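A global threshold of the kind described above can be sketched with NumPy. The 0.5 cut-off and the synthetic image are illustrative; the actual threshold was chosen per image:

```python
import numpy as np

def global_threshold(gray, cutoff=0.5):
    """Binary region segmentation: pixels darker than the cutoff
    are counted as candidate rust area (illustrative cut-off)."""
    mask = gray < cutoff
    return mask, int(mask.sum()), mask.size

# Synthetic grayscale image: a dark (rusty) left half, bright right half.
img = np.ones((4, 8))
img[:, :4] = 0.2
mask, dark, total = global_threshold(img)
print(dark, "of", total)  # 16 of 32
```

A local threshold repeats the same test per neighborhood instead of once per image, which is what separates the rust into segments.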
NEURAL NETWORK: LOCALISATION - SEGMENTATION
Examples 1-3: localisation and segmentation results.
IMAGE SLICING
Each capture is divided into a numbered grid of tiles (img01.png through img98.png); slicing the images and then feeding the tiles to the network gives more precise detection. Workflow: divide the image (QGIS), then run localisation on each tile.
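The tiling step can be sketched as follows; the tile size and the img01.png-style naming mirror the grid above, but the exact dimensions are illustrative:

```python
import numpy as np

def slice_image(img, tile):
    """Cut an image array into equal tiles, row-major, using the
    img01.png, img02.png, ... numbering shown in the grid."""
    h, w = img.shape[:2]
    tiles = {}
    n = 1
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles[f"img{n:02d}.png"] = img[y:y + tile, x:x + tile]
            n += 1
    return tiles

# A 4x8 toy image cut into 2x2 tiles: 2 rows x 4 columns = 8 tiles.
img = np.arange(32).reshape(4, 8)
tiles = slice_image(img, 2)
print(len(tiles))  # 8
```

Because each tile keeps its grid index, detections on the tiles can be mapped back to positions in the original capture.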
FILTERING IMAGES THROUGH THE NEURAL NETWORK: REBUILDING THE POINT CLOUD
3D scanning of a construction site: close-ups of areas with rust are analysed (segmented areas, localisation) and the filtered images are used to rebuild the point cloud with the rust information overlaid.
Results at a 1:100 elevation: rust detection on the hotspots Beam 01, Beam 02, and Beam 03.
Rust expansion
GENERATIVE ADVERSARIAL NETWORK: MATERIAL EVOLUTION
Generated textures at 100, 150, 200, 250, 300, 350, 400, and 450 epochs. Perlin noise and a threshold add detail; a looped rendering of 200 iterations embeds the per-epoch step size into different versions of the same material.
RUST EXPANSION. Generative adversarial networks (GANs) are a type of deep-learning-based generative model. They are remarkably effective at generating both high-quality and large synthetic images in a range of problem domains, which in our case is the prediction of rust expansion. Instead of being trained directly, the generator model is trained by a second model, called the discriminator, which learns to differentiate real images from fake (generated) images. The model we used was trained for 1000 epochs, generating one image of rust expansion per epoch. By counting the difference in pixels between epochs, we estimated the rust growth, and by generating an equal number of textures we simulated the growth of corrosion on a generic surface.
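The pixel-counting growth estimate can be sketched like this, with synthetic binary masks standing in for the GAN's per-epoch textures:

```python
import numpy as np

def growth_rate(masks):
    """Estimate rust growth as the mean per-epoch change in the
    number of rust pixels across a sequence of binary textures
    (stand-ins here for the GAN's per-epoch outputs)."""
    counts = [int(m.sum()) for m in masks]
    deltas = np.diff(counts)
    return counts, float(deltas.mean())

# Three toy 4x4 textures where the rust patch grows each epoch:
epochs = []
for k in (1, 2, 3):
    m = np.zeros((4, 4), dtype=int)
    m[:k, :k] = 1
    epochs.append(m)
counts, rate = growth_rate(epochs)
print(counts, rate)  # [1, 4, 9] 4.0
```

Extrapolating this per-epoch rate forward is what lets the generated sequence stand in for corrosion growth on a generic surface.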
Generation study (steps per epoch 0 to 500): Perlin noise, threshold, adding detail; material samples.
USER INTERFACE: TYPE SELECTION
Dataset: point cloud, clusters, meshes, rust, amount of rust.
Possible implementations: repair planning, cost calculation for repairs, rust expansion.
USER INTERFACE: RUST SELECTION
USER INTERFACE: VIEWPORTS
Constant autonomous inspection supports production, maintenance, quality, and safety. Data logging tracks the material and its quality and quantity, and helps coordinate production and repair and predict corrosion expansion.
Blossomrust is a disruptive technology company, pioneering innovation in the development and delivery of autonomic and cognitive rust detection services. From our inception, our founders recognized that automation through digital labor will shape the future of any business operations. Inspired by the great minds of the past, we relish the challenge of making what others believe impossible a reality.
Barcelona, Spain
Blossomrust is a project of IaaC, the Institute for Advanced Architecture of Catalonia, developed in the Master in Robotics and Advanced Construction (MRAC) in 2019-2020 by students Abdelrahman Koura, Alexandros Varvantakis, Cedric Droogmans, and Luis Jayme Buerba. Faculty: Aldo Sollazzo (IAAC, NOUMENA) and Daniel Serrano (EURECAT Technology Centre of Catalonia).