Tunnel inspection and monitoring are essential for safe mobility in urban areas and transportation infrastructure. Project ABOUT aims to develop an advanced vision-based system that brings together machine vision technologies and AI algorithms to acquire high-resolution images of tunnel surfaces automatically and efficiently. The acquired data are further processed to generate 3D models of the tunnel surfaces. In addition, state-of-the-art deep-learning algorithms are employed for damage detection as well as object recognition in tunnel images.
The proposed tunnel inspection system consists of different hardware and software components that are selected and assembled according to the project requirements.
The main components of the proposed system are industrial machine-vision cameras, LED strobes with their control electronics, a computer, and a storage unit. All subsystems are mounted on a van and time-synchronized via the control unit. The intended operating speed is about 60–65 km/h, which allows surveying in free-flowing traffic at high speed with minimal motion blur in the final images. The captured images are processed with photogrammetry software, such as Agisoft Metashape or Pix4Dmapper, to generate 3D point clouds and meshes. In addition, two training datasets are manually created, one for damage detection and one for object recognition. The captured tunnel images are fed into convolutional neural networks (e.g. DeepLab V3+) trained on these datasets to detect different types of damage, such as cracks, spalling, and rust, as well as different tunnel objects, such as signs, lights, and cables.
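The relationship between operating speed, exposure time, and motion blur can be illustrated with a simple back-of-the-envelope calculation. The sketch below is illustrative only: the 50 µs exposure (effectively set by a short LED strobe pulse) is an assumed value, not a project specification.

```python
# Estimate the motion blur traced on the tunnel surface by a camera
# mounted on a moving vehicle. All parameter values are illustrative
# assumptions, not specifications of the ABOUT system.

def motion_blur_mm(speed_kmh: float, exposure_s: float) -> float:
    """Path length (in mm) swept on the surface during one exposure."""
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s * exposure_s * 1000.0

# At 65 km/h with an assumed 50 µs effective exposure:
blur = motion_blur_mm(65.0, 50e-6)
print(f"{blur:.2f} mm")  # ~0.90 mm of blur on the tunnel surface
```

This is why short LED strobe pulses matter: at highway-like speeds, keeping the blur near the targeted millimetre accuracy requires effective exposures in the tens of microseconds.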
Within the project, a measurement and evaluation method for the precise, highly automated digitization of tunnel surfaces was developed. To achieve the targeted inspection and monitoring of the tunnel surface, the images and the generated 3D point clouds are analyzed both geometrically and semantically to detect various types of deformation and change, in particular cracks, spalling, depth discontinuities, and rust streaks. The automated photogrammetric evaluation of the images acquired in the tunnel (bundle adjustment for georeferencing and generation of dense point clouds by dense image matching), which was designed in the project, delivers the targeted relative coordinate accuracy of approximately 1 mm in position and depth. Only completely textureless sections of the tunnel surface remain unconsidered in the image-based procedures.

Using deep-learning methods, the automatic object detection reliably identifies not only cracks, spalling, and other damage for documentation purposes, but also all equipment features, in particular signs, markings, lane signals, lighting equipment, hydrants, and loudspeakers, with high quality; a recognition rate of nearly 80% is achieved. Since the recognition rate of deep-learning approaches usually increases with the amount of training data, the training datasets can be expanded with each additionally processed tunnel, so that a recognition rate of over 90% should be achievable in the future.

The designed inspection speed of up to 65 km/h is fully accounted for in the system components (camera exposure time, LED flash pulse width, data rates of the interfaces and storage, etc.). Owing to restrictions related to the COVID-19 pandemic, not all investigations could be carried out to the planned extent during the project. Experimental proof of the general functionality of the overall tunnel inventory system could nevertheless be provided on the basis of extensive tests and the tunnel experiments that were carried out.
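Dimensioning the interfaces and storage for the designed inspection speed amounts to estimating the sustained image data rate. The sketch below shows such an estimate; the resolution, bit depth, frame rate, and camera count are hypothetical values chosen for illustration, not the actual ABOUT configuration.

```python
# Rough estimate of the sustained data rate the recording computer and
# storage unit must absorb. All camera parameters below are illustrative
# assumptions, not the actual configuration of the ABOUT system.

def data_rate_mb_per_s(width_px: int, height_px: int,
                       bytes_per_px: int, fps: float,
                       n_cameras: int) -> float:
    """Uncompressed image data rate in MB/s (1 MB = 10**6 bytes)."""
    return width_px * height_px * bytes_per_px * fps * n_cameras / 1e6

# Example: four monochrome ~12 MP cameras (4096 x 3000 px, 8 bit) at 20 fps
rate = data_rate_mb_per_s(4096, 3000, 1, 20, 4)
print(f"{rate:.0f} MB/s")
```

Under these assumptions the system would need to sustain on the order of 1 GB/s of raw image data, which explains why interface bandwidth and storage throughput are explicit design parameters alongside exposure time and strobe pulse width.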
In the future, it should also be possible to integrate the main results (3D models, damage maps, and object classification maps of the tunnel surfaces) into a Building Information Modeling (BIM) system.
|Management|Prof. Dr. Gerrit Austen, Prof. Dr. Michael Hahn (Deputy)|
|Partner|Viscan Solutions GmbH|
|Project e-mail address|Gerrit.Austen(at)hft-stuttgart.de|
|Funding|Federal Ministry for Economic Affairs and Energy (BMWi)|
|Call for proposals|Central Innovation Programme for Medium-Sized Enterprises (ZIM) – Cooperation Project|
|Duration|06.05.2019 – 30.04.2021, extended until 31.07.2021|
|Remote Sensing Studios (RSS), Master Photogrammetry and Geoinformatics (PG)|