Event-based Civil Infrastructure Visual Defect Detection: ev-CIVIL Dataset and Benchmark
- URL: http://arxiv.org/abs/2504.05679v1
- Date: Tue, 08 Apr 2025 04:44:33 GMT
- Title: Event-based Civil Infrastructure Visual Defect Detection: ev-CIVIL Dataset and Benchmark
- Authors: Udayanga G. W. K. N. Gamage, Xuanni Huo, Luca Zanatta, T Delbruck, Cesar Cadena, Matteo Fumagalli, Silvia Tolu
- Abstract summary: Small Unmanned Aerial Vehicle (UAV) based visual inspections are an efficient alternative to manual methods for examining civil structural defects. Traditional frame-based cameras, widely used in UAV-based inspections, often struggle to capture defects under low or dynamic lighting conditions. This study introduces the first event-based civil infrastructure defect detection dataset, capturing defective surfaces as a spatio-temporal event stream using Dynamic Vision Sensors (DVS). The dataset focuses on two types of defects, cracks and spalling, and includes data from both field and laboratory environments.
- Score: 6.27237464820498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Small Unmanned Aerial Vehicle (UAV) based visual inspections are a more efficient alternative to manual methods for examining civil structural defects, offering safe access to hazardous areas and significant cost savings by reducing labor requirements. However, traditional frame-based cameras, widely used in UAV-based inspections, often struggle to capture defects under low or dynamic lighting conditions. In contrast, Dynamic Vision Sensors (DVS), or event-based cameras, excel in such scenarios by minimizing motion blur, enhancing power efficiency, and maintaining high-quality imaging across diverse lighting conditions without saturation or information loss. Despite these advantages, existing research lacks studies exploring the feasibility of using DVS for detecting civil structural defects. Moreover, there is no dedicated event-based dataset tailored for this purpose. Addressing this gap, this study introduces the first event-based civil infrastructure defect detection dataset, capturing defective surfaces as a spatio-temporal event stream using DVS. In addition to event-based data, the dataset includes grayscale intensity image frames captured simultaneously using an Active Pixel Sensor (APS). Both data types were collected using the DAVIS346 camera, which integrates DVS and APS sensors. The dataset focuses on two types of defects, cracks and spalling, and includes data from both field and laboratory environments. The field dataset comprises 318 recording sequences, documenting 458 distinct cracks and 121 distinct spalling instances. The laboratory dataset includes 362 recording sequences, covering 220 distinct cracks and 308 spalling instances. Four real-time object detection models were evaluated on it to validate the dataset's effectiveness. The results demonstrate the dataset's robustness in enabling accurate defect detection and classification, even under challenging lighting conditions.
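To make the data format concrete: a DVS emits a stream of (x, y, timestamp, polarity) events rather than frames, and a common way to feed such a stream to a conventional object detector is to accumulate events over a short time window into a 2D image. The sketch below illustrates this, assuming the DAVIS346's 346x260 resolution; the field names, the 50 ms window, and the mid-gray encoding are illustrative assumptions, not details taken from the paper's pipeline.

```python
# Minimal sketch: accumulating a DVS event stream into a grayscale frame
# suitable for a frame-based detector. Field names (x, y, t, p) and the
# 50 ms window are assumptions for illustration.
import numpy as np

def events_to_frame(events, width=346, height=260, window_us=50_000):
    """Accumulate signed polarity counts over the last `window_us`
    microseconds into a uint8 image, with 128 as the zero-event level.

    `events` is a structured array with fields x, y, t (microseconds),
    and p (+1 for ON events, -1 for OFF events).
    """
    t_end = events["t"].max()
    recent = events[events["t"] >= t_end - window_us]
    frame = np.zeros((height, width), dtype=np.int32)
    # Unbuffered accumulation so repeated events at one pixel all count.
    np.add.at(frame, (recent["y"], recent["x"]), recent["p"])
    # Shift to a mid-gray baseline so both polarities stay visible.
    return np.clip(frame + 128, 0, 255).astype(np.uint8)

# Tiny synthetic example: three events at pixel (x=10, y=5).
dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                  ("t", np.int64), ("p", np.int8)])
ev = np.array([(10, 5, 100, 1), (10, 5, 200, 1), (10, 5, 300, -1)],
              dtype=dtype)
img = events_to_frame(ev)
```

Accumulation windows trade temporal resolution for event density; alternatives such as time-surface or voxel-grid encodings preserve more of the spatio-temporal structure that makes event data robust to motion blur.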
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious societal risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - DailyDVS-200: A Comprehensive Benchmark Dataset for Event-Based Action Recognition [51.96660522869841]
DailyDVS-200 is a benchmark dataset tailored for the event-based action recognition community.
It covers 200 action categories across real-world scenarios, recorded by 47 participants, and comprises more than 22,000 event sequences.
DailyDVS-200 is annotated with 14 attributes, ensuring a detailed characterization of the recorded actions.
arXiv Detail & Related papers (2024-07-06T15:25:10Z) - Source-free Domain Adaptation for Video Object Detection Under Adverse Image Conditions [0.0]
When deploying pre-trained video object detectors in real-world scenarios, the domain gap between training and testing data often leads to performance degradation.
We propose a simple yet effective source-free domain adaptation (SFDA) method for video object detection (VOD).
Specifically, we aim to improve the performance of the one-stage VOD method, YOLOV, under adverse image conditions, including noise, air turbulence, and haze.
arXiv Detail & Related papers (2024-04-23T17:39:06Z) - Imagery Dataset for Condition Monitoring of Synthetic Fibre Ropes [0.0]
This dataset comprises a total of 6,942 raw images representing both normal and defective SFRs.
The dataset serves as a resource to support computer vision applications, including object detection, classification, and segmentation.
The aim of generating this dataset is to assist in the development of automated defect detection systems.
arXiv Detail & Related papers (2023-09-29T08:42:44Z) - MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z) - One-class Damage Detector Using Deeper Fully-Convolutional Data Descriptions for Civil Application [0.0]
One-class damage detection approach has an advantage in that normal images can be used to optimize model parameters.
We propose a civil-purpose application for automating one-class damage detection reproducing a fully convolutional data description (FCDD) as a baseline model.
arXiv Detail & Related papers (2023-03-03T06:27:15Z) - An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z) - A hierarchical semantic segmentation framework for computer vision-based bridge damage detection [3.8999448636733516]
Computer vision-based damage detection using remote cameras and unmanned aerial vehicles (UAVs) enables efficient and low-cost bridge health monitoring. This paper introduces a semantic segmentation framework that imposes the hierarchical semantic relationship between component category and damage types. In this way, the damage detection model can focus on learning features from possibly damaged regions only and avoid the effects of other irrelevant regions.
arXiv Detail & Related papers (2022-07-18T18:42:54Z) - Multi Visual Modality Fall Detection Dataset [4.00152916049695]
Falls are one of the leading causes of injury-related deaths among the elderly worldwide.
Effective detection of falls can reduce the risk of complications and injuries.
Video cameras provide a passive alternative; however, regular RGB cameras are impacted by changing lighting conditions and privacy concerns.
arXiv Detail & Related papers (2022-06-25T21:54:26Z) - A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z) - Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.