Thermal and RGB Images Work Better Together in Wind Turbine Damage Detection
- URL: http://arxiv.org/abs/2412.04114v1
- Date: Thu, 05 Dec 2024 12:32:45 GMT
- Title: Thermal and RGB Images Work Better Together in Wind Turbine Damage Detection
- Authors: Serhii Svystun, Oleksandr Melnychenko, Pavlo Radiuk, Oleg Savenko, Anatoliy Sachenko, Andrii Lysyi
- Abstract summary: Inspection of wind turbine blades (WTBs) is crucial for ensuring their structural integrity and operational efficiency.
Traditional inspection methods can be dangerous and inefficient, prompting the use of unmanned aerial vehicles (UAVs) that access hard-to-reach areas.
We propose a multispectral image composition method that combines thermal and RGB imagery through spatial coordinate transformation.
- Score: 13.786915116688965
- Abstract: The inspection of wind turbine blades (WTBs) is crucial for ensuring their structural integrity and operational efficiency. Traditional inspection methods can be dangerous and inefficient, prompting the use of unmanned aerial vehicles (UAVs) that access hard-to-reach areas and capture high-resolution imagery. In this study, we address the challenge of enhancing defect detection on WTBs by integrating thermal and RGB images obtained from UAVs. We propose a multispectral image composition method that combines thermal and RGB imagery through spatial coordinate transformation, key point detection, binary descriptor creation, and weighted image overlay. Using a benchmark dataset of WTB images annotated for defects, we evaluated several state-of-the-art object detection models. Our results show that composite images significantly improve defect detection efficiency. Specifically, the YOLOv8 model's accuracy increased from 91% to 95%, precision from 89% to 94%, recall from 85% to 92%, and F1-score from 87% to 93%. The number of false positives decreased from 6 to 3, and missed defects reduced from 5 to 2. These findings demonstrate that integrating thermal and RGB imagery enhances defect detection on WTBs, contributing to improved maintenance and reliability.
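The composition pipeline described in the abstract (spatial coordinate transformation, key point matching, weighted image overlay) can be sketched as follows. This is a minimal, dependency-free illustration assuming a simple affine mapping between the thermal and RGB frames; it is not the authors' implementation, which would typically use a computer vision library for key point detection and binary descriptor matching:

```python
def warp_thermal_to_rgb(thermal, affine, rgb_shape):
    """Map each thermal pixel into RGB coordinates via a 2x3 affine matrix.

    affine = (a, b, tx, c, d, ty): column u = a*x + b*y + tx, row t = c*x + d*y + ty.
    In practice this transform would be estimated from matched key points.
    """
    h, w = rgb_shape
    a, b, tx, c, d, ty = affine
    out = [[0.0] * w for _ in range(h)]
    for y, row in enumerate(thermal):
        for x, v in enumerate(row):
            u = int(round(a * x + b * y + tx))  # target column in RGB frame
            t = int(round(c * x + d * y + ty))  # target row in RGB frame
            if 0 <= u < w and 0 <= t < h:
                out[t][u] = v
    return out

def weighted_overlay(rgb, thermal, alpha=0.6):
    """Blend the aligned images: alpha * rgb + (1 - alpha) * thermal."""
    return [[alpha * r + (1 - alpha) * t for r, t in zip(rr, tr)]
            for rr, tr in zip(rgb, thermal)]

# Toy single-channel example: identity alignment, then a 60/40 blend.
rgb = [[100.0, 100.0], [100.0, 100.0]]
warped = warp_thermal_to_rgb([[50, 50], [50, 50]], (1, 0, 0, 0, 1, 0), (2, 2))
fused = weighted_overlay(rgb, warped, alpha=0.6)  # each pixel: 0.6*100 + 0.4*50 = 80
```

The blend weight `alpha` here is a hypothetical value; the paper's weighted overlay would tune the balance between RGB texture and thermal contrast for defect visibility.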
Related papers
- Bringing RGB and IR Together: Hierarchical Multi-Modal Enhancement for Robust Transmission Line Detection [67.02804741856512]
We propose a novel Hierarchical Multi-Modal Enhancement Network (HMMEN) that integrates RGB and IR data for robust and accurate TL detection.
Our method introduces two key components: (1) a Mutual Multi-Modal Enhanced Block (MMEB), which fuses and enhances hierarchical RGB and IR feature maps in a coarse-to-fine manner, and (2) a Feature Alignment Block (FAB) that corrects misalignments between decoder outputs and IR feature maps by leveraging deformable convolutions.
arXiv Detail & Related papers (2025-01-25T06:21:06Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - The Solution for the GAIIC2024 RGB-TIR object detection Challenge [5.625794757504552]
RGB-TIR object detection aims to utilize both RGB and TIR images for complementary information during detection.
Our proposed method achieved mAP scores of 0.516 and 0.543 on the A and B benchmarks, respectively.
arXiv Detail & Related papers (2024-07-04T12:08:36Z) - Invisible Gas Detection: An RGB-Thermal Cross Attention Network and A New Benchmark [24.108560366345248]
We present the RGB-Thermal Cross Attention Network (RT-CAN), which employs an RGB-assisted two-stream network architecture to integrate texture information from RGB images and gas area information from thermal images.
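Cross attention between RGB and thermal features, the fusion idea behind RT-CAN, can be illustrated with a minimal single-head example. This is a toy, dependency-free sketch with hypothetical dimensions, not the paper's architecture:

```python
import math

def cross_attention(queries, keys, values):
    """Single-head cross attention: RGB feature vectors (queries) attend
    to thermal feature vectors (keys/values) and pull in a weighted mix."""
    dim = len(keys[0])
    fused = []
    for q in queries:
        # Scaled dot-product score against every thermal position.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of thermal value vectors.
        fused.append([sum(w * v[i] for w, v in zip(weights, values))
                      for i in range(len(values[0]))])
    return fused
```

With identical keys the attention weights are uniform, so each fused vector is simply the mean of the thermal value vectors; in a real network the queries, keys, and values would be learned projections of the two feature maps.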
Gas-DB is an extensive open-source gas detection database including about 1.3K well-annotated RGB-thermal images with eight variant collection scenes.
arXiv Detail & Related papers (2024-03-26T13:58:47Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and of raw underwater images exhibit evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - IH-ViT: Vision Transformer-based Integrated Circuit Appearance Defect Detection [5.4641726517633025]
We propose an IC appearance defect detection algorithm, IH-ViT.
Our proposed model takes advantage of the strengths of CNN and ViT to acquire image features from both local and global aspects.
After testing, our proposed hybrid IH-ViT model achieved 72.51% accuracy, which is 2.8% and 6.06% higher than the ResNet50 and ViT models alone, respectively.
arXiv Detail & Related papers (2023-02-09T09:27:40Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances the detection robustness with maintaining the detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Anomaly Detection in IR Images of PV Modules using Supervised Contrastive Learning [4.409996772486956]
We train a ResNet-34 convolutional neural network with a supervised contrastive loss to detect anomalies in infrared images.
Our method converges quickly and reliably detects unknown types of anomalies, making it well suited for practice.
Our work serves the community with a more realistic view of PV module fault detection using unsupervised domain adaptation.
arXiv Detail & Related papers (2021-12-06T10:42:28Z) - A Multi-Stage model based on YOLOv3 for defect detection in PV panels based on IR and Visible Imaging by Unmanned Aerial Vehicle [65.99880594435643]
We propose a novel model to detect panel defects on aerial images captured by unmanned aerial vehicle.
The model combines detections of panels and defects to refine its accuracy.
The proposed model has been validated on two big PV plants in the south of Italy.
arXiv Detail & Related papers (2021-11-23T08:04:32Z) - Progressively Guided Alternate Refinement Network for RGB-D Salient Object Detection [63.18846475183332]
We aim to develop an efficient and compact deep network for RGB-D salient object detection.
We propose a progressively guided alternate refinement network to refine it.
Our model outperforms existing state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2020-08-17T02:55:06Z) - Combining Visible and Infrared Spectrum Imagery using Machine Learning for Small Unmanned Aerial System Detection [1.392250707100996]
This research work proposes combining the advantages of the LWIR and visible spectrum sensors using machine learning for vision-based detection of sUAS.
Our approach achieved a detection rate of 71.2 ± 8.3%, an improvement of 69% over the LWIR sensor alone and of 30.4% over the visible spectrum sensor alone.
arXiv Detail & Related papers (2020-03-27T21:06:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.