Automated Optical Inspection of FAST's Reflector Surface using Drones and Computer Vision
- URL: http://arxiv.org/abs/2212.09039v1
- Date: Sun, 18 Dec 2022 08:34:05 GMT
- Title: Automated Optical Inspection of FAST's Reflector Surface using Drones and Computer Vision
- Authors: Jianan Li, Shenwang Jiang, Liqiang Song, Peiran Peng, Feng Mu, Hui Li, Peng Jiang, Tingfa Xu
- Abstract summary: The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is the world's largest single-dish radio telescope.
FAST's large reflecting surface achieves unprecedented sensitivity but is prone to damage, such as dents and holes, caused by naturally occurring falling objects.
This work makes the first step towards automating the inspection of FAST by integrating deep-learning techniques with drone technology.
Our AI-powered drone-based automated inspection is time-efficient, reliable, and has good accessibility, which guarantees the long-term and stable operation of FAST.
- Score: 15.563799757564663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is the
world's largest single-dish radio telescope. Its large reflecting surface
achieves unprecedented sensitivity but is prone to damage, such as dents and
holes, caused by naturally occurring falling objects. Hence, the timely and
accurate detection of surface defects is crucial for FAST's stable operation.
Conventional manual inspection involves human inspectors climbing up and
examining the large surface visually, a time-consuming and potentially
unreliable process. To accelerate the inspection process and increase its
accuracy, this work makes the first step towards automating the inspection of
FAST by integrating deep-learning techniques with drone technology. First, a
drone flies over the surface along a predetermined route. Since surface defects
significantly vary in scale and show high inter-class similarity, directly
applying existing deep detectors to detect defects on the drone imagery is
highly prone to missing and misidentifying defects. As a remedy, we introduce
cross-fusion, a dedicated plug-in operation for deep detectors that enables the
adaptive fusion of multi-level features in a point-wise selective fashion,
depending on local defect patterns. Consequently, strong semantics and
fine-grained details are dynamically fused at different positions to support
the accurate detection of defects of various scales and types. Our AI-powered
drone-based automated inspection is time-efficient, reliable, and has good
accessibility, which guarantees the long-term and stable operation of FAST.
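The abstract's central idea, point-wise selective fusion of multi-level features, can be illustrated with a minimal NumPy sketch. The tensor shapes, nearest-neighbor upsampling, and single sigmoid gate below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def upsample_nearest(x, factor):
    # x: (C, H, W) -> (C, H*factor, W*factor) via nearest-neighbor repeat
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def cross_fusion(fine, coarse, w, b):
    """Point-wise selective fusion of two feature levels (sketch).
    fine:   (C, H, W)     high-resolution, detail-rich features
    coarse: (C, H/2, W/2) low-resolution, semantics-rich features
    w, b:   (2C,) weights and scalar bias of a 1x1 gating projection
    Returns a (C, H, W) map where each position picks its own blend.
    """
    coarse_up = upsample_nearest(coarse, 2)
    stacked = np.concatenate([fine, coarse_up], axis=0)        # (2C, H, W)
    # per-position gate: 1x1 projection over channels + sigmoid
    logits = np.tensordot(w, stacked, axes=([0], [0])) + b      # (H, W)
    g = 1.0 / (1.0 + np.exp(-logits))
    # convex combination at every spatial position
    return g[None] * fine + (1.0 - g[None]) * coarse_up
```

Because the gate forms a convex combination at each position, the fused value always lies between the fine and upsampled coarse features, letting the detector lean on fine detail for small defects and on coarse semantics for large ones.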
Related papers
- Performance Assessment of Feature Detection Methods for 2-D FS Sonar Imagery [11.23455335391121]
Key challenges include non-uniform lighting and poor visibility in turbid environments.
High-frequency forward-look sonar cameras address these issues.
We evaluate a number of feature detectors using real sonar images from five different sonar devices.
arXiv Detail & Related papers (2024-09-11T04:35:07Z)
- A Study on Unsupervised Anomaly Detection and Defect Localization using Generative Model in Ultrasonic Non-Destructive Testing [0.0]
Deterioration of artificial materials used in structures has become a serious social issue.
Laser ultrasonic visualization testing (LUVT) allows the visualization of ultrasonic propagation.
We propose a method for automated LUVT inspection using an anomaly detection approach.
arXiv Detail & Related papers (2024-05-26T14:14:35Z)
- C2FDrone: Coarse-to-Fine Drone-to-Drone Detection using Vision Transformer Networks [23.133250476580038]
A vision-based drone-to-drone detection system is crucial for various applications like collision avoidance, countering hostile drones, and search-and-rescue operations.
Detecting drones presents unique challenges, including small object sizes, distortion, and real-time processing requirements.
We propose a novel coarse-to-fine detection strategy based on vision transformers.
arXiv Detail & Related papers (2024-04-30T05:51:21Z)
- A Novel Approach for Defect Detection of Wind Turbine Blade Using Virtual Reality and Deep Learning [0.0]
We develop virtual models of wind turbines to synthesize near-realistic images of four types of common defects.
In the second step, a customized U-Net architecture is trained to classify and segment the defect in turbine blades.
The proposed methodology provides reasonable defect detection accuracy, making it suitable for autonomous and remote inspection through aerial vehicles.
arXiv Detail & Related papers (2023-12-30T13:58:50Z)
- Security Fence Inspection at Airports Using Object Detection [4.373803477995854]
Airport security fences are commonly used, but they require regular inspection to detect damages.
The aim is to automatically inspect the fence for damage with the help of an autonomous robot.
In this work, we explore object detection methods to address the fence inspection task and localize various types of damages.
arXiv Detail & Related papers (2023-11-18T21:59:48Z)
- Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet), built on an encoder-decoder structure, for lightweight saliency detection of surface defects.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module.
Experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency than 17 other state-of-the-art methods.
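The phrase "Depth-wise Self-Attention" suggests per-channel (depth-wise) projections feeding a global spatial attention; one plausible reading is sketched below in NumPy. The per-channel scaling weights and single-head layout are assumptions for illustration, not GCANet's actual DSA module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depthwise_self_attention(x, wq, wk, wv):
    """Illustrative depth-wise self-attention over spatial positions.
    x:          (C, H, W) feature map
    wq, wk, wv: (C,) depth-wise projection weights; each channel is
                scaled independently instead of mixed by a dense matrix,
                keeping the projection lightweight.
    Returns a (C, H, W) map where every position attends to all others.
    """
    C, H, W = x.shape
    tokens = x.reshape(C, H * W).T            # (N, C): one token per position
    q = tokens * wq                           # depth-wise projections
    k = tokens * wk
    v = tokens * wv
    attn = softmax(q @ k.T / np.sqrt(C))      # (N, N) global attention map
    out = attn @ v                            # aggregate global context
    return out.T.reshape(C, H, W)
```

The quadratic (N, N) attention is why such a module is placed only on the top, lowest-resolution layer of a lightweight backbone.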
arXiv Detail & Related papers (2023-09-22T06:19:11Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Early Recall, Late Precision: Multi-Robot Semantic Object Mapping under Operational Constraints in Perceptually-Degraded Environments [47.917640567924174]
We propose the Early Recall, Late Precision (EaRLaP) semantic object mapping pipeline to solve this problem.
EaRLaP was used by Team CoSTAR in the DARPA Subterranean Challenge, where it successfully detected all the artifacts encountered by the team of robots.
We discuss these results and the performance of EaRLaP on various datasets.
arXiv Detail & Related papers (2022-06-21T01:11:42Z)
- A Multi-Stage model based on YOLOv3 for defect detection in PV panels based on IR and Visible Imaging by Unmanned Aerial Vehicle [65.99880594435643]
We propose a novel model to detect panel defects in aerial images captured by an unmanned aerial vehicle.
The model combines detections of panels and defects to refine its accuracy.
The proposed model has been validated on two big PV plants in the south of Italy.
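The multi-stage idea of combining panel and defect detections can be sketched as a simple containment filter: a defect box is kept only if it falls inside some detected panel. The box format and the 0.8 containment threshold below are illustrative assumptions, not the paper's implementation:

```python
def inside(panel, box, thresh=0.8):
    """Fraction of a defect box's area contained in a panel box.
    Boxes are (x1, y1, x2, y2) tuples."""
    ix1 = max(panel[0], box[0]); iy1 = max(panel[1], box[1])
    ix2 = min(panel[2], box[2]); iy2 = min(panel[3], box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return area > 0 and inter / area >= thresh

def refine(defects, panels, thresh=0.8):
    """Keep only defect detections lying (mostly) inside a detected
    panel, suppressing false positives on the background. An
    illustrative reading of the multi-stage idea, not the authors'
    implementation."""
    return [d for d in defects if any(inside(p, d, thresh) for p in panels)]
```

Gating the defect detector's output on the panel detector's output is what lets the two stages refine each other's accuracy.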
arXiv Detail & Related papers (2021-11-23T08:04:32Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.