J-DDL: Surface Damage Detection and Localization System for Fighter Aircraft
- URL: http://arxiv.org/abs/2506.10505v1
- Date: Thu, 12 Jun 2025 09:05:35 GMT
- Title: J-DDL: Surface Damage Detection and Localization System for Fighter Aircraft
- Authors: Jin Huang, Mingqiang Wei, Zikuan Li, Hangyu Qu, Wei Zhao, Xinyu Bai,
- Abstract summary: We propose a smart surface damage detection and localization system for fighter aircraft, termed J-DDL. J-DDL integrates 2D images and 3D point clouds of the entire aircraft surface, captured using a combined system of laser scanners and cameras. Key innovations include lightweight Fasternet blocks for efficient feature extraction, an optimized neck architecture, and the introduction of a novel loss function, Inner-CIOU.
- Score: 18.53607676786071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring the safety and extended operational life of fighter aircraft necessitates frequent and exhaustive inspections. While surface defect detection is feasible for human inspectors, manual methods face critical limitations in scalability, efficiency, and consistency due to the vast surface area, structural complexity, and operational demands of aircraft maintenance. We propose a smart surface damage detection and localization system for fighter aircraft, termed J-DDL. J-DDL integrates 2D images and 3D point clouds of the entire aircraft surface, captured using a combined system of laser scanners and cameras, to achieve precise damage detection and localization. Central to our system is a novel damage detection network built on the YOLO architecture, specifically optimized for identifying surface defects in 2D aircraft images. Key innovations include lightweight Fasternet blocks for efficient feature extraction, an optimized neck architecture incorporating Efficient Multiscale Attention (EMA) modules for superior feature aggregation, and the introduction of a novel loss function, Inner-CIOU, to enhance detection accuracy. After detecting damage in 2D images, the system maps the identified anomalies onto corresponding 3D point clouds, enabling accurate 3D localization of defects across the aircraft surface. Our J-DDL not only streamlines the inspection process but also ensures more comprehensive and detailed coverage of large and complex aircraft exteriors. To facilitate further advancements in this domain, we have developed the first publicly available dataset specifically focused on aircraft damage. Experimental evaluations validate the effectiveness of our framework, underscoring its potential to significantly advance automated aircraft inspection technologies.
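The abstract's 2D-to-3D mapping step can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: it presumes a calibrated pinhole camera (the intrinsics `K` and the world-to-camera pose `R`, `t` are hypothetical names) and localizes a detected 2D bounding box by keeping the scanned points whose image projections fall inside it.

```python
import numpy as np

def locate_box_in_cloud(points, K, R, t, box):
    """Return the 3D points whose projections land inside a 2D detection box.

    points: (N, 3) cloud in world coordinates; K: 3x3 camera intrinsics;
    R, t: world-to-camera rotation and translation; box: (x1, y1, x2, y2).
    """
    cam = points @ R.T + t             # world frame -> camera frame
    in_front = cam[:, 2] > 0           # discard points behind the camera
    cam = cam[in_front]
    uv = cam @ K.T                     # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]        # perspective divide to pixel coords
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points[in_front][inside]    # 3D location of the detected damage

# Toy example: identity pose, unit-focal camera, box centered on the axis.
pts = np.array([[0.0, 0.0, 2.0], [5.0, 5.0, 2.0], [0.1, 0.1, 1.0]])
hits = locate_box_in_cloud(pts, np.eye(3), np.eye(3), np.zeros(3),
                           (-0.5, -0.5, 0.5, 0.5))
# hits contains the two points projecting inside the box
```

In practice the scanner-to-camera calibration would supply `K`, `R`, and `t`, and the selected points could be clustered or meshed to delimit the damaged surface patch.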
Related papers
- Pillar-Voxel Fusion Network for 3D Object Detection in Airborne Hyperspectral Point Clouds [35.24778377226701]
We propose PiV-A HPC, a 3D object detection network for airborne HPCs. We first develop a pillar-voxel dual-branch encoder, where the former captures spectral and vertical structural features from HPCs to overcome spectral distortion. A multi-level feature fusion mechanism is devised to enhance information interaction between the two branches.
arXiv Detail & Related papers (2025-04-13T10:13:48Z)
- PlanarNeRF: Online Learning of Planar Primitives with Neural Radiance Fields [33.86567333264051]
PlanarNeRF is a novel framework capable of detecting dense 3D planes through online learning. It enhances 3D plane detection with concurrent appearance and geometry knowledge. A lightweight plane fitting module is proposed to estimate plane parameters.
arXiv Detail & Related papers (2023-12-30T03:48:22Z)
- Joint object detection and re-identification for 3D obstacle multi-camera systems [47.87501281561605]
This research paper introduces a novel modification to an object detection network that uses camera and lidar information.
It incorporates an additional branch designed for the task of re-identifying objects across adjacent cameras within the same vehicle.
The results underscore the superiority of this method over traditional Non-Maximum Suppression (NMS) techniques.
arXiv Detail & Related papers (2023-10-09T15:16:35Z)
- 3D vision-based structural masonry damage detection [6.442649108177674]
We present a 3D vision-based methodology for accurate masonry damage detection.
First, images of the masonry specimens are collected to generate a 3D point cloud.
Second, 3D point cloud processing methods are developed to evaluate the masonry damage.
arXiv Detail & Related papers (2023-08-31T00:48:05Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- High-Throughput and Accurate 3D Scanning of Cattle Using Time-of-Flight Sensors and Deep Learning [1.2599533416395765]
We introduce a high-throughput 3D scanning solution specifically designed to measure cattle phenotypes.
This scanner leverages an array of depth sensors, i.e., time-of-flight (ToF) sensors, each governed by a dedicated embedded device.
The system excels at generating high-fidelity 3D point clouds, thus facilitating an accurate mesh that faithfully reconstructs the cattle geometry on the fly.
arXiv Detail & Related papers (2023-08-07T18:15:03Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy that helps the model handle uncontrollable weather conditions, substantially resisting the degradation they cause.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- Hierarchical Point Attention for Indoor 3D Object Detection [111.04397308495618]
This work proposes two novel attention operations as generic hierarchical designs for point-based transformer detectors.
First, we propose Multi-Scale Attention (MS-A) that builds multi-scale tokens from a single-scale input feature to enable more fine-grained feature learning.
Second, we propose Size-Adaptive Local Attention (Local-A) with adaptive attention regions for localized feature aggregation within bounding box proposals.
arXiv Detail & Related papers (2023-01-06T18:52:12Z)
- Aerial Monocular 3D Object Detection [67.20369963664314]
DVDET is proposed to achieve aerial monocular 3D object detection in both the 2D image space and the 3D physical space. To address the severe view deformation issue, we propose a novel trainable geo-deformable transformation module. To encourage more researchers to investigate this area, we will release the dataset and related code.
arXiv Detail & Related papers (2022-08-08T08:32:56Z)
- Attentional Feature Refinement and Alignment Network for Aircraft Detection in SAR Imagery [24.004052923372548]
Aircraft detection in Synthetic Aperture Radar (SAR) imagery is a challenging task due to aircraft's discrete appearance, pronounced intra-class variation, small size, and severe background interference.
In this paper, a single-shot detector, the Attentional Feature Refinement and Alignment Network (AFRAN), is proposed for detecting aircraft in SAR images with competitive accuracy and speed.
arXiv Detail & Related papers (2022-01-18T16:54:49Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.