Bridging Simulation and Reality: A 3D Clustering-Based Deep Learning Model for UAV-Based RF Source Localization
- URL: http://arxiv.org/abs/2502.13969v1
- Date: Sun, 02 Feb 2025 05:48:44 GMT
- Title: Bridging Simulation and Reality: A 3D Clustering-Based Deep Learning Model for UAV-Based RF Source Localization
- Authors: Saad Masrur, Ismail Guvenc
- Abstract summary: Unmanned aerial vehicles (UAVs) offer significant advantages for RF source localization over terrestrial methods. Recent advancements in deep learning (DL) have further enhanced localization accuracy, particularly for outdoor scenarios. We propose the 3D Cluster-Based RealAdaptRNet, a DL-based method leveraging 3D clustering-based feature extraction for robust localization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Localization of radio frequency (RF) sources has critical applications, including search and rescue, jammer detection, and monitoring of hostile activities. Unmanned aerial vehicles (UAVs) offer significant advantages for RF source localization (RFSL) over terrestrial methods, leveraging autonomous 3D navigation and improved signal capture at higher altitudes. Recent advancements in deep learning (DL) have further enhanced localization accuracy, particularly for outdoor scenarios. DL models often face challenges in real-world performance, as they are typically trained on simulated datasets that fail to replicate real-world conditions fully. To address this, we first propose the Enhanced Two-Ray propagation model, reducing the simulation-to-reality gap by improving the accuracy of propagation environment modeling. For RFSL, we propose the 3D Cluster-Based RealAdaptRNet, a DL-based method leveraging 3D clustering-based feature extraction for robust localization. Experimental results demonstrate that the proposed Enhanced Two-Ray model provides superior accuracy in simulating real-world propagation scenarios compared to conventional free-space and two-ray models. Notably, the 3D Cluster-Based RealAdaptRNet, trained entirely on simulated datasets, achieves exceptional performance when validated in real-world environments using the AERPAW physical testbed, with an average localization error of 18.2 m. The proposed approach is computationally efficient, utilizing 33.5 times fewer parameters, and demonstrates strong generalization capabilities across diverse trajectories, making it highly suitable for real-world applications.
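The abstract names two techniques without giving their internals. For orientation, below is a minimal sketch of the conventional two-ray ground-reflection model, the baseline the Enhanced Two-Ray model is compared against (the Enhanced variant itself is not specified here); isotropic antennas and a fixed ground reflection coefficient are assumed:

```python
import numpy as np

def two_ray_rx_power_dbm(d, h_t, h_r, f_hz=3.5e9, pt_dbm=30.0, gamma=-1.0):
    """Conventional two-ray ground-reflection model (the baseline the paper
    compares against, not its Enhanced variant): coherent sum of the
    line-of-sight ray and a single ground-reflected ray.

    d: horizontal TX-RX distance (m); h_t, h_r: antenna heights (m).
    """
    lam = 3e8 / f_hz                      # wavelength
    k = 2 * np.pi / lam                   # wavenumber
    d_los = np.hypot(d, h_t - h_r)        # direct-path length
    d_ref = np.hypot(d, h_t + h_r)        # ground-reflected-path length
    # Complex field: free-space amplitude per ray, reflection scaled by gamma
    field = (np.exp(-1j * k * d_los) / d_los
             + gamma * np.exp(-1j * k * d_ref) / d_ref)
    pr_lin = (lam / (4 * np.pi)) ** 2 * np.abs(field) ** 2  # Pr/Pt ratio
    return pt_dbm + 10 * np.log10(pr_lin)
```

Likewise, the 3D clustering-based feature extraction is only named, not specified. The following is a purely hypothetical illustration of what such a front end could look like, clustering UAV measurement positions in 3D with DBSCAN and summarizing received signal strength (RSS) per cluster; the function, its parameters, and the feature choices are invented for illustration and are not the authors' actual pipeline:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_rss_features(xyz, rss_dbm, eps=30.0, min_samples=5):
    """Hypothetical sketch: cluster UAV measurement points in 3D, then
    summarize each cluster by its centroid and RSS statistics. These
    per-cluster features would feed a downstream localization network."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz)
    features = []
    for lbl in set(labels) - {-1}:        # skip DBSCAN's noise label (-1)
        m = labels == lbl
        features.append(np.concatenate([
            xyz[m].mean(axis=0),                             # centroid x, y, z
            [rss_dbm[m].mean(), rss_dbm[m].std(), m.sum()],  # RSS stats, size
        ]))
    return np.asarray(features)
```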
Related papers
- EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization.
We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z)
- Hyperspectral Images Efficient Spatial and Spectral non-Linear Model with Bidirectional Feature Learning [7.06787067270941]
We propose a novel framework that significantly reduces data volume while enhancing classification accuracy.
Our model employs a bidirectional reversed convolutional neural network (CNN) to efficiently extract spectral features, complemented by a specialized block for spatial feature analysis.
arXiv Detail & Related papers (2024-11-29T23:32:26Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes.
The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Unleashing the Potential of Mamba: Boosting a LiDAR 3D Sparse Detector by Using Cross-Model Knowledge Distillation [22.653014803666668]
We propose a Faster LiDAR 3D object detection framework, called FASD, which implements heterogeneous model distillation by adaptively unifying cross-model voxel features.
We aim to distill the transformer's capacity for high-performance sequence modeling into Mamba models with low FLOPs, achieving a significant improvement in accuracy through knowledge transfer.
We evaluated the framework on benchmark datasets, including nuScenes, achieving a 4x reduction in resource consumption and a 1-2% performance improvement over current SoTA methods.
arXiv Detail & Related papers (2024-09-17T09:30:43Z)
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- R3D-AD: Reconstruction via Diffusion for 3D Anomaly Detection [12.207437451118036]
3D anomaly detection plays a crucial role in monitoring parts for localized inherent defects in precision manufacturing.
Embedding-based and reconstruction-based approaches are among the most popular and successful methods.
We propose R3D-AD, which reconstructs anomalous point clouds via a diffusion model for precise 3D anomaly detection.
arXiv Detail & Related papers (2024-07-15T16:10:58Z)
- DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection [42.07920565812081]
We propose a novel post-training weight pruning scheme for 3D object detection.
It determines redundant parameters in the pretrained model that lead to minimal distortion in both locality and confidence.
This framework aims to minimize detection distortion of network output to maximally maintain detection precision.
arXiv Detail & Related papers (2024-07-02T09:33:32Z)
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving a 7.8% improvement in F-score and a 28.6% improvement in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models [1.1965844936801797]
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots.
We present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds.
Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks (see the DDPM sketch after this list).
arXiv Detail & Related papers (2023-09-17T12:26:57Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
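Several entries above (R2DM, R3D-AD, IPoD) build on denoising diffusion probabilistic models. As shared background, here is a minimal sketch of the standard DDPM forward-noising step and a linear noise schedule, not any one paper's implementation:

```python
import numpy as np

# Linear beta schedule and cumulative product of alphas (standard DDPM setup)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def ddpm_forward(x0, t, alpha_bar):
    """Standard DDPM forward (noising) step, q(x_t | x_0):
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, I). The denoising network is trained to predict eps."""
    eps = np.random.randn(*x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```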