Position-Aware Self-supervised Representation Learning for Cross-mode Radar Signal Recognition
- URL: http://arxiv.org/abs/2602.11196v1
- Date: Fri, 30 Jan 2026 16:37:24 GMT
- Title: Position-Aware Self-supervised Representation Learning for Cross-mode Radar Signal Recognition
- Authors: Hongyang Zhang, Haitao Zhang, Yinhao Liu, Kunjie Lin, Yue Huang, Xinghao Ding
- Abstract summary: We propose a position-aware self-supervised framework that leverages pulse-level temporal dynamics without complex augmentations or masking. Using this framework, we evaluate cross-mode radar signal recognition under the long-tailed setting to assess adaptability and generalization. Experimental results demonstrate enhanced discriminability and robustness, highlighting practical applicability in real-world electromagnetic environments.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Radar signal recognition in open electromagnetic environments is challenging due to diverse operating modes and unseen radar types. Existing methods often overlook position relations in pulse sequences, limiting their ability to capture semantic dependencies over time. We propose RadarPos, a position-aware self-supervised framework that leverages pulse-level temporal dynamics without complex augmentations or masking, providing improved position relation modeling over contrastive learning or masked reconstruction. Using this framework, we evaluate cross-mode radar signal recognition under the long-tailed setting to assess adaptability and generalization. Experimental results demonstrate enhanced discriminability and robustness, highlighting practical applicability in real-world electromagnetic environments.
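The abstract does not spell out the pretext objective, so the sketch below is only one hypothetical illustration of position-aware self-supervision on pulse sequences: shuffle a pulse train and treat each pulse's original index as the label, alongside standard sinusoidal position codes. All names, shapes, and parameters here are invented for illustration, not taken from RadarPos:

```python
import numpy as np

def positional_encoding(seq_len, dim):
    """Standard sinusoidal position codes, one row per pulse index."""
    pos = np.arange(seq_len)[:, None].astype(float)
    i = np.arange(dim)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def order_pretext_batch(pulses, rng):
    """Shuffle the pulse order; the target for shuffled slot t is the
    original index of the pulse now sitting there."""
    perm = rng.permutation(len(pulses))
    return pulses[perm], perm  # (shuffled pulses, position targets)

rng = np.random.default_rng(0)
pulses = rng.normal(size=(16, 4))    # 16 pulses, 4 features each
shuffled, targets = order_pretext_batch(pulses, rng)
codes = positional_encoding(16, 8)   # could be added to pulse embeddings
```

A model trained to classify each shuffled pulse back to its original slot is forced to learn pulse-level temporal structure without masking or heavy augmentation, which is the general flavour of objective the abstract describes.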
Related papers
- RadarGen: Automotive Radar Point Cloud Generation from Cameras [64.69976771710057]
We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form. We show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data.
arXiv Detail & Related papers (2025-12-19T18:57:33Z) - LRFusionPR: A Polar BEV-Based LiDAR-Radar Fusion Network for Place Recognition [2.77989705536351]
In autonomous driving, place recognition is critical for global localization in GPS-denied environments. We propose LRFusionPR, which improves recognition accuracy and robustness by fusing LiDAR with either single-chip or scanning radar.
arXiv Detail & Related papers (2025-04-27T10:20:32Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
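The "distance-dependent clustering" idea can be sketched as a toy pre-processing step: a radar's fixed angular resolution makes the cross-range spacing between detections grow with range, so the cluster radius is widened proportionally to range. The naive single-link clustering below is an illustration under that assumption, not the paper's algorithm (parameter names and values are invented):

```python
import numpy as np

def adaptive_eps(points, base_eps=0.5, angular_res=0.02):
    """Cluster radius grows with range: at range r, cross-range spacing
    of detections is roughly r * angular_res (radians)."""
    r = np.linalg.norm(points, axis=1)
    return base_eps + r * angular_res

def cluster(points, base_eps=0.5, angular_res=0.02):
    """Naive single-link clustering with a distance-dependent threshold."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    eps = adaptive_eps(points, base_eps, angular_res)
    next_label = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            thr = np.minimum(eps[j], eps)  # symmetric merge threshold
            neigh = np.where((d <= thr) & (labels < 0))[0]
            labels[neigh] = next_label
            stack.extend(neigh.tolist())
        next_label += 1
    return labels

pts = np.array([[0.0, 0.0], [0.3, 0.0], [50.0, 0.0], [51.0, 0.0]])
labels = cluster(pts)  # distant pair merges despite a 1 m gap
```

With a fixed 0.5 m radius the two detections near 50 m would split apart; letting the radius grow with range keeps them in one cluster while still separating the near and far groups.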
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - Waveform Selection for Radar Tracking in Target Channels With Memory via Universal Learning [14.796960833031724]
Adapting the radar's waveform using partial information about the state of the scene has been shown to provide performance benefits in many practical scenarios.
This work examines a radar system which builds a compressed model of the radar-environment interface in the form of a context-tree.
The proposed approach is tested in a simulation study, and is shown to provide tracking performance improvements over two state-of-the-art waveform selection schemes.
arXiv Detail & Related papers (2021-08-02T21:27:56Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal.
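The building block of a CVCNN, a complex-valued convolution, can be assembled from four real convolutions. The minimal 1-D NumPy sketch below illustrates the arithmetic only, not the authors' architecture:

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution via four real ones:
    (a + ib) * (c + id) = (ac - bd) + i(ad + bc)."""
    a, b = x.real, x.imag
    c, d = w.real, w.imag
    real = np.convolve(a, c) - np.convolve(b, d)
    imag = np.convolve(a, d) + np.convolve(b, c)
    return real + 1j * imag

rng = np.random.default_rng(1)
x = rng.normal(size=8) + 1j * rng.normal(size=8)  # complex radar samples
w = rng.normal(size=3) + 1j * rng.normal(size=3)  # complex filter taps
y = complex_conv1d(x, w)
```

Because the real and imaginary parts are coupled through the cross terms, the filter acts on magnitude and phase jointly, which is why such layers preserve phase information better than two independent real-valued channels.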
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection [14.796960833031724]
A sequential decision process in which an adaptive radar system repeatedly interacts with a finite-state target channel is studied.
The radar is capable of passively sensing the spectrum at regular intervals, which provides side information for the waveform selection process.
It is shown that the waveform selection problem can be effectively addressed using a linear contextual bandit formulation.
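As a rough illustration of the linear contextual bandit formulation (ignoring the constraint handling the paper adds), the sketch below implements plain LinUCB: the context is a feature vector from the passively sensed spectrum, and each arm is a candidate waveform. All names and parameters are illustrative, not from the paper:

```python
import numpy as np

class LinUCB:
    """Plain LinUCB: one linear reward model per candidate waveform."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums
        self.alpha = alpha                               # exploration weight

    def select(self, ctx):
        """Pick the waveform with the highest optimistic reward estimate."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ ctx + self.alpha * np.sqrt(ctx @ A_inv @ ctx))
        return int(np.argmax(scores))

    def update(self, arm, ctx, reward):
        """Fold the observed reward into the chosen arm's model."""
        self.A[arm] += np.outer(ctx, ctx)
        self.b[arm] += reward * ctx

# Toy usage: spectrum feature [1, 0] favours waveform 0.
bandit = LinUCB(n_arms=2, dim=2)
ctx = np.array([1.0, 0.0])
for _ in range(100):
    bandit.update(0, ctx, 1.0)  # waveform 0 tracked well in this context
    bandit.update(1, ctx, 0.0)  # waveform 1 did not
```

The upper-confidence term shrinks as an arm accumulates observations in a given context direction, so the radar keeps exploring waveforms whose reward is still uncertain while exploiting the one that has tracked well.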
arXiv Detail & Related papers (2021-03-09T16:43:50Z) - Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning [11.259276512983492]
In this paper, a framework based on heterogeneous measurements is proposed for long-term place recognition.
A deep neural network is built with joint training in the learning stage, and then in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition.
The experimental results indicate that our model is able to perform multiple place recognitions: lidar-to-lidar, radar-to-radar and radar-to-lidar, while the learned model is trained only once.
arXiv Detail & Related papers (2021-01-30T15:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.