Unsupervised Domain Adaptation across FMCW Radar Configurations Using
Margin Disparity Discrepancy
- URL: http://arxiv.org/abs/2203.04588v1
- Date: Wed, 9 Mar 2022 09:11:06 GMT
- Title: Unsupervised Domain Adaptation across FMCW Radar Configurations Using
Margin Disparity Discrepancy
- Authors: Rodrigo Hernangomez, Igor Bjelakovic, Lorenzo Servadei, and Slawomir
Stanczak
- Abstract summary: In this work, we consider the problem of unsupervised domain adaptation across radar configurations in the context of deep-learning human activity classification.
We focus on the theory-inspired technique of Margin Disparity Discrepancy, which has already proven successful in the area of computer vision.
Our experiments extend this technique to radar data, achieving accuracy comparable to few-shot supervised approaches for the same classification problem.
- Score: 17.464353263281907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commercial radar sensing is gaining relevance and machine learning algorithms
constitute one of the key components that are enabling the spread of this radio
technology into areas like surveillance or healthcare. However, radar datasets
are still scarce and generalization cannot yet be achieved for all radar
systems, environmental conditions, or design parameters. A certain degree of
fine-tuning is, therefore, usually required to deploy machine-learning-enabled
radar applications. In this work, we consider the problem of unsupervised
domain adaptation across radar configurations in the context of deep-learning
human activity classification using frequency-modulated continuous-wave (FMCW)
radar. For that, we focus on the theory-inspired technique of Margin Disparity
Discrepancy, which has already proven successful in the area of computer
vision. Our experiments extend this technique to radar data, achieving
accuracy comparable to few-shot supervised approaches for the same
classification problem.
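For intuition, below is a minimal PyTorch sketch of the Margin Disparity Discrepancy transfer loss, following the formulation of Zhang et al. (ICML 2019); the identifiers and the default margin are illustrative assumptions, not code from this paper.

```python
# Hedged sketch of the Margin Disparity Discrepancy (MDD) transfer loss,
# following Zhang et al. (ICML 2019). Not the authors' implementation.
import torch
import torch.nn.functional as F

def mdd_transfer_loss(y_s, y_t, y_s_adv, y_t_adv, margin=4.0, eps=1e-6):
    """y_s, y_t: main-classifier logits on source/target batches.
    y_s_adv, y_t_adv: auxiliary-classifier logits computed on features that
    passed through a gradient reversal layer, so the feature extractor is
    trained to *reduce* the discrepancy the auxiliary head exposes."""
    pred_s = y_s.argmax(dim=1)  # main head's source predictions (pseudo-labels)
    pred_t = y_t.argmax(dim=1)  # main head's target predictions
    # The auxiliary head is trained to agree with the main head on source ...
    source_term = margin * F.cross_entropy(y_s_adv, pred_s)
    # ... and to disagree on target: drive p(pred_t) towards zero.
    p_t = F.softmax(y_t_adv, dim=1).gather(1, pred_t.unsqueeze(1)).squeeze(1)
    target_term = -torch.log(torch.clamp(1.0 - p_t, min=eps)).mean()
    return source_term + target_term

# Typical total objective (trade_off is a hyperparameter):
#   loss = F.cross_entropy(y_s, labels_s) + trade_off * mdd_transfer_loss(...)
```

The margin exponent weights how strongly the auxiliary head must match the main head on source data relative to how strongly it must diverge on target data.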
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Bootstrapping Autonomous Driving Radars with Self-Supervised Learning [13.13679517730015]
Training radar models is hindered by the cost and difficulty of annotating large-scale radar data.
We propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks.
When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP.
arXiv Detail & Related papers (2023-12-07T18:38:39Z)
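The summary above does not specify the pretext task; purely as an illustration, self-supervised pre-training of this kind often uses a contrastive InfoNCE objective between two augmented views of the same radar frame, as in the sketch below (all names are assumptions).

```python
# Generic contrastive (InfoNCE) loss commonly used for self-supervised
# pre-training; illustrative only, not the paper's actual pretext task.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: [batch, dim] embeddings of two augmented views of the same
    radar frames; matching rows are positives, all other pairs negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on diagonal
    return F.cross_entropy(logits, labels)
```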
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
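As a rough illustration of the query-based fusion EchoFusion describes, learnable BEV queries can cross-attend to flattened radar spectrum features; the sketch below is an assumption about the general pattern, not the paper's architecture.

```python
# Illustrative query-based fusion in the spirit of EchoFusion. All shapes,
# names, and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class BEVQueryFusion(nn.Module):
    def __init__(self, num_queries=200, dim=256, num_heads=8):
        super().__init__()
        # Learnable Bird's Eye View (BEV) queries.
        self.bev_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, radar_feats):
        # radar_feats: [batch, seq, dim] flattened radar spectrum features;
        # features from other sensors could be attended to in the same way.
        q = self.bev_queries.unsqueeze(0).expand(radar_feats.size(0), -1, -1)
        fused, _ = self.cross_attn(q, radar_feats, radar_feats)
        return fused  # [batch, num_queries, dim] BEV representation
```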
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies point-cloud convolutions to radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets have received little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
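A minimal sketch of the complex-valued convolution underlying CVCNNs is shown below, using the standard real/imaginary decomposition; it illustrates how phase information propagates through such a layer and is not the authors' exact implementation.

```python
# Sketch of a complex-valued convolution via the real/imaginary decomposition:
#   (a + ib) * (w + iv) = (a*w - b*v) + i(a*v + b*w)
# Illustrative building block only, not the paper's exact layer.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # real weights w
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # imag weights v

    def forward(self, x):
        # x: complex tensor, e.g. torch.randn(1, 1, 32, 32, dtype=torch.cfloat)
        a, b = x.real, x.imag
        real = self.conv_r(a) - self.conv_i(b)
        imag = self.conv_i(a) + self.conv_r(b)
        return torch.complex(real, imag)  # phase information is retained
```

Applied to complex radar data (e.g. range-Doppler maps or raw IF signals), layers like this propagate phase end to end, which is what the paper credits for improved interference removal.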
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high-definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Keep off the Grass: Permissible Driving Routes from Radar with Weak Audio Supervision [21.222339098241616]
Perception systems based on FMCW scanning radar maintain full performance regardless of environmental conditions.
By combining odometry, GPS, and the terrain labels from the audio classifier, we are able to construct a terrain-labelled trajectory of the robot.
Using a curriculum learning procedure, we then train a radar segmentation network to generalise beyond the initial labelling.
arXiv Detail & Related papers (2020-05-11T15:11:20Z)