Identifying Coordination in a Cognitive Radar Network -- A
Multi-Objective Inverse Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2211.06967v1
- Date: Sun, 13 Nov 2022 17:27:39 GMT
- Title: Identifying Coordination in a Cognitive Radar Network -- A
Multi-Objective Inverse Reinforcement Learning Approach
- Authors: Luke Snow and Vikram Krishnamurthy and Brian M. Sadler
- Abstract summary: This paper provides a novel multi-objective inverse reinforcement learning approach for detecting coordination among radars.
It also applies to more general problems of inverse detection and learning of multi-objective optimizing systems.
- Score: 30.65529797672378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider a target being tracked by a cognitive radar network. If the target
can intercept some radar network emissions, how can it detect coordination
among the radars? By 'coordination' we mean that the radar emissions satisfy
Pareto optimality with respect to multi-objective optimization over each
radar's utility. This paper provides a novel multi-objective inverse
reinforcement learning approach which allows for both detection of such Pareto
optimal ('coordinating') behavior and subsequent reconstruction of each radar's
utility function, given a finite dataset of radar network emissions. The method
for accomplishing this is derived from the micro-economic setting of Revealed
Preferences, and also applies to more general problems of inverse detection and
learning of multi-objective optimizing systems.
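The detector described in the abstract rests on revealed-preferences feasibility tests: observed behavior is consistent with (multi-objective) utility maximization if and only if a set of linear inequalities in the intercepted data has a solution. As a rough illustration of that machinery, the sketch below implements the classical single-objective Afriat feasibility test as a linear program; the paper's contribution is the multi-objective (Pareto-optimal) generalization of such a test, and the probe/response variable names used here are illustrative assumptions rather than the authors' formulation.

```python
# Minimal sketch: classical Afriat feasibility test from revealed preferences.
# The paper generalizes tests of this type to detect Pareto-optimal
# ("coordinating") behavior across several radar utilities; this single-utility
# version only illustrates the inequality-feasibility idea.
import numpy as np
from scipy.optimize import linprog

def afriat_consistent(probes, responses):
    """probes, responses: arrays of shape (T, m) -- hypothetical names for the
    intercepted probe signals and radar responses.  Returns True iff the data
    are consistent with maximization of some concave, monotone utility."""
    T = len(probes)
    n_vars = 2 * T                      # unknowns: u_1..u_T, lam_1..lam_T
    A_ub, b_ub = [], []
    for t in range(T):
        for s in range(T):
            if s == t:
                continue
            # Afriat inequality: u_s - u_t - lam_t * p_t . (x_s - x_t) <= 0
            row = np.zeros(n_vars)
            row[s] = 1.0
            row[t] = -1.0
            row[T + t] = -probes[t] @ (responses[s] - responses[t])
            A_ub.append(row)
            b_ub.append(0.0)
    # lam_t >= 1 is a harmless normalization of the requirement lam_t > 0.
    bounds = [(None, None)] * T + [(1.0, None)] * T
    res = linprog(c=np.zeros(n_vars), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.status == 0              # feasible <=> rationalizable behavior
```

When the inequalities are feasible, the recovered constants u_t and lam_t also define a piecewise-linear utility that rationalizes the data, which is the sense in which an approach of this kind can reconstruct each radar's utility function.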
Related papers
- Multi-Object Tracking based on Imaging Radar 3D Object Detection [0.13499500088995461]
This paper presents an approach for tracking surrounding traffic participants with a classical tracking algorithm.
Learning-based object detectors work adequately on lidar and camera data, whereas detectors operating on conventional radar data have proven inferior.
With improvements in radar sensor technology in the form of imaging radars, object detection performance on radar has improved greatly but remains limited compared to lidar sensors because of the sparsity of the radar point cloud.
The tracking algorithm must overcome the limited detection quality while generating consistent tracks.
arXiv Detail & Related papers (2024-06-03T05:46:23Z)
- Multistatic-Radar RCS-Signature Recognition of Aerial Vehicles: A Bayesian Fusion Approach [10.908489565519211]
Radar Automated Target Recognition (RATR) for Unmanned Aerial Vehicles (UAVs) involves transmitting Electromagnetic Waves (EMWs) and performing target type recognition on the received radar echo.
Previous studies highlighted the advantages of multistatic radar configurations over monostatic ones in RATR.
We propose a fully Bayesian RATR framework employing Optimal Bayesian Fusion (OBF) to aggregate classification probability vectors from multiple radars.
arXiv Detail & Related papers (2024-02-28T02:11:47Z)
- End-to-End Training of Neural Networks for Automotive Radar Interference Mitigation [9.865041274657823]
We propose a new method for training neural networks (NNs) for frequency-modulated continuous-wave (FMCW) radar mutual interference mitigation.
Instead of training NNs to regress from interfered to clean radar signals as in previous work, we train NNs directly on object detection maps.
We do so by performing a continuous relaxation of the cell-averaging constant false alarm rate (CA-CFAR) peak detector, a well-established algorithm for object detection using radar; a hedged sketch of this kind of relaxation is given after the list below.
arXiv Detail & Related papers (2023-12-15T13:47:16Z)
- Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Tracking multiple targets with multiple radars using Distributed Auctions [0.0]
We introduce a highly resilient algorithm for radar coordination based on decentralized and collaborative bundle auctions.
Our approach allows multiple targets to be tracked simultaneously, with up to two radars tracking the same target to improve accuracy.
arXiv Detail & Related papers (2023-07-31T08:14:29Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep learning-based method that applies point-cloud convolutions to radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Waveform Selection for Radar Tracking in Target Channels With Memory via Universal Learning [14.796960833031724]
Adapting the radar's waveform using partial information about the state of the scene has been shown to provide performance benefits in many practical scenarios.
This work examines a radar system which builds a compressed model of the radar-environment interface in the form of a context-tree.
The proposed approach is tested in a simulation study, and is shown to provide tracking performance improvements over two state-of-the-art waveform selection schemes.
arXiv Detail & Related papers (2021-08-02T21:27:56Z)
- Automotive Radar Interference Mitigation with Unfolded Robust PCA based on Residual Overcomplete Auto-Encoder Blocks [88.46770122522697]
In autonomous driving, radar systems play an important role in detecting targets such as other vehicles on the road.
Deep learning methods for automotive radar interference mitigation can successfully estimate the amplitude of targets, but fail to recover the phase of the respective targets.
We propose an efficient and effective technique that is able to estimate both amplitude and phase in the presence of interference.
arXiv Detail & Related papers (2020-10-14T09:41:06Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
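Picking up the forward reference from the interference-mitigation entry above: the following is a hedged sketch of what a continuous relaxation of a 1-D cell-averaging CFAR (CA-CFAR) detector can look like, with the hard threshold comparison replaced by a sigmoid so that gradients can flow to an upstream network. Window sizes, the scaling factor alpha, and the temperature are illustrative defaults; this is a generic reconstruction of the idea, not the authors' implementation.

```python
import numpy as np

def soft_ca_cfar(x, num_train=16, num_guard=2, alpha=3.0, temperature=0.1):
    """Soft (differentiable) 1-D cell-averaging CFAR.
    x: 1-D array of range-bin magnitudes.  Returns per-cell scores in (0, 1);
    letting temperature -> 0 recovers the usual hard CA-CFAR decision."""
    n = len(x)
    scores = np.zeros(n)
    half = num_train // 2
    for i in range(n):
        # Training cells on both sides of the cell under test, skipping guard cells.
        left = x[max(0, i - num_guard - half): max(0, i - num_guard)]
        right = x[i + 1 + num_guard: i + 1 + num_guard + half]
        train = np.concatenate([left, right])
        if train.size == 0:
            continue
        noise = train.mean()                         # cell-averaging noise estimate
        margin = x[i] - alpha * noise                # hard CFAR detects if margin > 0
        scores[i] = 1.0 / (1.0 + np.exp(-margin / temperature))  # sigmoid relaxation
    return scores
```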
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.