Poisson Conjugate Prior for PHD Filtering based Track-Before-Detect
Strategies in Radar Systems
- URL: http://arxiv.org/abs/2302.11356v1
- Date: Wed, 22 Feb 2023 13:03:31 GMT
- Title: Poisson Conjugate Prior for PHD Filtering based Track-Before-Detect
Strategies in Radar Systems
- Authors: Haiyi Mao, Cong Peng, Yue Liu, Jinping Tang, Hua Peng and Wei Yi
- Abstract summary: We propose a principled closed-form solution of the TBD-PHD filter for low signal-to-noise ratio (SNR) scenarios.
Also, sequential Monte Carlo implementations of dynamic and amplitude echo models are proposed for the radar system.
- Score: 9.04251355210029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variety of filters with track-before-detect (TBD) strategies have been
developed and applied to low signal-to-noise ratio (SNR) scenarios, including
the probability hypothesis density (PHD) filter. Assumptions of the standard
point measurement model based on detect-before-track (DBT) strategies are not
suitable for the amplitude echo model used by TBD strategies. Existing
TBD-PHD filters, however, mechanically reuse the measurement update formulas of
the DBT-PHD filter despite the mismatched models and assumptions.
In this paper, based on the Kullback-Leibler divergence minimization criterion,
finite set statistics theory and rigorous Bayes rule, a principled closed-form
solution of the TBD-PHD filter is derived. Furthermore, we emphasize that the
PHD filter is conjugate to the Poisson prior under TBD strategies. Next, a
capping operation is devised to handle the divergence of target number
estimation as SNR increases. Moreover, the sequential Monte Carlo
implementations of dynamic and amplitude echo models are proposed for the radar
system. Finally, Monte Carlo experiments exhibit good performance in Rayleigh
noise and low SNR scenarios.
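As a rough illustration only (not the authors' actual derivation), the sketch below shows one SMC-style TBD-PHD measurement update: each particle's weight is scaled by a Rayleigh amplitude likelihood ratio for its resolution cell, and the estimated target number is then capped, mirroring the capping operation the abstract describes. The amplitude model and all names (`tbd_phd_update`, `cap`, etc.) are hypothetical.

```python
import math

def rayleigh_amplitude_lr(a, snr):
    """Likelihood ratio of a cell amplitude under Rayleigh models:
    target-present (noise power 1 + SNR) vs. noise-only (unit power).
    A standard amplitude echo model, assumed here for illustration."""
    s2 = 1.0 + snr
    target = (a / s2) * math.exp(-a * a / (2.0 * s2))
    noise = a * math.exp(-a * a / 2.0)
    return target / noise

def tbd_phd_update(cells, weights, amplitudes, snr, cap):
    """One illustrative SMC measurement update: scale each particle's
    weight by the amplitude likelihood ratio of its cell, then cap the
    total PHD mass (the expected target number) so the estimate does
    not diverge as SNR grows."""
    updated = [w * rayleigh_amplitude_lr(amplitudes[c], snr)
               for c, w in zip(cells, weights)]
    n_est = sum(updated)           # expected number of targets
    if n_est > cap:                # capping operation
        updated = [w * cap / n_est for w in updated]
        n_est = cap
    return updated, n_est
```

A strong echo (high amplitude in a particle's cell) inflates that particle's weight; without the cap, the summed weights, and hence the target-number estimate, can grow without bound at high SNR.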
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Learned Pulse Shaping Design for PAPR Reduction in DFT-s-OFDM [13.870974874382025]
We propose a machine learning-based framework to determine the FDSS filter, optimizing a tradeoff between the symbol error rate (SER), the PAPR, and the spectral flatness requirements.
Numerical results show that learned FDSS filters lower the PAPR compared to conventional baselines, with minimal SER degradation.
arXiv Detail & Related papers (2024-04-24T18:50:56Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Nonlinear Filtering with Brenier Optimal Transport Maps [4.745059103971596]
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a dynamical system.
Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations in scenarios involving degenerate likelihoods or high-dimensional states.
In this paper, we explore an alternative method, which is based on estimating the Brenier optimal transport (OT) map from the current prior distribution of the state to the posterior distribution at the next time step.
arXiv Detail & Related papers (2023-10-21T01:34:30Z) - A Provably Efficient Model-Free Posterior Sampling Method for Episodic
Reinforcement Learning [50.910152564914405]
Existing posterior sampling methods for reinforcement learning are either model-based or lack worst-case theoretical guarantees beyond linear MDPs.
This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees.
arXiv Detail & Related papers (2022-08-23T12:21:01Z) - Neural Network-augmented Kalman Filtering for Robust Online Speech
Dereverberation in Noisy Reverberant Environments [13.49645012479288]
A neural network-augmented algorithm for noise-robust online dereverberation is proposed.
The presented framework allows for robust dereverberation on a single-channel noisy reverberant dataset.
arXiv Detail & Related papers (2022-04-06T11:38:04Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - Adaptive Low-Pass Filtering using Sliding Window Gaussian Processes [71.23286211775084]
We propose an adaptive low-pass filter based on Gaussian process regression.
We show that the estimation error of the proposed method is uniformly bounded.
arXiv Detail & Related papers (2021-11-05T17:06:59Z) - Predicting Flat-Fading Channels via Meta-Learned Closed-Form Linear
Filters and Equilibrium Propagation [38.42468500092177]
Predicting fading channels is a classical problem with a vast array of applications.
In practice, the Doppler spectrum is unknown, and the predictor has only access to a limited time series of estimated channels.
This paper proposes to leverage meta-learning in order to mitigate the requirements in terms of training data for channel fading prediction.
arXiv Detail & Related papers (2021-10-01T14:00:23Z) - Deep Reinforcement Learning-Based Beam Tracking for Low-Latency Services
in Vehicular Networks [39.407929561526906]
Ultra-Reliable and Low-Latency Communications (URLLC) services in vehicular networks on millimeter-wave bands present a significant challenge.
This paper gives a thorough study of this subject, first modifying the classical approaches, e.g., the Extended Kalman Filter (EKF) and the Particle Filter (PF).
It then proposes a Reinforcement Learning (RL)-based approach that can achieve the URLLC requirements in a typical intersection scenario.
arXiv Detail & Related papers (2020-02-13T15:21:24Z)
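Several entries above (notably the Brenier optimal-transport paper) position themselves against conventional sequential importance resampling (SIR) particle filters. For context, one SIR step for a toy scalar linear-Gaussian model can be sketched as follows; the model and all names are illustrative, not taken from any of the listed papers.

```python
import math
import random

def sir_step(particles, z, q_std=0.5, r_std=0.3):
    """One sequential importance resampling (SIR) step for the toy model
    x_k = 0.9 * x_{k-1} + process noise,  z_k = x_k + measurement noise.
    Very peaked (degenerate) likelihoods concentrate the weights on few
    particles, which is the limitation the OT-based approach targets."""
    # 1. Propagate each particle through the motion model.
    pred = [0.9 * x + random.gauss(0.0, q_std) for x in particles]
    # 2. Weight by the Gaussian measurement likelihood.
    w = [math.exp(-(z - x) ** 2 / (2.0 * r_std ** 2)) for x in pred]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # 3. Multinomial resampling: redraw particles proportional to weight.
    return random.choices(pred, weights=w, k=len(pred))

random.seed(0)
particles = [random.gauss(0.0, 1.0) for _ in range(200)]
for z in [0.2, 0.4, 0.5]:                   # a short measurement record
    particles = sir_step(particles, z)
estimate = sum(particles) / len(particles)  # posterior mean estimate
```

The resampling in step 3 is what distinguishes SIR from plain importance sampling: it discards low-weight particles, at the cost of sample impoverishment when the likelihood is degenerate.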
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.