A Fast and Simple Algorithm for computing the MLE of Amplitude Density
Function Parameters
- URL: http://arxiv.org/abs/2311.07951v1
- Date: Tue, 14 Nov 2023 07:04:47 GMT
- Title: A Fast and Simple Algorithm for computing the MLE of Amplitude Density
Function Parameters
- Authors: Mahdi Teimouri
- Abstract summary: In this work, the maximum likelihood estimator (MLE) is proposed for the parameters of the amplitude distribution.
It is proved that the projected data follow a zero-location symmetric $\alpha$-stable distribution for which the MLE can be computed quite fast.
The average of the MLEs computed from the two projections is taken as the estimator of the amplitude distribution parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last decades, the family of $\alpha$-stable distributions has proven
to be useful for modelling in telecommunication systems. Particularly, in the
case of radar applications, finding a fast and accurate estimation for the
amplitude density function parameters appears to be very important. In this
work, the maximum likelihood estimator (MLE) is proposed for parameters of the
amplitude distribution. To do this, the amplitude data are \emph{projected} on
the horizontal and vertical axes using two simple transformations. It is proved
that the \emph{projected} data follow a zero-location symmetric $\alpha$-stable
distribution for which the MLE can be computed quite fast. The average of the
MLEs computed from the two \emph{projections} is taken as the estimator of the
parameters of the amplitude distribution. The performance of the proposed
\emph{projection} method is demonstrated through a simulation study and the
analysis of two sets of real radar data.
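To make the projection idea concrete, below is a minimal Python sketch of the recipe the abstract describes; it is not the author's implementation. Two assumptions are made: the "two simple transformations" are taken to be multiplication of each amplitude by the cosine and sine of an independently drawn uniform phase, and scipy's generic (and slow) `levy_stable.fit` stands in for the paper's fast MLE routine for the symmetric zero-location case.

```python
# Hedged sketch of the "projection" estimator described in the abstract.
# Assumptions (not from the paper): uniform random phases for the projection,
# and scipy's generic levy_stable.fit as a stand-in for the fast MLE.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def estimate_amplitude_params(r):
    """Average the symmetric alpha-stable MLEs from two axis projections of r."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=r.shape)    # assumed uniform phase
    projections = (r * np.cos(theta), r * np.sin(theta))   # horizontal / vertical axes
    alphas, scales = [], []
    for z in projections:
        # MLE of a zero-location symmetric alpha-stable law (beta and loc fixed at 0).
        alpha, _beta, _loc, scale = levy_stable.fit(z, fbeta=0.0, floc=0.0)
        alphas.append(alpha)
        scales.append(scale)
    return np.mean(alphas), np.mean(scales)                # averaged estimator

# Toy data: amplitudes formed from two independent symmetric alpha-stable
# components (a simplifying assumption about the clutter model).
x = levy_stable.rvs(1.5, 0.0, size=500, random_state=1)
y = levy_stable.rvs(1.5, 0.0, size=500, random_state=2)
print(estimate_amplitude_params(np.hypot(x, y)))
```

The fitting step is where the paper's contribution lies: a fast MLE for the zero-location symmetric $\alpha$-stable case would replace the generic numerical fit used in this sketch.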
Related papers
- MixLight: Borrowing the Best of both Spherical Harmonics and Gaussian Models [69.39388799906409]
Existing works estimate illumination by generating illumination maps or regressing illumination parameters.
This paper presents MixLight, a joint model that utilizes the complementary characteristics of SH and SG to achieve a more complete illumination representation.
arXiv Detail & Related papers (2024-04-19T10:17:10Z) - GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Gaussian process regression and conditional Karhunen-Lo\'{e}ve models
for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that makes it possible to describe quasar spectra completely.
We also introduce a new method of inference of Gaussian process parameters, which we call \textit{Neural Inference}.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
arXiv Detail & Related papers (2022-11-17T13:01:26Z) - Beyond EM Algorithm on Over-specified Two-Component Location-Scale
Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z) - Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA [0.0]
This paper describes a novel non-parametric noise reduction technique from the point of view of Bayesian inference.
It iteratively evaluates possible smoothed versions of the data, the smooth models, obtaining an estimation of the underlying signal.
Iterations stop based on the evidence and the $\chi^2$ statistic of the last smooth model, and we compute the expected value of the signal.
arXiv Detail & Related papers (2022-01-13T18:54:31Z) - Gaussian Process Subspace Regression for Model Reduction [7.41244589428771]
Subspace-valued functions arise in a wide range of problems, including parametric reduced order modeling (PROM).
In PROM, each parameter point can be associated with a subspace, which is used for Petrov-Galerkin projections of large system matrices.
We propose a novel Bayesian nonparametric model for subspace prediction: the Gaussian Process Subspace regression (GPS) model.
arXiv Detail & Related papers (2021-07-09T20:41:23Z) - Gravitational-wave parameter estimation with autoregressive neural
network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
arXiv Detail & Related papers (2020-02-18T15:44:04Z) - Statistical Outlier Identification in Multi-robot Visual SLAM using
Expectation Maximization [18.259478519717426]
This paper introduces a novel and distributed method for detecting inter-map loop closure outliers in simultaneous localization and mapping (SLAM).
The proposed algorithm does not rely on a good initialization and can handle more than two maps at a time.
arXiv Detail & Related papers (2020-02-07T06:34:44Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)