Study of List-Based OMP and an Enhanced Model for Direction Finding with
Non-Uniform Arrays
- URL: http://arxiv.org/abs/2105.03774v1
- Date: Sat, 8 May 2021 20:43:13 GMT
- Title: Study of List-Based OMP and an Enhanced Model for Direction Finding with
Non-Uniform Arrays
- Authors: W. S. Leite and R. C. de Lamare
- Abstract summary: This paper proposes an enhanced coarray transformation model (EDCTM) and a mixed greedy maximum likelihood algorithm called List-Based Maximum Likelihood Orthogonal Matching Pursuit (LBML-OMP).
The proposed EDCTM approach obtains improved estimates when Khatri-Rao product-based models are used to generate difference coarrays under the assumption of uncorrelated sources.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an enhanced coarray transformation model (EDCTM) and a
mixed greedy maximum likelihood algorithm called List-Based Maximum Likelihood
Orthogonal Matching Pursuit (LBML-OMP) for direction-of-arrival estimation with
non-uniform linear arrays (NLAs). The proposed EDCTM approach obtains improved
estimates when Khatri-Rao product-based models are used to generate difference
coarrays under the assumption of uncorrelated sources. In the proposed LBML-OMP
technique, for each iteration a set of candidates is generated based on the
correlation-maximization between the dictionary and the residue vector.
LBML-OMP then chooses the best candidate based on a reduced-complexity
asymptotic maximum likelihood decision rule. Simulations show the improved
results of EDCTM over existing approaches and that LBML-OMP outperforms
existing sparse recovery algorithms as well as Spatial Smoothing Multiple
Signal Classification with NLAs.
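
The abstract outlines the LBML-OMP loop at a high level: at each iteration, shortlist several dictionary atoms by their correlation with the residual, then keep the atom favoured by a reduced-complexity asymptotic ML rule. The NumPy sketch below follows that outline under explicit assumptions: the coarray dictionary produced by the EDCTM model is replaced by a generic dictionary matrix, the asymptotic ML rule is replaced by a plain least-squares residual comparison, and the function name `list_based_omp` and its parameters are illustrative rather than the authors' code.

```python
# Illustrative sketch only: list-based OMP with a least-squares surrogate for
# the paper's reduced-complexity asymptotic ML decision rule.
import numpy as np

def list_based_omp(A, y, k, list_size=4):
    """Recover a k-sparse vector from y ~= A x by keeping, at each iteration,
    the best of `list_size` candidate atoms (largest |correlation| with the
    residual), where "best" means smallest least-squares residual."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = -np.inf                      # never re-select an atom
        candidates = np.argsort(corr)[-list_size:]   # candidate list
        best = (np.inf, None, None)                  # (cost, atom, coefficients)
        for j in candidates:
            trial = support + [int(j)]
            coef, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
            cost = np.linalg.norm(y - A[:, trial] @ coef)   # surrogate ML cost
            if cost < best[0]:
                best = (cost, int(j), coef)
        support.append(best[1])
        x[:] = 0
        x[support] = best[2]
        residual = y - A @ x
    return x, support

# Toy usage with a random dictionary (the paper would use the EDCTM coarray model).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat, supp = list_based_omp(A, y, k=3)
print(sorted(supp))   # ideally [5, 37, 80]
```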
Related papers
- Graph-Structured Speculative Decoding [52.94367724136063]
Speculative decoding has emerged as a promising technique to accelerate the inference of Large Language Models.
We introduce an innovative approach utilizing a directed acyclic graph (DAG) to manage the drafted hypotheses.
We observe a remarkable speedup of 1.73× to 1.96×, significantly surpassing standard speculative decoding.
arXiv Detail & Related papers (2024-07-23T06:21:24Z)
- Regularized Projection Matrix Approximation with Applications to Community Detection [1.3761665705201904]
This paper introduces a regularized projection matrix approximation framework designed to recover cluster information from the affinity matrix.
We investigate three distinct penalty functions, each specifically tailored to address bounded, positive, and sparse scenarios.
Numerical experiments conducted on both synthetic and real-world datasets reveal that our regularized projection matrix approximation approach significantly outperforms state-of-the-art methods in clustering performance.
arXiv Detail & Related papers (2024-05-26T15:18:22Z)
- Multi-Reference Preference Optimization for Large Language Models [56.84730239046117]
We introduce a novel closed-form formulation for direct preference optimization using multiple reference models.
The resulting algorithm, Multi-Reference Preference Optimization (MRPO), leverages broader prior knowledge from diverse reference models.
Our experiments demonstrate that LLMs finetuned with MRPO generalize better in various preference data, regardless of data scarcity or abundance.
arXiv Detail & Related papers (2024-05-26T00:29:04Z)
- Algorithme EM régularisé [0.0]
This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size.
Experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
arXiv Detail & Related papers (2023-07-04T23:19:25Z)
- Regularized EM algorithm [9.367612782346205]
We present a regularized EM algorithm for GMMs that can make efficient use of prior knowledge and cope with low-sample-support (LSS) situations.
We show that the theoretical guarantees of convergence hold, leading to a better-performing EM algorithm for structured covariance matrix models or in low-sample settings.
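
Both EM entries above describe the same basic idea: regularize the EM updates with prior structure so that estimation stays stable when the sample size is small relative to the dimension. The sketch below shows one common flavour of this, assuming a simple shrinkage of each component covariance toward a scaled-identity target in the M-step; the shrinkage weight `rho`, the target, and the function name are illustrative assumptions, not the papers' exact regularizer.

```python
# Illustrative sketch: EM for a Gaussian mixture with covariance shrinkage.
import numpy as np

def regularized_em_gmm(X, K, rho=0.3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, K, replace=False)]               # crude initialization
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
    weights = np.full(K, 1.0 / K)
    target = np.trace(np.cov(X.T)) / d * np.eye(d)           # prior covariance target
    for _ in range(n_iter):
        # E-step: responsibilities from log-Gaussian densities
        resp = np.zeros((n, K))
        for k in range(K):
            diff = X - means[k]
            inv = np.linalg.inv(covs[k])
            logdet = np.linalg.slogdet(covs[k])[1]
            quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
            resp[:, k] = np.log(weights[k]) - 0.5 * (quad + logdet + d * np.log(2 * np.pi))
        resp = np.exp(resp - resp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step with covariance shrinkage toward the target
        Nk = resp.sum(axis=0)
        weights = Nk / n
        for k in range(K):
            means[k] = resp[:, k] @ X / Nk[k]
            diff = X - means[k]
            S = (resp[:, k, None] * diff).T @ diff / Nk[k]
            covs[k] = (1 - rho) * S + rho * target           # regularization step
    return weights, means, covs
```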
arXiv Detail & Related papers (2023-03-27T08:32:20Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Low-Cost Maximum Entropy Covariance Matrix Reconstruction Algorithm for Robust Adaptive Beamforming [0.0]
We present a novel low-complexity adaptive beamforming technique using a gradient algorithm to avoid matrix inversions.
The proposed method exploits algorithms based on the maximum entropy power spectrum (MEPS) to estimate the noise-plus-interference covariance matrix (MEPS-NPIC).
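
The summary above highlights a gradient algorithm that avoids explicit matrix inversions in adaptive beamforming. The sketch below illustrates that general idea, projected gradient descent on the beamformer output power under the distortionless constraint w^H a = 1; it does not reproduce the paper's MEPS-based covariance reconstruction, and the step-size rule, iteration count, and example scenario are assumptions.

```python
# Illustrative sketch: inversion-free beamformer via projected gradient descent.
import numpy as np

def gradient_mvdr(R, a, n_iter=2000, mu=None):
    """Minimize w^H R w subject to w^H a = 1 without inverting R."""
    if mu is None:
        mu = 1.0 / np.trace(R).real          # safe step: lambda_max(R) <= trace(R)
    aHa = (a.conj() @ a).real
    w = a / aHa                               # feasible start: w^H a = 1
    for _ in range(n_iter):
        w = w - mu * (R @ w)                            # gradient step
        w = w + np.conj(1 - w.conj() @ a) / aHa * a     # re-project onto w^H a = 1
    return w

# Example: 10-element half-wavelength ULA, desired signal at broadside,
# one strong interferer off-axis (covariance built directly for brevity).
n_el = 10
a  = np.exp(1j * np.pi * np.arange(n_el) * np.sin(0.0))
ai = np.exp(1j * np.pi * np.arange(n_el) * np.sin(0.6))
R  = np.eye(n_el) + 10.0 * np.outer(ai, ai.conj())
w  = gradient_mvdr(R, a)
print(abs(w.conj() @ a))    # ~1: distortionless response toward the desired direction
print(abs(w.conj() @ ai))   # small: interference suppressed
```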
arXiv Detail & Related papers (2020-12-28T16:26:55Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
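
The plug-and-play structure described above, an AMP recursion whose denoiser can be swapped out, can be sketched as follows. The learned Gaussian-mixture denoiser of L-GM-AMP is replaced here by simple soft-thresholding, and the threshold rule, parameter values, and i.i.d. Gaussian dictionary are illustrative assumptions.

```python
# Illustrative sketch: AMP with a swappable (here: soft-threshold) denoiser.
import numpy as np

def soft_threshold(u, tau):
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp(A, y, n_iter=30, alpha=1.1):
    """Approximate message passing for y ~= A x with i.i.d. entries of A
    scaled by 1/sqrt(m). Any denoiser (and its mean derivative) can be
    plugged in where soft_threshold is used below."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)        # noise-level proxy
        u = x + A.T @ z                                      # pseudo-data
        x_new = soft_threshold(u, tau)                       # plug-in denoiser
        onsager = (n / m) * z * np.mean(np.abs(u) > tau)     # Onsager correction
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy usage: 25-sparse signal, 250 measurements of dimension 500.
rng = np.random.default_rng(1)
m, n = 250, 500
A = rng.standard_normal((m, n)) / np.sqrt(m)    # scaling assumed by AMP
x0 = np.zeros(n)
x0[rng.choice(n, 25, replace=False)] = rng.standard_normal(25)
x_hat = amp(A, A @ x0)
```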
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of a linear combination of first-order and high-order base Laplacian matrices.
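
As a rough illustration of combining first-order and high-order bases across views, the sketch below averages each view's affinity with its square, builds a normalized Laplacian, and clusters its leading eigenvectors. The equal view weights, the fixed mixing parameter `beta`, and the use of W @ W as the high-order term are assumptions; the paper learns this combination (and searches its neighborhood) rather than fixing it.

```python
# Illustrative sketch only; assumes nonnegative, symmetric affinity matrices.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def multi_order_spectral_clustering(affinities, n_clusters, beta=0.5):
    """affinities: list of (n, n) symmetric nonnegative matrices, one per view."""
    n = affinities[0].shape[0]
    W = np.zeros((n, n))
    for A in affinities:
        A2 = A @ A                                    # high-order base
        W += (1 - beta) * A + beta * A2 / A2.max()    # mix orders (fixed weights)
    W /= len(affinities)
    d = W.sum(axis=1) + 1e-12
    L = np.eye(n) - W / np.sqrt(np.outer(d, d))       # normalized Laplacian
    _, vecs = eigh(L, subset_by_index=[0, n_clusters - 1])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters, n_init=10).fit_predict(vecs)
```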
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.