Analysis of Partially-Calibrated Sparse Subarrays for Direction Finding with Extended Degrees of Freedom
- URL: http://arxiv.org/abs/2408.03236v1
- Date: Tue, 6 Aug 2024 14:48:34 GMT
- Title: Analysis of Partially-Calibrated Sparse Subarrays for Direction Finding with Extended Degrees of Freedom
- Authors: W. S. Leite, R. C. de Lamare
- Abstract summary: We present the Generalized Coarray Multiple Signal Classification (GCA-MUSIC) DOA estimation algorithm for scenarios with partially-calibrated sparse subarrays.
The proposed GCA-MUSIC algorithm exploits the difference coarray for each subarray, followed by a specific pseudo-spectrum merging rule.
Numerical simulations show that the proposed GCA-MUSIC has better performance than other similar strategies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the problem of direction-of-arrival (DOA) estimation using multiple partially-calibrated sparse subarrays. In particular, we present the Generalized Coarray Multiple Signal Classification (GCA-MUSIC) DOA estimation algorithm for scenarios with partially-calibrated sparse subarrays. The proposed GCA-MUSIC algorithm exploits the difference coarray of each subarray, followed by a specific pseudo-spectrum merging rule based on the intersection of the signal subspaces associated with each subarray. This rule assumes no a priori knowledge of the cross-covariance between subarrays. In this way, only the second-order statistics of each subarray are used to estimate the directions with increased degrees of freedom, i.e., the estimation procedure preserves the coarray MUSIC and sparse-array properties that allow estimating more sources than the number of physical sensors in each subarray. Numerical simulations show that the proposed GCA-MUSIC outperforms other similar strategies.
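For concreteness, below is a minimal sketch of the per-subarray coarray MUSIC step the abstract describes, assuming integer sensor positions in half-wavelength units and a hole-free difference coarray; the function name, geometry convention, and the product merge noted at the end are illustrative assumptions, with the product serving only as a crude surrogate for the paper's subspace-intersection merging rule.

```python
import numpy as np

def coarray_music(positions, X, K, grid):
    """Coarray MUSIC pseudo-spectrum for one sparse subarray.

    positions : integer sensor positions (half-wavelength units)
    X         : N x T complex snapshot matrix
    K         : number of sources
    grid      : candidate angles in radians
    """
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    # Average covariance entries over each difference-coarray lag.
    lag_vals = {}
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            lag_vals.setdefault(p - q, []).append(R[i, j])
    r = {lag: np.mean(vals) for lag, vals in lag_vals.items()}
    # Largest hole-free span 0..L of the difference coarray.
    L = max(l for l in r if all(k in r for k in range(l + 1)))
    # Augmented Hermitian Toeplitz covariance on the virtual ULA 0..L.
    Rc = np.array([[r[a - b] for b in range(L + 1)] for a in range(L + 1)])
    _, V = np.linalg.eigh(Rc)                        # ascending eigenvalues
    En = V[:, : L + 1 - K]                           # noise subspace
    n = np.arange(L + 1)
    P = np.empty(len(grid))
    for k, theta in enumerate(grid):
        a = np.exp(1j * np.pi * n * np.sin(theta))   # virtual-ULA steering vector
        P[k] = 1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a)
    return P

# Per-subarray spectra would then be merged; a plain product such as
# P_merged = P_sub1 * P_sub2 is only a stand-in for the paper's rule.
```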
Related papers
- Direction of Arrival Estimation with Sparse Subarrays
We introduce array architectures that incorporate two distinct array categories, namely type-I and type-II arrays.
We devise two Direction of Arrival (DOA) estimation algorithms that are suitable for partially-calibrated array scenarios.
The algorithms are capable of estimating a greater number of sources than the number of available physical sensors.
arXiv Detail & Related papers (2024-08-17T23:47:24Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the covariance matrices required by the model.
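The speed-up described here rests on never materializing the covariance. The toy sketch below shows the generic version of that trick for a Gram-plus-diagonal precision matrix, using only matrix-vector products and conjugate gradients; the plain Bayesian linear model and all sizes are illustrative assumptions, not the paper's clustered hierarchical model.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, d = 200, 1000                      # d >> n: the regime where forming
A = rng.standard_normal((n, d))       # a d x d covariance is the bottleneck
y = A @ (rng.standard_normal(d) * (rng.random(d) < 0.05)) \
    + 0.1 * rng.standard_normal(n)
alpha = rng.random(d) + 0.5           # per-coefficient prior precisions

# Apply (A^T A + diag(alpha)) to a vector without forming the d x d matrix.
H = LinearOperator((d, d), matvec=lambda v: A.T @ (A @ v) + alpha * v)

# The posterior mean solves (A^T A + diag(alpha)) mu = A^T y.
mu, info = cg(H, A.T @ y)
assert info == 0                      # CG converged
```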
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Sparse Array Design for Direction Finding using Deep Learning
Deep learning (DL) techniques have been introduced for designing sparse arrays.
This chapter provides a synopsis of several direction finding applications of DL-based sparse arrays.
arXiv Detail & Related papers (2023-08-08T22:45:48Z)
- Low-complexity subspace-descent over symmetric positive definite manifold
We develop low-complexity algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold.
The proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix.
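A hedged sketch of that structural idea follows: keep the SPD iterate in Cholesky form X = L L^T and update L by multiplying with a sparse matrix, so positive definiteness is preserved by construction. The particular sparse direction chosen below (the k largest-magnitude entries of the pulled-back gradient) is an illustrative assumption, not the paper's subspace selection rule.

```python
import numpy as np

def spd_subspace_step(L, grad_f, eta=0.1, k=5):
    """One step on X = L @ L.T, with the update confined to a sparse factor."""
    n = L.shape[0]
    G = 2.0 * grad_f(L @ L.T) @ L                 # gradient pulled back to L
    D = np.tril(np.linalg.solve(L, G))            # direction in the factor frame
    idx = np.argsort(np.abs(D), axis=None)[:-k]   # zero all but k largest entries
    S = D.copy()
    S.flat[idx] = 0.0                             # sparse update matrix
    # New factor L(I - eta*S) is lower triangular with positive diagonal for
    # small eta, so the new iterate stays symmetric positive definite.
    return L @ (np.eye(n) - eta * S)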
arXiv Detail & Related papers (2023-05-03T11:11:46Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
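The two ingredients named in the summary combine as in the sketch below: a stochastic diagonal estimator driven by Rademacher probes, with each solve A x = v done by conjugate gradients, so diag(A^{-1}) is estimated without ever forming A^{-1}. The surrounding sparse-Bayesian-learning model is omitted and the test matrix is an assumption.

```python
import numpy as np
from scipy.sparse.linalg import cg

def estimate_inv_diagonal(A, n_probes=100, seed=None):
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    num = np.zeros(d)
    den = np.zeros(d)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe
        x, info = cg(A, v)                    # x = A^{-1} v via CG
        assert info == 0
        num += v * x
        den += v * v
    return num / den                          # elementwise diag(A^{-1}) estimate

# Check on a small, well-conditioned SPD matrix:
rng = np.random.default_rng(1)
M = rng.standard_normal((300, 300))
A = M @ M.T / 300 + np.eye(300)
approx = estimate_inv_diagonal(A, n_probes=200, seed=2)
exact = np.diag(np.linalg.inv(A))
print(np.max(np.abs(approx - exact)))         # shrinks as probes increase
```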
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Sparse GCA and Thresholded Gradient Descent
Generalized correlation analysis (GCA) is concerned with uncovering linear relationships across datasets.
We study sparse GCA when there are potentially multiple generalized correlations in data.
We propose a thresholded gradient descent algorithm for estimating GCA loading vectors and matrices in high dimensions.
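The thresholding mechanism itself is simple: a gradient step followed by hard thresholding, which keeps the loading vector sparse. The sketch below applies it to a PCA-like objective as a simplified stand-in for the paper's generalized correlation objective; the objective and all parameters are illustrative assumptions.

```python
import numpy as np

def thresholded_gradient_descent(C, k, eta=0.05, iters=500, seed=0):
    """Approximate a k-sparse leading direction of a covariance-like matrix C."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(C.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = u + eta * (C @ u)                   # ascent step on u^T C u
        keep = np.argsort(np.abs(u))[-k:]       # hard threshold: keep top-k
        mask = np.zeros_like(u)
        mask[keep] = 1.0
        u = u * mask
        u /= np.linalg.norm(u)                  # re-normalize to unit length
    return u
```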
arXiv Detail & Related papers (2021-07-01T11:15:20Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
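The setting is easy to reproduce numerically; the sketch below runs constant-stepsize SGD with iterate averaging on a synthetic linear regression problem and compares against ordinary least squares. Dimensions, step size, and noise level are illustrative assumptions, not the paper's regimes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.5 * rng.standard_normal(n)

w = np.zeros(d)
w_avg = np.zeros(d)
eta = 0.01                                   # constant step size
for t in range(n):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]             # per-sample squared-loss gradient
    w -= eta * g
    w_avg += (w - w_avg) / (t + 1)           # running average of iterates

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(w_avg - w_star), np.linalg.norm(w_ols - w_star))
```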
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- A simpler spectral approach for clustering in directed networks
We show that clustering based on the eigenvalue/eigenvector decomposition of the adjacency matrix is simpler than commonly used methods.
We provide numerical evidence for the superiority of the Gaussian Mixture clustering over the widely used k-means algorithm.
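A hedged sketch of that recipe: take the (generally complex) eigendecomposition of the asymmetric adjacency matrix, embed nodes by the leading eigenvectors, and cluster with a Gaussian mixture instead of k-means. The two-block directed stochastic block model below is a toy assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(2, size=n)
# Directed SBM: edges flow mostly from block 0 to block 1.
P = np.array([[0.05, 0.15], [0.02, 0.05]])
A = (rng.random((n, n)) < P[labels][:, labels]).astype(float)

vals, vecs = np.linalg.eig(A)                # adjacency may be asymmetric
order = np.argsort(-np.abs(vals))
# Embed each node by real/imaginary parts of the top two eigenvectors.
emb = np.column_stack([vecs[:, order[:2]].real, vecs[:, order[:2]].imag])
pred = GaussianMixture(n_components=2, n_init=5, random_state=0).fit_predict(emb)
acc = max(np.mean(pred == labels), np.mean(pred != labels))  # label switching
print(f"clustering accuracy: {acc:.2f}")
```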
arXiv Detail & Related papers (2021-02-05T14:16:45Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate, on two popular deep learning tasks, the empirical advantages over standard gradient methods.
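For orientation, here is a generic DP-SGD skeleton of the kind such algorithms build on: per-example gradient clipping plus Gaussian noise. The quadratic toy loss, clipping norm, and noise multiplier are illustrative assumptions; the paper's adaptive algorithms and privacy accounting are not reproduced here.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, eta=0.1, clip=1.0, noise_mult=1.0, seed=None):
    rng = np.random.default_rng(seed)
    grads = []
    for x, y in zip(X_batch, y_batch):
        g = (x @ w - y) * x                   # per-example gradient (toy loss)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm <= clip
        grads.append(g)
    g_bar = np.mean(grads, axis=0)
    # Gaussian noise calibrated to the clipped per-example sensitivity.
    g_bar += rng.normal(0.0, noise_mult * clip / len(grads), size=w.shape)
    return w - eta * g_bar
```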
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
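The sketch below shows the simpler of the two embeddings named above, a Gaussian projection, in plain sketch-and-solve form; the subsampled Hadamard variant needs a fast Walsh-Hadamard transform and is omitted. All sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10000, 50, 400           # tall problem, sketch size m >> d
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian embedding
x_sk = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]   # solve the sketched problem
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]           # full least-squares solution
print(np.linalg.norm(x_sk - x_ls) / np.linalg.norm(x_ls))
```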
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
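The iteration being studied is the iterative Hessian sketch, outlined below; a fresh Gaussian sketch stands in for the subsampled randomized Hadamard transform (which requires a fast Walsh-Hadamard routine), and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10000, 50, 500
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
for _ in range(10):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # fresh sketch each pass
    SA = S @ A
    # Newton-like step: (A^T S^T S A) dx = A^T (b - A x)
    dx = np.linalg.solve(SA.T @ SA, A.T @ (b - A @ x))
    x += dx

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_ls) / np.linalg.norm(x_ls))  # approaches 0
```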
arXiv Detail & Related papers (2020-02-03T16:17:50Z)