Gridless Evolutionary Approach for Line Spectral Estimation with Unknown
Model Order
- URL: http://arxiv.org/abs/2106.07323v1
- Date: Mon, 14 Jun 2021 11:54:37 GMT
- Title: Gridless Evolutionary Approach for Line Spectral Estimation with Unknown
Model Order
- Authors: Bai Yan, Qi Zhao, Jin Zhang, J. Andrew Zhang, Xin Yao
- Abstract summary: We propose a novel idea of simultaneously estimating the frequencies and model order by means of the atomic $l_0$ norm.
We also design a variable-length evolutionary algorithm to solve the proposed model.
Simulation results confirm the superiority of our approach in both frequency estimation and model order selection.
- Score: 31.507466525490123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gridless methods show great superiority in line spectral estimation. These
methods need to solve an atomic $l_0$ norm (i.e., the continuous analog of
$l_0$ norm) minimization problem to estimate frequencies and model order. Since
this problem is NP-hard to compute, relaxations of atomic $l_0$ norm, such as
nuclear norm and reweighted atomic norm, have been employed for promoting
sparsity. However, the relaxations give rise to a resolution limit,
subsequently leading to biased model order and convergence error. To overcome
the above shortcomings of relaxation, we propose a novel idea of simultaneously
estimating the frequencies and model order by means of the atomic $l_0$ norm.
To accomplish this idea, we build a multiobjective optimization model. The
measurement error and the atomic $l_0$ norm are taken as the two optimization
objectives. The proposed model directly exploits the model order via the atomic
$l_0$ norm, thus breaking the resolution limit. We further design a
variable-length evolutionary algorithm to solve the proposed model, which
includes two innovations. One is a variable-length coding and search strategy.
It flexibly codes and interactively searches diverse solutions with different
model orders. These solutions act as stepping stones that help fully explore
the variable and open-ended frequency search space and open up extensive
paths toward the optima. Another innovation is a model order pruning
mechanism, which heuristically prunes less contributive frequencies within the
solutions, thus significantly enhancing convergence and diversity. Simulation
results confirm the superiority of our approach in both frequency estimation
and model order selection.
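The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the authors' algorithm: the two objectives (measurement error and atomic $l_0$ norm, i.e., model order) are collapsed here into a single scalarized fitness `err + lam * K`, a simplification of the paper's multiobjective formulation, and the signal is assumed to follow the standard line-spectral model $y = \sum_k c_k a(f_k)$ with atoms $a(f)[n] = e^{i2\pi f n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(freqs, n_samples):
    """Matrix of complex sinusoid atoms: A[n, k] = exp(i*2*pi*f_k*n)."""
    n = np.arange(n_samples)[:, None]
    return np.exp(2j * np.pi * n * np.asarray(freqs)[None, :])

def fitness(y, freqs, lam=0.05):
    """Objective 1 (measurement error) via least-squares amplitudes,
    with objective 2 (model order, len(freqs)) folded in by scalarization."""
    A = steering(freqs, len(y))
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.linalg.norm(y - A @ c)
    return err + lam * len(freqs), err, c

def mutate(freqs):
    """Variable-length mutation: perturb, insert, or delete a frequency."""
    f = np.mod(freqs + 0.005 * rng.standard_normal(len(freqs)), 1.0)
    r = rng.random()
    if r < 0.2:                                 # grow the model order
        f = np.append(f, rng.random())
    elif r < 0.4 and len(f) > 1:                # shrink the model order
        f = np.delete(f, rng.integers(len(f)))
    return np.sort(f)

def prune(y, freqs, rel_tol=0.05):
    """Order pruning: drop components whose fitted amplitude is negligible."""
    _, _, c = fitness(y, freqs)
    keep = np.abs(c) >= rel_tol * np.abs(c).max()
    return freqs[keep]

def evolve(y, pop_size=30, generations=300):
    """(mu + lambda)-style search over variable-length frequency vectors."""
    pop = [np.sort(rng.random(rng.integers(1, 6))) for _ in range(pop_size)]
    for _ in range(generations):
        children = [prune(y, mutate(p)) for p in pop]
        pool = pop + children
        pool.sort(key=lambda f: fitness(y, f)[0])
        pop = pool[:pop_size]
    return pop[0]
```

The scalarization weight `lam` plays the role of the resolution-free sparsity pressure: adding a component only pays off if it reduces the residual by more than `lam`, so the surviving length of the best individual is a direct model-order estimate.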
Related papers
- Training Deep Learning Models with Norm-Constrained LMOs [56.00317694850397]
We study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball.
We propose a new family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems.
arXiv Detail & Related papers (2025-02-11T13:10:34Z) - Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture [36.53793044674861]
Gradient descent for matrix factorization is known to exhibit an implicit bias toward approximately low-rank solutions.
We introduce a new factorization model: $X \approx UDV^\top$, where $U$ and $V$ are constrained within norm balls, while $D$ is a diagonal factor allowing the model to span the entire search space.
arXiv Detail & Related papers (2025-01-27T18:56:22Z) - E$^2$M: Double Bounded $α$-Divergence Optimization for Tensor-based Discrete Density Estimation [3.9633191508712398]
We present a generalization of the expectation-maximization (EM) algorithm, called the E$^2$M algorithm. It circumvents this issue by first relaxing the optimization into minimization of a surrogate objective based on the Kullback-Leibler (KL) divergence. Our approach offers flexible modeling for a variety of low-rank structures, including the CP, Tucker, and Train formats.
arXiv Detail & Related papers (2024-05-28T14:28:28Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Infinite-Dimensional Sparse Learning in Linear System Identification [0.2867517731896504]
This paper proposes an infinite-dimensional sparse learning algorithm based on atomic norm regularization.
The difficulty in solving the problem lies in the fact that there are an infinite number of possible atomic models.
arXiv Detail & Related papers (2022-03-28T13:18:48Z) - Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
arXiv Detail & Related papers (2022-02-28T12:26:47Z) - Sampling Approximately Low-Rank Ising Models: MCMC meets Variational
Methods [35.24886589614034]
We consider quadratic definite Ising models on the hypercube with a general interaction $J$.
Our general result implies the first time sampling algorithms for low-rank Ising models.
arXiv Detail & Related papers (2022-02-17T21:43:50Z) - A Variational Inference Approach to Inverse Problems with Gamma
Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Universal and data-adaptive algorithms for model selection in linear
contextual bandits [52.47796554359261]
We consider the simplest non-trivial instance of model-selection: distinguishing a simple multi-armed bandit problem from a linear contextual bandit problem.
We introduce new algorithms that explore in a data-adaptive manner and provide guarantees of the form $\mathcal{O}(d^\alpha T^{1-\alpha})$.
Our approach extends to model selection among nested linear contextual bandits under some additional assumptions.
arXiv Detail & Related papers (2021-11-08T18:05:35Z) - Sharp global convergence guarantees for iterative nonconvex
optimization: A Gaussian process perspective [30.524043513721168]
We develop a general recipe for analyzing the convergence of iterative algorithms for a class of regression models.
In particular, we accurately capture both the convergence rate of the algorithm and the eventual error floor in the finite-sample regime.
We show sharp convergence rates for both higher-order algorithms based on alternating updates and first-order algorithms based on subgradient descent.
arXiv Detail & Related papers (2021-09-20T21:48:19Z) - Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain
Adaptation [154.2195491708548]
We study the prediction discriminability and diversity by studying the structure of the classification output matrix of a randomly selected data batch.
We propose Batch Nuclear-norm Maximization and Minimization, which performs nuclear-norm on the target output matrix to enhance the target prediction ability.
Experiments show that our method could boost the adaptation accuracy and robustness under three typical domain adaptation scenarios.
arXiv Detail & Related papers (2021-07-13T15:08:32Z) - A Bayesian Approach to Block-Term Tensor Decomposition Model Selection
and Computation [10.91885508254207]
The so-called block-term decomposition (BTD) tensor model, especially in its rank-$(L_r,L_r,1)$ version, has been recently receiving increasing attention.
The challenge of estimating the BTD model structure, namely the number of block terms and their individual ranks, has only recently started to attract significant attention.
A Bayesian approach is taken to addressing the problem of rank-$(L_r,L_r,1)$ BTD model selection and computation, based on the idea of imposing column sparsity jointly on the
arXiv Detail & Related papers (2021-01-08T09:37:21Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.