Complete Dictionary Learning via $\ell_p$-norm Maximization
- URL: http://arxiv.org/abs/2002.10043v3
- Date: Wed, 15 Jul 2020 11:58:45 GMT
- Title: Complete Dictionary Learning via $\ell_p$-norm Maximization
- Authors: Yifei Shen, Ye Xue, Jun Zhang, Khaled B. Letaief, and Vincent Lau
- Abstract summary: We investigate a family of $\ell_p$-norm ($p>2$, $p \in \mathbb{N}$) maximization approaches for the complete dictionary learning problem.
We show that the global maximizers of these formulations are very close to the true dictionary with high probability, even when Gaussian noise is present.
Experiments demonstrate that the $\ell_p$-based approaches enjoy higher computational efficiency and better robustness than conventional approaches.
- Score: 10.82081111170268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dictionary learning is a classic representation learning method that has been
widely applied in signal processing and data analytics. In this paper, we
investigate a family of $\ell_p$-norm ($p>2,p \in \mathbb{N}$) maximization
approaches for the complete dictionary learning problem from theoretical and
algorithmic aspects. Specifically, we prove that the global maximizers of these
formulations are very close to the true dictionary with high probability, even
when Gaussian noise is present. Based on the generalized power method (GPM), an
efficient algorithm is then developed for the $\ell_p$-based formulations. We
further show the efficacy of the developed algorithm: for the population GPM
algorithm over the sphere constraint, it first quickly enters the neighborhood
of a global maximizer, and then converges linearly in this region. Extensive
experiments demonstrate that the $\ell_p$-based approaches enjoy higher
computational efficiency and better robustness than conventional approaches,
and that $p=3$ performs the best.
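The generalized power method described in the abstract can be sketched for a single dictionary column as a simple fixed-point iteration: take a gradient step of the $\ell_p$ objective and project back onto the unit sphere. The sketch below is illustrative only, assuming data $Y = A_0 X$ with sparse $X$; the function name and parameters are my own, and the paper's full algorithm operates over orthogonal matrices to recover the complete dictionary, not a single column.

```python
import numpy as np

def gpm_lp_sphere(Y, p=3, iters=100, seed=0):
    """Generalized power method (GPM) sketch for
    max_{||q||=1} f(q) = (1/n) * sum_i |q^T y_i|^p, with p > 2.

    Each step moves in the gradient direction of f and renormalizes;
    since f is convex, this projection step never decreases f(q).
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)                           # random start on the sphere
    for _ in range(iters):
        z = Y.T @ q                                  # correlations q^T y_i
        g = Y @ (np.sign(z) * np.abs(z) ** (p - 1))  # proportional to grad f(q)
        q = g / np.linalg.norm(g)                    # project back to the sphere
    return q
```

For $p=2$ on $Y Y^T$ this reduces to the classical power iteration for the top eigenvector; the $p>2$ objective instead favors directions along which the correlations $q^T y_i$ are spiky, which is what makes it suitable for recovering columns of a dictionary with sparse codes.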
Related papers
- Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination [42.526664955704746]
We study sparse estimation tasks in Huber's contamination model with a focus on mean estimation, PCA, and linear regression.
For each of these tasks, we give the first sample and computationally efficient robust estimators with optimal error guarantees.
At the technical level, we develop a novel multidimensional filtering method in the sparse regime that may find other applications.
arXiv Detail & Related papers (2024-03-15T15:51:27Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$ additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Robust Methods for High-Dimensional Linear Learning [0.0]
We propose statistically robust and computationally efficient linear learning methods in the high-dimensional batch setting.
We instantiate our framework on several applications including vanilla sparse, group-sparse and low-rank matrix recovery.
For vanilla $s$-sparsity, we are able to reach the $s\log(d)/n$ rate under heavy tails and $\eta$-corruption.
arXiv Detail & Related papers (2022-08-10T17:00:41Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of gradient-ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.