K-Deep Simplex: Deep Manifold Learning via Local Dictionaries
- URL: http://arxiv.org/abs/2012.02134v2
- Date: Thu, 25 Feb 2021 04:20:48 GMT
- Title: K-Deep Simplex: Deep Manifold Learning via Local Dictionaries
- Authors: Pranay Tankala, Abiy Tasissa, James M. Murphy, Demba Ba
- Abstract summary: K-Deep Simplex is a unified optimization framework for nonlinear dimensionality reduction.
Our approach learns local dictionaries that represent a data point with reconstruction coefficients supported on the probability simplex.
Experiments show that the algorithm is highly efficient and performs competitively on synthetic and real data sets.
- Score: 10.261890123213623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose K-Deep Simplex (KDS), a unified optimization framework for
nonlinear dimensionality reduction that combines the strengths of manifold
learning and sparse dictionary learning. Our approach learns local dictionaries
that represent a data point with reconstruction coefficients supported on the
probability simplex. The dictionaries are learned using algorithm unrolling, an
increasingly popular technique for structured deep learning. KDS enjoys
tremendous computational advantages over related approaches and is both
interpretable and flexible. In particular, KDS is quasilinear in the number of
data points with scaling that depends on intrinsic geometric properties of the
data. We apply KDS to the unsupervised clustering problem and prove theoretical
performance guarantees. Experiments show that the algorithm is highly efficient
and performs competitively on synthetic and real data sets.
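As a rough illustration of the coding step the abstract describes, here is a minimal sketch of simplex-constrained sparse coding with a fixed number of unrolled projected-gradient iterations. It assumes a fixed dictionary and a simple locality-weighted least-squares objective; the function names, step size, and exact penalty are illustrative, and the paper's actual training (backpropagating through the unrolled layers to learn the dictionary) is omitted.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def kds_encode(x, D, lam=0.1, n_layers=30):
    """Encode x with coefficients a on the simplex: minimize
    ||x - D.T @ a||^2 + lam * sum_j a_j * ||x - d_j||^2 using n_layers
    projected-gradient steps (the 'unrolled' iterations). D holds one atom
    per row; the locality penalty weight lam is an assumption here, only
    loosely following the paper's regularization."""
    locality = lam * np.sum((D - x) ** 2, axis=1)    # per-atom distance penalty
    a = np.full(D.shape[0], 1.0 / D.shape[0])        # start at the barycenter
    step = 1.0 / (2.0 * np.linalg.norm(D @ D.T, 2))  # 1 / Lipschitz constant
    for _ in range(n_layers):
        grad = 2.0 * D @ (D.T @ a - x) + locality
        a = project_simplex(a - step * grad)
    return a

# Usage: coefficients come out nonnegative and summing to one.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 10))   # 50 hypothetical atoms in R^10
a = kds_encode(rng.standard_normal(10), D)
assert a.min() >= 0 and abs(a.sum() - 1.0) < 1e-9
```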
Related papers
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm that directly detects clusters in the data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To exploit the complementary information embedded in different views, we leverage tensor Schatten p-norm regularization.
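The abstract gives few details, but one plausible reading of the distance construction is to pass raw pairwise distances through a Butterworth-style low-pass response so that affinities decay smoothly with distance. The sketch below is that hypothetical reading only; the cutoff and order are made-up parameters, and the multi-view tensor Schatten p-norm regularization is omitted.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def butterworth_affinity(X, cutoff=1.0, order=2):
    """Map pairwise Euclidean distances through the Butterworth response
    1 / (1 + (d / cutoff)**(2 * order)): affinity near 1 for nearby points,
    decaying smoothly to 0 for distant ones. (An illustrative reading of
    the abstract, not the paper's exact construction.)"""
    d = squareform(pdist(X))  # dense (n, n) distance matrix
    return 1.0 / (1.0 + (d / cutoff) ** (2 * order))
```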
arXiv Detail & Related papers (2023-05-12T03:01:41Z) - Decentralized Complete Dictionary Learning via $\ell^{4}$-Norm Maximization [1.2995632804090198]
We propose a novel decentralized complete dictionary learning algorithm based on $\ell^{4}$-norm maximization.
Compared with existing decentralized dictionary learning algorithms, the novel algorithm has significant advantages in terms of per-iteration computational complexity, communication cost, and convergence rate in many scenarios.
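For context, the centralized problem behind this line of work maximizes the $\ell^{4}$ norm of A @ Y over orthogonal A, for which a simple fixed-point update (matching, stretching, projection) is known. The sketch below shows only that centralized update; the paper's decentralized coordination across agents is its contribution and is not reproduced here.

```python
import numpy as np

def l4_max_dictionary(Y, n_iter=50, seed=0):
    """Maximize ||A @ Y||_4^4 over orthogonal A via the MSP-style fixed
    point A <- Polar((A @ Y)**3 @ Y.T). Y holds one (preconditioned)
    sample per column; A @ Y should become sparse as iterations proceed."""
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    A = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthogonal init
    for _ in range(n_iter):
        G = (A @ Y) ** 3 @ Y.T         # gradient of (1/4) * ||A @ Y||_4^4
        U, _, Vt = np.linalg.svd(G)    # polar projection onto the orthogonal group
        A = U @ Vt
    return A
```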
arXiv Detail & Related papers (2022-11-07T15:36:08Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
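As a reminder of what a random feature approximation buys, here is the classical Rahimi-Recht construction for the RBF kernel: an explicit feature map whose inner products approximate kernel evaluations, replacing an n x n kernel matrix with an n x m feature matrix. RFAD applies the same idea to the NNGP kernel (via random finite-width networks); the RBF case below is just the simplest stand-in.

```python
import numpy as np

def random_fourier_features(X, n_features=1024, gamma=1.0, seed=0):
    """Return phi(X) such that phi(X) @ phi(Z).T ~= exp(-gamma * ||x - z||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```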
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - DeepDPM: Deep Clustering With an Unknown Number of Clusters [6.0803541683577444]
We introduce an effective deep-clustering method that does not require knowing the number of clusters K, since it infers this value during learning.
Using a split/merge framework, a dynamic architecture that adapts to the changing K, and a novel loss, our proposed method outperforms existing nonparametric methods.
arXiv Detail & Related papers (2022-03-27T14:11:06Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
In several simulations, our method scales better than existing approaches in both computation time and memory.
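The two named ingredients compose neatly: a probing estimator needs only products B @ z to estimate diag(B), and for B = A^{-1} those products are conjugate-gradient solves, so the covariance matrix is never formed. A minimal sketch of just that estimator (outside any SBL iteration), assuming A is symmetric positive definite:

```python
import numpy as np
from scipy.sparse.linalg import cg

def estimate_inverse_diagonal(A, n_probes=64, seed=0):
    """Estimate diag(A^{-1}) by probing (Bekas-Kurz-Saad / Hutchinson-style):
    draw Rademacher vectors z, solve A x = z with CG, average z * x."""
    rng = np.random.default_rng(seed)
    num = np.zeros(A.shape[0])
    den = np.zeros(A.shape[0])
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=A.shape[0])
        x, info = cg(A, z)       # x ~= A^{-1} z, using only A @ v products
        assert info == 0, "CG failed to converge"
        num += z * x
        den += z * z             # equals n_probes per entry for Rademacher z
    return num / den
```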
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries so that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based image processing applications such as denoising and inpainting.
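Since the abstract leans on MOD as a baseline, here is a minimal sketch of one classical MOD loop: OMP sparse coding alternating with the closed-form dictionary update D <- Y X^T (X X^T)^{-1}. This illustrates the baseline only, not the paper's statistical method; sizes and sparsity level are arbitrary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def mod_learn(Y, n_atoms=32, sparsity=5, n_iter=20, seed=0):
    """Method of Optimal Directions: Y has one signal per column,
    D one unit-norm atom per column, X the sparse codes."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = None
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)  # sparse coding step
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)              # least-squares update
        D /= np.linalg.norm(D, axis=0) + 1e-12             # renormalize atoms
    return D, X
```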
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
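The k-NN density estimator the abstract names has a standard closed form: p(x) ~ k / (n * V_d * r_k(x)^d), where r_k(x) is the distance to the k-th nearest sample and V_d the volume of the unit ball in R^d. A generic sketch (applied to raw vectors here, rather than OSAKD's output feature space):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(samples, queries, k=10):
    """Classical k-NN density estimate at each query point."""
    n, d = samples.shape
    r_k = cKDTree(samples).query(queries, k=k)[0][:, -1]  # k-th NN distance
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)       # volume of unit d-ball
    return k / (n * unit_ball * r_k ** d)
```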
arXiv Detail & Related papers (2021-08-26T14:01:04Z) - A Sparse Structure Learning Algorithm for Bayesian Network Identification from Discrete High-Dimensional Data [0.40611352512781856]
This paper addresses the problem of learning a sparse structure Bayesian network from high-dimensional discrete data.
We propose a score function that satisfies the sparsity and the DAG property simultaneously.
Specifically, we use a variance-reduction method in our optimization algorithm so that it works efficiently on high-dimensional data.
arXiv Detail & Related papers (2021-08-21T12:21:01Z) - Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries of the MNIST and ImageNet datasets, respectively.
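For orientation, the classical NNS baseline in Hamming space is bit-sampling LSH (Indyk-Motwani): hash each binary code by random subsets of its bits so that near codes collide often. The sketch below is only this data-oblivious baseline, the kind whose worst-case behavior the paper sets out to harden, not the paper's data-dependent algorithm.

```python
import numpy as np

def build_bit_sampling_lsh(codes, n_tables=8, bits_per_table=12, seed=0):
    """Index binary codes (shape (n, d), entries 0/1) into n_tables hash
    tables, each keyed by a random subset of bit positions."""
    rng = np.random.default_rng(seed)
    tables = []
    for _ in range(n_tables):
        idx = rng.choice(codes.shape[1], size=bits_per_table, replace=False)
        buckets = {}
        for i, code in enumerate(codes):
            buckets.setdefault(tuple(code[idx]), []).append(i)
        tables.append((idx, buckets))
    return tables

def lsh_candidates(tables, query):
    """Candidate near neighbors: the union of the query's buckets."""
    cands = set()
    for idx, buckets in tables:
        cands.update(buckets.get(tuple(query[idx]), []))
    return cands
```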
arXiv Detail & Related papers (2021-08-11T20:21:30Z) - FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval [41.125141897096874]
Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose the Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
arXiv Detail & Related papers (2021-05-15T03:53:48Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.