Minimalistic Unsupervised Learning with the Sparse Manifold Transform
- URL: http://arxiv.org/abs/2209.15261v2
- Date: Thu, 27 Apr 2023 22:05:23 GMT
- Title: Minimalistic Unsupervised Learning with the Sparse Manifold Transform
- Authors: Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, Yann LeCun
- Abstract summary: We describe a minimalistic and interpretable method for unsupervised learning that achieves performance close to the SOTA SSL methods.
With a one-layer deterministic sparse manifold transform, one can achieve 99.3% KNN top-1 accuracy on MNIST.
With a simple gray-scale augmentation, the model gets 83.2% KNN top-1 accuracy on CIFAR-10 and 57% on CIFAR-100.
- Score: 20.344274392350094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe a minimalistic and interpretable method for unsupervised
learning, without resorting to data augmentation, hyperparameter tuning, or
other engineering designs, that achieves performance close to the SOTA SSL
methods. Our approach leverages the sparse manifold transform, which unifies
sparse coding, manifold learning, and slow feature analysis. With a one-layer
deterministic sparse manifold transform, one can achieve 99.3% KNN top-1
accuracy on MNIST, 81.1% KNN top-1 accuracy on CIFAR-10 and 53.2% on CIFAR-100.
With a simple gray-scale augmentation, the model gets 83.2% KNN top-1 accuracy
on CIFAR-10 and 57% on CIFAR-100. These results significantly close the gap
between simplistic "white-box" methods and the SOTA methods. Additionally, we
provide visualization to explain how an unsupervised representation transform
is formed. The proposed method is closely connected to latent-embedding
self-supervised methods and can be treated as the simplest form of VICReg.
Though there remains a small performance gap between our simple constructive
model and SOTA methods, the evidence points to this as a promising direction
for achieving a principled and white-box approach to unsupervised learning.
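The headline numbers above are KNN top-1 accuracies computed on frozen embeddings. As an illustration of that evaluation protocol (not the paper's code), the sketch below implements 1-nearest-neighbor top-1 accuracy with numpy; the toy data, feature dimension, and cosine-similarity choice are assumptions for the example.

```python
import numpy as np

def knn_top1_accuracy(train_feats, train_labels, test_feats, test_labels):
    """1-NN top-1 accuracy on fixed embeddings, using cosine similarity."""
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    nearest = (test @ train.T).argmax(axis=1)   # index of closest training point
    return (train_labels[nearest] == test_labels).mean()

# Toy check: two classes clustered along distinct directions (hypothetical data)
rng = np.random.default_rng(0)
c0, c1 = np.zeros(8), np.zeros(8)
c0[0], c1[1] = 5.0, 5.0
tr = np.vstack([c0 + rng.normal(0, 0.1, (20, 8)),
                c1 + rng.normal(0, 0.1, (20, 8))])
tr_y = np.repeat([0, 1], 20)
te = np.vstack([c0 + rng.normal(0, 0.1, (5, 8)),
                c1 + rng.normal(0, 0.1, (5, 8))])
te_y = np.repeat([0, 1], 5)
print(knn_top1_accuracy(tr, tr_y, te, te_y))   # 1.0 on well-separated clusters
```

Because the embedding is fixed and deterministic, this single evaluation step is the entire downstream pipeline; no classifier is trained.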
Related papers
- Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement [29.675650285351768]
Machine unlearning (MU) has emerged as a way to enhance the privacy and trustworthiness of deep neural networks.
Approximate MU is a practical method for large-scale models.
We propose a fast-slow parameter update strategy to implicitly approximate the up-to-date salient unlearning direction.
arXiv Detail & Related papers (2024-09-29T15:17:33Z)
- SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors [80.6043267994434]
We propose SVFT, a simple approach that fundamentally differs from existing methods.
SVFT updates (W) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations.
Experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of parameters.
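The summary above describes updating W as a sparse combination of outer products of its own singular vectors, with only the combination coefficients trained. A minimal numpy sketch of that parameterization (not the authors' implementation; the diagonal sparsity pattern and matrix sizes are assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))                   # frozen pretrained weight
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Sparsity pattern over (u_i, v_j) pairs; keeping only "diagonal" pairs
# (u_i, v_i) is one hypothetical choice.
mask = np.eye(len(S), dtype=bool)
coeffs = np.zeros(mask.sum())                 # the only trainable parameters

def adapted_weight(coeffs):
    """W plus a sparse combination of outer products u_i v_j^T."""
    M = np.zeros((U.shape[1], Vt.shape[0]))
    M[mask] = coeffs
    return W + U @ M @ Vt

# With all coefficients at zero the adapted weight equals W exactly.
print(np.allclose(adapted_weight(coeffs), W))  # True
print(coeffs.size, W.size)                     # 4 trainable vs 24 frozen entries
```

The parameter saving comes from training `coeffs` alone while U, S, and Vt stay fixed by the pretrained weight's SVD.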
arXiv Detail & Related papers (2024-05-30T01:27:43Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off [19.230329532065635]
Sparse training can significantly reduce training costs by shrinking the model size.
Existing sparse training methods mainly use either random-based or greedy-based drop-and-grow strategies.
In this work, we consider the dynamic sparse training as a sparse connectivity search problem.
Experimental results show that sparse models (up to 98% sparsity) obtained by our proposed method outperform the SOTA sparse training methods.
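The drop-and-grow strategies mentioned above alternate between pruning weak connections and regrowing new ones. A minimal numpy sketch of one such update (a generic greedy variant, not this paper's method; the drop fraction and tensor sizes are assumptions) might be:

```python
import numpy as np

def drop_and_grow(weights, mask, grad, drop_frac=0.3):
    """One magnitude-drop / gradient-grow update of a sparsity mask.

    Drops the smallest-magnitude active weights, then regrows the same
    number of inactive connections with the largest gradient magnitude
    (a greedy-style choice; random regrowth is the other common option).
    """
    k = int(drop_frac * mask.sum())
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask)
    drop = active[np.argsort(np.abs(weights[active]))[:k]]     # weakest active
    grow = inactive[np.argsort(-np.abs(grad[inactive]))[:k]]   # strongest signal
    new_mask = mask.copy()
    new_mask[drop] = False
    new_mask[grow] = True
    return new_mask

rng = np.random.default_rng(0)
w, g = rng.normal(size=20), rng.normal(size=20)
mask = np.zeros(20, dtype=bool)
mask[:10] = True                      # 50% sparsity
new = drop_and_grow(w, mask, g)
print(new.sum())                      # 10: sparsity level is preserved
```

Because the drop and grow sets have equal size, the overall sparsity stays constant while the connectivity pattern is explored.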
arXiv Detail & Related papers (2022-11-30T01:22:25Z)
- Bag of Tricks for FGSM Adversarial Training [30.25966570584856]
Adversarial training (AT) with samples generated by Fast Gradient Sign Method (FGSM), also known as FGSM-AT, is a computationally simple method to train robust networks.
During its training procedure, an unstable mode of "catastrophic overfitting" has been identified in arXiv:2001.03994 [cs.LG], where the robust accuracy abruptly drops to zero within a single training step.
In this work, we provide the first study, which thoroughly examines a collection of tricks to overcome the catastrophic overfitting in FGSM-AT.
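For context, the FGSM perturbation at the heart of FGSM-AT is a single signed-gradient step. A one-function numpy sketch (framework-free; the example inputs are hypothetical):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=8 / 255):
    """FGSM: one signed-gradient step, clipped to the valid pixel range [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

x = np.array([0.2, 0.5, 1.0])        # pixel values
g = np.array([-1.3, 0.4, 2.0])       # gradient of the loss w.r.t. x
print(fgsm_perturb(x, g))            # [0.2 - 8/255, 0.5 + 8/255, 1.0]
```

Its cheapness (one gradient evaluation per step) is exactly why the catastrophic-overfitting instability studied here matters in practice.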
arXiv Detail & Related papers (2022-09-06T17:53:21Z)
- Two Heads are Better than One: Robust Learning Meets Multi-branch Models [14.72099568017039]
We propose Branch Orthogonality adveRsarial Training (BORT) to obtain state-of-the-art performance with solely the original dataset for adversarial training.
We evaluate our approach on CIFAR-10, CIFAR-100, and SVHN against ℓ∞ norm-bounded perturbations of size ε = 8/255.
arXiv Detail & Related papers (2022-08-17T05:42:59Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV)
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on image classification across all three datasets in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks [28.502489028888608]
Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training DNNs.
In adversarial training, the unlearnability of error-minimizing noise will severely degrade.
We propose a novel model-free method, named One-Pixel Shortcut, which perturbs only a single pixel of each image and makes the dataset unlearnable.
arXiv Detail & Related papers (2022-05-24T15:17:52Z)
- Towards Demystifying Representation Learning with Non-contrastive Self-supervision [82.80118139087676]
Non-contrastive methods of self-supervised learning learn representations by minimizing the distance between two views of the same image.
Tian et al. (2021) made an initial attempt at the first question and proposed DirectPred, which sets the predictor directly.
We show that in a simple linear network, DirectSet(α) provably learns a desirable projection matrix and also reduces the sample complexity on downstream tasks.
arXiv Detail & Related papers (2021-10-11T00:48:05Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We show that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- ScopeFlow: Dynamic Scene Scoping for Optical Flow [94.42139459221784]
We propose to modify the common training protocols of optical flow.
The improvement is based on observing the bias in sampling challenging data.
We find that both regularization and augmentation should decrease during the training protocol.
arXiv Detail & Related papers (2020-02-25T09:58:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.