Learning Decorrelated Representations Efficiently Using Fast Fourier Transform
- URL: http://arxiv.org/abs/2301.01569v2
- Date: Thu, 1 Jun 2023 06:51:57 GMT
- Title: Learning Decorrelated Representations Efficiently Using Fast Fourier Transform
- Authors: Yutaro Shigeto, Masashi Shimbo, Yuya Yoshikawa, Akikazu Takeuchi
- Abstract summary: We propose a relaxed decorrelating regularizer that can be computed in O(n d log d) time by Fast Fourier Transform.
The proposed regularizer exhibits accuracy comparable to that of existing regularizers in downstream tasks, while its training requires less memory and is faster for large d.
- Score: 3.932322649674071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Barlow Twins and VICReg are self-supervised representation learning models
that use regularizers to decorrelate features. Although these models are as
effective as conventional representation learning models, their training can be
computationally demanding if the dimension d of the projected embeddings is
high. As the regularizers are defined in terms of individual elements of a
cross-correlation or covariance matrix, computing the loss for n samples takes
O(n d^2) time. In this paper, we propose a relaxed decorrelating regularizer
that can be computed in O(n d log d) time by Fast Fourier Transform. We also
propose an inexpensive technique to mitigate undesirable local minima that
develop with the relaxation. The proposed regularizer exhibits accuracy comparable to that of existing regularizers in downstream tasks, while its training requires less memory and is faster for large d. The source code is
available.
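The abstract states only the asymptotic cost of the relaxed regularizer, so the snippet below is a minimal PyTorch sketch of one natural FFT-based relaxation consistent with that cost, not the authors' implementation: the sums along the circular off-diagonals of the cross-correlation matrix equal batch-averaged circular cross-correlations of the two embeddings, so they can be computed with FFTs in O(n d log d) time. The function name, the standardization step, and the choice to penalize only the decorrelation term are assumptions.

```python
import torch

def relaxed_decorrelation_penalty(za: torch.Tensor, zb: torch.Tensor) -> torch.Tensor:
    """Sketch of an FFT-based relaxed decorrelation penalty (assumed form).

    za, zb: (n, d) projected embeddings of two augmented views.
    Instead of penalizing every off-diagonal entry of the d x d
    cross-correlation matrix C (O(n d^2) work), penalize the sums along the
    circular off-diagonals of C. These sums are batch-averaged circular
    cross-correlations of za and zb, computable by FFT in O(n d log d) time.
    """
    n, d = za.shape
    # Standardize each feature over the batch, as in Barlow Twins.
    za = (za - za.mean(dim=0)) / (za.std(dim=0) + 1e-6)
    zb = (zb - zb.mean(dim=0)) / (zb.std(dim=0) + 1e-6)

    # Cross-correlation theorem: circ-corr(za, zb) = IFFT(conj(FFT(za)) * FFT(zb)).
    r = torch.fft.irfft(torch.conj(torch.fft.rfft(za)) * torch.fft.rfft(zb), n=d)
    r = r.mean(dim=0)  # r[k] = sum_i C[i, (i + k) mod d]

    # Drive every circular off-diagonal sum (k != 0) toward zero; the usual
    # invariance term on diag(C) costs only O(n d) and is left unchanged here.
    return (r[1:] ** 2).sum()
```

In a Barlow Twins style loss this term would stand in for the O(d^2) redundancy-reduction term, e.g. loss = invariance_term + lambda_ * relaxed_decorrelation_penalty(za, zb) with lambda_ a trade-off weight. The abstract also mentions an inexpensive technique to mitigate the local minima this relaxation can introduce, which is not reproduced here.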
Related papers
- Scalable Sparse Regression for Model Discovery: The Fast Lane to Insight [0.0]
Sparse regression applied to symbolic libraries has quickly emerged as a powerful tool for learning governing equations directly from data.
I present a general purpose, model sparse regression algorithm that extends a recently proposed exhaustive search.
It is intended to maintain agnostic sensitivity to small coefficients and be of reasonable computational cost for large symbolic libraries.
arXiv Detail & Related papers (2024-05-14T18:09:43Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Stabilizing Subject Transfer in EEG Classification with Divergence Estimation [17.924276728038304]
We propose several graphical models to describe an EEG classification task.
We identify statistical relationships that should hold true in an idealized training scenario.
We design regularization penalties to enforce these relationships in two stages.
arXiv Detail & Related papers (2023-10-12T23:06:52Z)
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators, called WTA-CRS, for matrix products with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance than existing ones (see the sketch of the underlying column-row sampling estimator after this list).
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
- RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations [56.59168541623729]
Training graph neural networks (GNNs) is time-consuming because sparse graph-based operations are hard to accelerate in hardware.
We explore trading off the computational precision to reduce the time complexity via sampling-based approximation.
We propose Randomized Sparse Computation, which for the first time demonstrates the potential of training GNNs with approximated operations.
arXiv Detail & Related papers (2022-10-19T17:25:33Z)
- Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A Memory-Efficient Inverse Problem Solver [26.87738024952936]
In inverse problems, we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
We propose a loop unrolled (LU) algorithm with shallow equilibrium regularizers (LUSER).
These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training.
arXiv Detail & Related papers (2022-10-10T19:50:37Z)
- Variational Sparse Coding with Learned Thresholding [6.737133300781134]
We propose a new approach to variational sparse coding that allows us to learn sparse distributions by thresholding samples.
We first evaluate and analyze our method by training a linear generator, showing that it has superior performance, statistical efficiency, and gradient estimation.
arXiv Detail & Related papers (2022-05-07T14:49:50Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Sketching as a Tool for Understanding and Accelerating Self-attention for Long Sequences [52.6022911513076]
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
Linformer and Informer reduce the quadratic complexity to linear (modulo logarithmic factors) via low-dimensional projection and row selection, respectively.
Based on the theoretical analysis, we propose Skeinformer to accelerate self-attention and further improve the accuracy of matrix approximation to self-attention.
arXiv Detail & Related papers (2021-12-10T06:58:05Z)
- The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit" from neural network calculations for simple instances, and a late (and accurate) exit for hard instances.
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
arXiv Detail & Related papers (2020-04-16T04:28:08Z)
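As referenced in the WTA-CRS entry above, the winner-take-all variant is not described in enough detail here to reproduce, but the plain column-row sampling (CRS) estimator it builds on is standard. The sketch below (function name and sampling distribution are assumptions, not the paper's method) shows an unbiased sampled estimate of a matrix product.

```python
import torch

def crs_matmul_estimate(A: torch.Tensor, B: torch.Tensor, k: int) -> torch.Tensor:
    """Unbiased column-row sampling (CRS) estimate of A @ B.

    A: (m, n), B: (n, p). Samples k column-row index pairs i with probability
    p_i proportional to ||A[:, i]|| * ||B[i, :]|| and rescales each sampled
    outer product by 1 / (k * p_i), so that E[estimate] = A @ B.
    """
    norms = torch.linalg.vector_norm(A, dim=0) * torch.linalg.vector_norm(B, dim=1)
    probs = norms / norms.sum()                       # (n,) sampling distribution
    idx = torch.multinomial(probs, k, replacement=True)
    scale = 1.0 / (k * probs[idx])                    # (k,) unbiasedness rescaling
    return (A[:, idx] * scale) @ B[idx, :]            # sum of rescaled outer products
```

The memory saving comes from keeping only the k sampled columns and rows instead of the full matrices; per the summary above, the paper's WTA-CRS estimators further reduce the variance of such sampled products when tuning transformers.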
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.