Deep Unfolded Tensor Robust PCA with Self-supervised Learning
- URL: http://arxiv.org/abs/2212.11346v1
- Date: Wed, 21 Dec 2022 20:34:42 GMT
- Title: Deep Unfolded Tensor Robust PCA with Self-supervised Learning
- Authors: Harry Dong, Megna Shah, Sean Donegan, Yuejie Chi
- Abstract summary: We describe a fast and simple self-supervised model for tensor RPCA using deep unfolding.
Our model eliminates the need for ground truth labels while maintaining competitive or even better performance.
We demonstrate these claims on a mix of synthetic data and real-world tasks.
- Score: 21.710932587432396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor robust principal component analysis (RPCA), which seeks to separate a
low-rank tensor from its sparse corruptions, has been crucial in data science
and machine learning where tensor structures are becoming more prevalent. While
powerful, existing tensor RPCA algorithms can be difficult to use in practice,
as their performance can be sensitive to the choice of additional
hyperparameters, which are not straightforward to tune. In this paper, we
describe a fast and simple self-supervised model for tensor RPCA using deep
unfolding by learning only four hyperparameters. Despite its simplicity, our
model eliminates the need for ground truth labels while maintaining competitive
or even better performance than supervised deep unfolding. Furthermore,
our model is capable of operating in extreme data-starved scenarios. We
demonstrate these claims on a mix of synthetic data and real-world tasks,
comparing performance against previously studied supervised deep unfolding
methods and Bayesian optimization baselines.
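To make the unfolding idea concrete, here is a minimal, hypothetical PyTorch sketch (matrix case for brevity; the paper works with tensors). An iterative RPCA solver is unrolled for a fixed number of steps, and only a couple of scalars, an initial threshold and its decay, are trainable, in the spirit of the paper's four learned hyperparameters; the self-supervised signal is plain reconstruction of the observation, so no ground-truth labels are needed. All names and structure here are illustrative assumptions, not the authors' code.
```python
import torch

def soft_threshold(x, tau):
    # Entrywise shrinkage: the proximal operator of the L1 norm.
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

def rank_r_projection(x, r):
    # Best rank-r approximation via a truncated SVD.
    U, s, Vh = torch.linalg.svd(x, full_matrices=False)
    return U[:, :r] @ torch.diag(s[:r]) @ Vh[:r, :]

class UnfoldedRPCA(torch.nn.Module):
    def __init__(self, n_iters=10, rank=5):
        super().__init__()
        self.n_iters, self.rank = n_iters, rank
        # Learned scalars (assumed): initial threshold and geometric decay.
        self.tau0 = torch.nn.Parameter(torch.tensor(0.5))
        self.decay = torch.nn.Parameter(torch.tensor(0.8))

    def forward(self, Y):
        L, tau = torch.zeros_like(Y), self.tau0
        for _ in range(self.n_iters):
            S = soft_threshold(Y - L, tau)           # sparse corruption estimate
            L = rank_r_projection(Y - S, self.rank)  # low-rank estimate
            tau = tau * self.decay                   # anneal the threshold
        return L, S

# Self-supervised training signal: reconstruct the observation itself.
Y = torch.randn(50, 50)
L, S = UnfoldedRPCA()(Y)
loss = torch.norm(Y - L - S)
```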
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., their margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness, and that, unlike existing data pruning strategies, it significantly improves model performance.
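As a rough illustration of the margin computation, the sketch below estimates each sample's distance to the decision boundary with a single DeepFool-style linearization step and then prunes by that score; the one-step approximation, the toy model, and the choice of which end of the ranking to drop are assumptions, not details from the paper.
```python
import torch

def approx_margin(model, x, y):
    # One DeepFool-style linearization step: |logit gap| / gradient norm
    # estimates the distance to the nearest decision boundary.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    dists = []
    for k in range(logits.shape[1]):
        if k == y:
            continue
        gap = logits[0, y] - logits[0, k]
        g, = torch.autograd.grad(gap, x, retain_graph=True)
        dists.append((gap.abs() / (g.norm() + 1e-12)).item())
    return min(dists)

# Toy usage: rank samples by estimated margin, then prune a fraction.
model = torch.nn.Linear(4, 3)
X, Y = torch.randn(20, 4), torch.randint(0, 3, (20,))
margins = [approx_margin(model, x.unsqueeze(0), int(y)) for x, y in zip(X, Y)]
order = sorted(range(len(X)), key=margins.__getitem__)
keep = order[int(0.2 * len(X)):]  # here: drop the 20% lowest-margin samples
```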
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
- Deep Knowledge Tracing is an implicit dynamic multidimensional item response theory model [25.894399244406287]
Deep knowledge tracing (DKT) is a competitive model for knowledge tracing relying on recurrent neural networks.
In this paper, we frame deep knowledge tracing as an encoder-decoder architecture.
We show that a simpler decoder, with possibly fewer parameters than the one used by DKT, can predict student performance better.
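A minimal sketch of such an encoder-decoder reading of knowledge tracing, assuming a GRU encoder that produces a student ability vector theta_t and an IRT-style decoder sigma(<theta_t, d_j> - b_j); the architecture details are illustrative, not the paper's exact model.
```python
import torch

class KTEncoderDecoder(torch.nn.Module):
    def __init__(self, n_items, dim=32):
        super().__init__()
        # Encoder: a recurrent net over one-hot (item, correctness) pairs.
        self.rnn = torch.nn.GRU(2 * n_items, dim, batch_first=True)
        # Decoder: multidimensional IRT with item discrimination and difficulty.
        self.discrimination = torch.nn.Embedding(n_items, dim)
        self.difficulty = torch.nn.Embedding(n_items, 1)

    def forward(self, history, next_items):
        theta, _ = self.rnn(history)                  # student state per step
        d = self.discrimination(next_items)           # (batch, T, dim)
        b = self.difficulty(next_items).squeeze(-1)   # (batch, T)
        return torch.sigmoid((theta * d).sum(-1) - b)

model = KTEncoderDecoder(n_items=100)
history = torch.zeros(2, 5, 200)                 # encoded interaction history
p_correct = model(history, torch.randint(0, 100, (2, 5)))  # (2, 5)
```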
arXiv Detail & Related papers (2023-08-18T09:32:49Z)
- Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks [12.293294756969477]
The strength of machine learning models stems from their ability to learn complex function approximations from data.
Complex models tend to memorize the training data, which results in poor generalization performance on test data.
We present a novel approach to regularize the models by leveraging the information-rich latent embeddings and their high intra-class correlation.
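The summary above does not spell out the mechanism, so the following is only a generic illustration of an embedding-space regularizer that exploits intra-class correlation by pulling each latent embedding toward its class centroid; it should not be read as the paper's actual method.
```python
import torch

def intra_class_regularizer(embeddings, labels):
    # Penalize within-class scatter in the latent space.
    loss = embeddings.new_zeros(())
    for c in labels.unique():
        z = embeddings[labels == c]
        loss = loss + ((z - z.mean(dim=0, keepdim=True)) ** 2).mean()
    return loss

# Added to the task loss with a small weight, e.g.:
z, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
reg = 0.1 * intra_class_regularizer(z, y)
```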
arXiv Detail & Related papers (2023-04-14T17:15:54Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between the self-supervised learning (SSL) and dynamic computation (DC) paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
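A hedged sketch of the dense/gated co-training idea: one encoder is run both densely and through learned channel gates, and both paths share a self-supervised view-alignment loss. The gating mechanism and the SSL objective here are assumptions for illustration.
```python
import torch

class GatedEncoder(torch.nn.Module):
    def __init__(self, d_in=32, dim=64):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(d_in, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim))
        self.gate_logits = torch.nn.Parameter(torch.zeros(dim))  # channel gates

    def forward(self, x, gated):
        h = self.body(x)
        # Soft gates during pre-training; they can be thresholded at inference
        # to obtain the efficient sub-network.
        return h * torch.sigmoid(self.gate_logits) if gated else h

def align(a, b):  # simple view-alignment SSL loss (an assumption)
    return 1 - torch.nn.functional.cosine_similarity(a, b).mean()

enc = GatedEncoder()
v1, v2 = torch.randn(8, 32), torch.randn(8, 32)   # stand-ins for two views
loss = align(enc(v1, gated=False), enc(v2, gated=False)) + \
       align(enc(v1, gated=True), enc(v2, gated=True))
```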
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
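For intuition, here is a matrix-case sketch of scaled gradient descent for RPCA (the paper's contribution is the tensor version with provable guarantees). Sparsification keeps the largest residual entries, and the (R^T R)^{-1}-style preconditioning makes the step size insensitive to the conditioning of the factors; the constants are illustrative.
```python
import torch

def sparsify(x, frac):
    # Keep the largest-magnitude fraction of entries, zero out the rest.
    k = max(1, int(frac * x.numel()))
    thresh = x.abs().flatten().topk(k).values.min()
    return torch.where(x.abs() >= thresh, x, torch.zeros_like(x))

def scaled_gd_rpca(Y, rank, alpha=0.1, eta=0.5, n_iters=50):
    # Spectral initialization after a first sparsification pass.
    U, s, Vh = torch.linalg.svd(Y - sparsify(Y, alpha), full_matrices=False)
    L = U[:, :rank] * s[:rank].sqrt()
    R = Vh[:rank, :].T * s[:rank].sqrt()
    for _ in range(n_iters):
        S = sparsify(Y - L @ R.T, alpha)   # refresh the sparse estimate
        G = L @ R.T + S - Y                # residual
        # "Scaled" (preconditioned) updates: the inverses remove the
        # dependence on the conditioning of the low-rank factors.
        L_next = L - eta * G @ R @ torch.linalg.inv(R.T @ R)
        R = R - eta * G.T @ L @ torch.linalg.inv(L.T @ L)
        L = L_next
    return L @ R.T, S
```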
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations [51.552870594221865]
We show that last layer retraining can match or outperform state-of-the-art approaches on spurious correlation benchmarks.
We also show that last layer retraining on large ImageNet-trained models can significantly reduce reliance on background and texture information.
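The recipe itself is simple enough to sketch directly: freeze the backbone and re-fit only the final linear head on a small held-out set (ideally group-balanced); the toy model and data below are placeholders.
```python
import torch

def retrain_last_layer(model, head, loader, epochs=10, lr=1e-3):
    for p in model.parameters():          # freeze everything...
        p.requires_grad_(False)
    head.reset_parameters()               # ...then re-fit the head alone
    for p in head.parameters():
        p.requires_grad_(True)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
reweight_set = [(torch.randn(8, 16), torch.randint(0, 3, (8,))) for _ in range(4)]
retrain_last_layer(model, model[-1], reweight_set)
```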
arXiv Detail & Related papers (2022-04-06T16:55:41Z)
- Efficient Tensor Robust PCA under Hybrid Model of Tucker and Tensor Train [33.33426557160802]
We propose an efficient tensor robust principal component analysis (TRPCA) method under a hybrid model of Tucker and tensor train (TT) decompositions.
Specifically, we show in theory that the TT nuclear norm (TTNN) of the original large tensor can be equivalently computed on a much smaller tensor obtained via a Tucker compression format.
Numerical experiments on both synthetic and real-world tensor data verify the superiority of the proposed model.
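The compression step can be sketched as a plain HOSVD: project each mode onto its leading left singular vectors, so that any subsequent TT computation runs on the small core instead of the full tensor. This shows only the compression; dimensions and ranks are illustrative.
```python
import torch

def mode_unfold(x, mode):
    return x.movedim(mode, 0).reshape(x.shape[mode], -1)

def tucker_compress(x, ranks):
    # HOSVD-style factors: leading left singular vectors of each unfolding.
    factors = [torch.linalg.svd(mode_unfold(x, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = x
    for U in factors:
        # Contract mode 0 with U and append the compressed mode at the end;
        # after all modes, the core has shape `ranks` in the original order.
        core = torch.tensordot(core, U, dims=([0], [0]))
    return core, factors

Y = torch.randn(40, 40, 40)
core, factors = tucker_compress(Y, (5, 5, 5))   # core is only 5 x 5 x 5
```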
arXiv Detail & Related papers (2021-12-20T01:15:45Z)
- Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection [23.687598836093333]
Robust principal component analysis (RPCA) is a critical tool in machine learning that detects outliers in the task of low-rank reconstruction.
In this paper, we propose a scalable and learnable approach for high-dimensional RPCA problems which we call LRPCA.
We show that LRPCA outperforms state-of-the-art RPCA algorithms, such as ScaledGD and AltProj, on both synthetic datasets and real-world applications.
arXiv Detail & Related papers (2021-10-11T23:37:55Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
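A minimal sketch of a rank-R layer in this spirit: each hidden unit's weight matrix is a sum of R rank-one terms a_r b_r^T (CP form), so a matrix input X is scored as sum_r a_r^T X b_r without being vectorized. Dimensions and initialization are illustrative.
```python
import torch

class RankRLayer(torch.nn.Module):
    def __init__(self, d1, d2, hidden, R):
        super().__init__()
        # CP factors: one set of (a_r, b_r) vectors per hidden unit.
        self.A = torch.nn.Parameter(0.1 * torch.randn(hidden, R, d1))
        self.B = torch.nn.Parameter(0.1 * torch.randn(hidden, R, d2))
        self.bias = torch.nn.Parameter(torch.zeros(hidden))

    def forward(self, X):                                  # X: (batch, d1, d2)
        AX = torch.einsum('hrd,bde->bhre', self.A, X)      # a_r^T X
        out = torch.einsum('bhre,hre->bh', AX, self.B)     # ... then (.) b_r
        return torch.relu(out + self.bias)

layer = RankRLayer(d1=8, d2=8, hidden=16, R=3)
features = layer(torch.randn(4, 8, 8))   # (4, 16), structure preserved
```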
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale, well-annotated datasets.
However, labeling large-scale data can be very costly and error-prone, making it difficult to guarantee annotation quality.
We propose Temporal Calibrated Regularization (TCR), which uses the original labels together with the model's predictions from the previous epoch.
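The core of the idea fits in a short loss function: blend the (possibly noisy) one-hot label with the model's softmax prediction stored from the previous epoch, then train against the blended target. The mixing weight and the soft cross-entropy form are assumptions for illustration.
```python
import torch

def tcr_loss(logits, labels, prev_probs, beta=0.7, n_classes=10):
    one_hot = torch.nn.functional.one_hot(labels, num_classes=n_classes).float()
    target = beta * one_hot + (1 - beta) * prev_probs   # calibrated target
    log_p = torch.log_softmax(logits, dim=-1)
    return -(target * log_p).sum(dim=-1).mean()         # soft cross-entropy

logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
prev = torch.softmax(torch.randn(8, 10), dim=-1)  # cached from last epoch
loss = tcr_loss(logits, labels, prev)
```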
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
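To see what a single truncated sweep looks like, here is a min-sum (max-product in the negative log domain) forward pass along a 1-D scanline, the kind of dynamic-programming step a BP-Layer unrolls and backpropagates through; the costs below are illustrative.
```python
import torch

def min_sum_sweep(unary, pairwise):
    # unary: (width, n_labels) data costs; pairwise: (n_labels, n_labels).
    beliefs = [unary[0]]
    for i in range(1, unary.shape[0]):
        # Best cost of reaching each label at pixel i from any previous label.
        msg = (beliefs[-1].unsqueeze(1) + pairwise).min(dim=0).values
        beliefs.append(unary[i] + msg)
    return torch.stack(beliefs)                # differentiable forward costs

labels = torch.arange(4)
pairwise = 0.5 * (labels[:, None] - labels[None, :]).abs().float()  # linear penalty
unary = torch.randn(16, 4)
map_labels = min_sum_sweep(unary, pairwise).argmin(dim=-1)
```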
arXiv Detail & Related papers (2020-03-13T13:11:35Z)