Evolving Losses for Unsupervised Video Representation Learning
- URL: http://arxiv.org/abs/2002.12177v1
- Date: Wed, 26 Feb 2020 16:56:07 GMT
- Title: Evolving Losses for Unsupervised Video Representation Learning
- Authors: AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo
- Abstract summary: We present a new method to learn video representations from large-scale unlabeled video data.
The proposed unsupervised representation learning yields a single RGB network that outperforms previous methods.
- Score: 91.2683362199263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new method to learn video representations from large-scale
unlabeled video data. Ideally, this representation will be generic and
transferable, directly usable for new tasks such as action recognition and
zero- or few-shot learning. We formulate unsupervised representation learning as a
multi-modal, multi-task learning problem, where the representations are shared
across different modalities via distillation. Further, we introduce the concept
of loss function evolution, using an evolutionary search algorithm to
automatically find an optimal combination of loss functions capturing many
(self-supervised) tasks and modalities. Third, we propose an unsupervised
representation evaluation metric using distribution matching to a large
unlabeled dataset as a prior constraint, based on Zipf's law. This unsupervised
constraint, which is not guided by any labeling, produces results similar to
weakly-supervised, task-specific ones. The proposed unsupervised representation
learning yields a single RGB network that outperforms previous methods.
Notably, it is also more effective than several label-based methods (e.g.,
ImageNet pretraining), except for those using large, fully labeled video datasets.
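To make the abstract's two key ideas concrete, here is a minimal sketch of (a) an evolutionary search over the weights that combine several self-supervised losses into one objective, and (b) an unsupervised fitness score measuring how closely the resulting cluster-size distribution matches a Zipf prior via KL divergence. All names (`zipf_kl_fitness`, `evolve_loss_weights`, `train_and_cluster`) and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def zipf_kl_fitness(cluster_sizes, s=1.0):
    """KL divergence between the empirical cluster-size distribution and
    a Zipf prior; lower is better (assumed fitness, per the abstract)."""
    sizes = np.sort(np.asarray(cluster_sizes, dtype=float))[::-1]
    p = sizes / sizes.sum()              # empirical distribution, rank-ordered
    ranks = np.arange(1, len(sizes) + 1)
    q = ranks ** (-s)
    q /= q.sum()                         # Zipf prior over ranks
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def evolve_loss_weights(num_losses, train_and_cluster, pop=16, gens=10, seed=0):
    """Toy evolutionary search over per-loss weights w, where the model is
    trained with L = sum_i w_i * L_i. `train_and_cluster(w)` is assumed to
    train briefly and return cluster sizes of the learned representation."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 1.0, size=(pop, num_losses))
    for _ in range(gens):
        fitness = np.array([zipf_kl_fitness(train_and_cluster(w))
                            for w in population])
        elite = population[np.argsort(fitness)[: pop // 4]]  # keep best 25%
        kids = elite[rng.integers(0, len(elite), size=pop - len(elite))]
        kids = np.clip(kids + rng.normal(0.0, 0.1, kids.shape), 0.0, 1.0)
        population = np.vstack([elite, kids])  # elite[0] is the current best
    return population[0]
```

The appeal of the Zipf-based fitness is that the search never needs labels: per the abstract, this unsupervised constraint tracks weakly-supervised, task-specific evaluations closely.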
Related papers
- Weakly Supervised Video Individual Counting [126.75545291243142]
Video Individual Counting aims to predict the number of unique individuals in a single video.
We introduce a weakly supervised VIC task, wherein trajectory labels are not provided.
In doing so, we devise an end-to-end trainable soft contrastive loss that drives the network to distinguish inflow, outflow, and the remaining individuals; a sketch of such a loss follows this entry.
arXiv Detail & Related papers (2023-12-10T16:12:13Z)
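As a concrete illustration of the soft contrastive idea in the entry above, here is a minimal sketch in which each pair of detections across two frames carries a soft target weight instead of a hard positive/negative label; `soft_contrastive_loss` and its arguments are hypothetical simplifications, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(feats_a, feats_b, soft_targets, temperature=0.1):
    """feats_a, feats_b: (N, D) embeddings of individuals in two frames.
    soft_targets: (N, N) soft match weights in [0, 1]; rows sum to 1."""
    feats_a = F.normalize(feats_a, dim=1)
    feats_b = F.normalize(feats_b, dim=1)
    logits = feats_a @ feats_b.t() / temperature  # pairwise similarities
    log_probs = F.log_softmax(logits, dim=1)
    # soft cross-entropy: pull pairs together in proportion to their weight
    return -(soft_targets * log_probs).sum(dim=1).mean()
```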
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, transfer performance lags significantly behind in all the studied settings; a sketch of the forward-forward update follows this entry.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
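For context on what the study above compares, here is a minimal sketch of one forward-forward layer update, following Hinton's general recipe: each layer is trained locally so that its "goodness" (sum of squared activations) is high on positive data and low on negative data, with no backward pass through the network. The layer, optimizer, and threshold `theta` are illustrative, and the studied paper's exact setup may differ:

```python
import torch
import torch.nn.functional as F

def ff_layer_step(layer, opt, x_pos, x_neg, theta=2.0):
    """One local update for a single nn.Linear layer with ReLU."""
    g_pos = torch.relu(layer(x_pos)).pow(2).sum(dim=1)  # goodness, positives
    g_neg = torch.relu(layer(x_neg)).pow(2).sum(dim=1)  # goodness, negatives
    # logistic loss pushing g_pos above theta and g_neg below theta
    loss = torch.log1p(torch.exp(-(g_pos - theta))).mean() \
         + torch.log1p(torch.exp(g_neg - theta)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # detach and normalize before feeding the next layer, as in FF
    with torch.no_grad():
        h_pos = torch.relu(layer(x_pos))
        h_neg = torch.relu(layer(x_neg))
    return F.normalize(h_pos, dim=1), F.normalize(h_neg, dim=1)
```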
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there is a trade-off between the two desiderata (universality and label efficiency), so one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data result in more diverse features for different tasks, they put less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- On minimal variations for unsupervised representation learning [19.055611167696238]
Unsupervised representation learning aims at describing raw data efficiently to solve various downstream tasks.
Revealing minimal variations as a guiding principle behind unsupervised representation learning paves the way to better practical guidelines for self-supervised learning algorithms.
arXiv Detail & Related papers (2022-11-07T18:57:20Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet top-1 accuracy using a ResNet50, which is even higher than SimCLRv2 with a ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Low-rank Dictionary Learning for Unsupervised Feature Selection [11.634317251468968]
We introduce a novel unsupervised feature selection approach by applying dictionary learning ideas in a low-rank representation.
A unified objective function for unsupervised feature selection is proposed, with sparsity enforced through $\ell_{2,1}$-norm regularization; a sketch of such an objective follows this entry.
Our experimental findings reveal that the proposed method outperforms state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-21T13:39:10Z)
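For readers unfamiliar with the notation, here is a generic form that such low-rank, row-sparse objectives often take; this is a sketch consistent with the summary above, not necessarily the paper's exact formulation:

```latex
% generic low-rank dictionary-learning objective with l_{2,1} sparsity
\min_{D,\,V}\ \|X - DV\|_F^2
\;+\; \alpha\,\operatorname{rank}(D)
\;+\; \beta\,\|V\|_{2,1},
\qquad
\|V\|_{2,1} \;=\; \sum_i \Big(\sum_j V_{ij}^2\Big)^{1/2}
```

Here $X$ is the data matrix, $D$ a low-rank dictionary, and $V$ the coefficient matrix; the $\ell_{2,1}$ norm zeroes out entire rows of the regularized matrix, and the surviving row norms provide the feature scores. Which matrix carries the regularizer, and how the rank term is relaxed (e.g., via the nuclear norm), varies by formulation.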
- Large-Scale Unsupervised Person Re-Identification with Contrastive Learning [17.04597303816259]
Most existing unsupervised and domain-adaptation ReID methods use only public datasets in their experiments.
Inspired by the recent progress of large-scale self-supervised image classification using contrastive learning, we propose to learn ReID representation from large-scale unlabeled surveillance video alone.
arXiv Detail & Related papers (2021-05-17T14:55:08Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data; a sketch of the swapped-assignment loss follows this entry.
arXiv Detail & Related papers (2020-06-17T14:00:42Z)
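To make the cluster-assignment mechanism concrete, here is a minimal sketch of SwAV's swapped prediction with a Sinkhorn-balanced online assignment to learnable prototypes: one view's assignment becomes the target for the other view, so no pairwise feature comparisons are needed. Shapes, hyperparameters, and the `prototypes` tensor are simplifications of the published method:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, n_iters=3, eps=0.05):
    """Balanced soft assignments Q from prototype scores of shape (B, K)."""
    q = torch.exp(scores / eps).t()  # (K, B)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= K  # normalize rows
        q /= q.sum(dim=0, keepdim=True); q /= B  # normalize columns
    return (q * B).t()               # (B, K), each row a distribution

def swav_loss(z1, z2, prototypes, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views; prototypes: (K, D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ c.t(), z2 @ c.t()   # scores against prototypes
    q1, q2 = sinkhorn(s1), sinkhorn(s2)  # targets, computed without gradient
    p1 = F.log_softmax(s1 / temperature, dim=1)
    p2 = F.log_softmax(s2 / temperature, dim=1)
    # swapped prediction: view 1 predicts view 2's assignment and vice versa
    return -0.5 * ((q2 * p1).sum(dim=1).mean() + (q1 * p2).sum(dim=1).mean())
```

The Sinkhorn normalization is what enforces balanced cluster use online, which is why the method works with both large and small batches.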
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task; a sketch of such a gradient-supervision objective follows this entry.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
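A minimal sketch of how such an auxiliary objective can be implemented: align the input gradient at x with the direction toward its counterfactual x', i.e., the minimal change that flips the label. The function name, the choice of gradient (the counterfactual class score), and the fixed loss weighting are assumptions, not the authors' exact objective:

```python
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf, y, y_cf):
    """x, x_cf: (B, D) counterfactual input pairs; y, y_cf: their labels."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # input gradient of the counterfactual class score
    score_cf = logits.gather(1, y_cf.unsqueeze(1)).sum()
    grad = torch.autograd.grad(score_cf, x, create_graph=True)[0]
    direction = (x_cf - x).detach()         # minimal label-flipping change
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=1)
    return task_loss + (1.0 - cos).mean()   # reward aligned input gradients
```

In practice the alignment term would be weighted by a tuned hyperparameter; it is fixed to 1 here for brevity.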