Watching the World Go By: Representation Learning from Unlabeled Videos
- URL: http://arxiv.org/abs/2003.07990v2
- Date: Thu, 7 May 2020 17:23:14 GMT
- Title: Watching the World Go By: Representation Learning from Unlabeled Videos
- Authors: Daniel Gordon, Kiana Ehsani, Dieter Fox, Ali Farhadi
- Abstract summary: Recent single image unsupervised representation learning techniques show remarkable success on a variety of tasks.
In this paper, we argue that videos offer this natural augmentation for free.
We propose Video Noise Contrastive Estimation, a method for using unlabeled video to learn strong, transferable single image representations.
- Score: 78.22211989028585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent single image unsupervised representation learning techniques show
remarkable success on a variety of tasks. The basic principle in these works is
instance discrimination: learning to differentiate between two augmented
versions of the same image and a large batch of unrelated images. Networks
learn to ignore the augmentation noise and extract semantically meaningful
representations. Prior work uses artificial data augmentation techniques such
as cropping, and color jitter which can only affect the image in superficial
ways and are not aligned with how objects actually change e.g. occlusion,
deformation, viewpoint change. In this paper, we argue that videos offer this
natural augmentation for free. Videos can provide entirely new views of
objects, show deformation, and even connect semantically similar but visually
distinct concepts. We propose Video Noise Contrastive Estimation, a method for
using unlabeled video to learn strong, transferable single image
representations. We demonstrate improvements over recent unsupervised single
image techniques, as well as over fully supervised ImageNet pretraining, across
a variety of temporal and non-temporal tasks. Code and the Random Related Video
Views dataset are available at https://www.github.com/danielgordon10/vince
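The instance-discrimination principle described in the abstract is typically trained with an InfoNCE-style noise contrastive loss: pull two views of the same instance together and push a large batch of unrelated images away. Below is a minimal NumPy sketch of that loss for a single anchor; it illustrates the general objective, not the authors' exact VINCE implementation, and all function names and hyperparameters (e.g. the temperature) are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style instance-discrimination loss for one anchor.

    anchor, positive: L2-normalized embeddings of two views of the
        same instance, shape (D,). In VNCE the views would come from
        different frames of the same video rather than augmentations.
    negatives: L2-normalized embeddings of unrelated images, shape (N, D).
    """
    pos_sim = np.dot(anchor, positive) / temperature
    neg_sims = negatives @ anchor / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Cross-entropy with the positive as the target class (index 0),
    # computed with a numerically stable log-sum-exp.
    m = logits.max()
    log_prob_pos = pos_sim - (m + np.log(np.exp(logits - m).sum()))
    return -log_prob_pos

# Toy data: a positive close to the anchor, a positive far from it,
# and a batch of random unrelated negatives.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
anchor = unit(rng.normal(size=128))
close = unit(anchor + 0.1 * rng.normal(size=128))
far = unit(rng.normal(size=128))
negs = np.stack([unit(rng.normal(size=128)) for _ in range(64)])
```

A well-aligned positive (`close`) should yield a lower loss than an unrelated one (`far`), which is what drives the network to ignore nuisance variation between views.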
Related papers
- Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations [79.87044240860466]
We propose a novel approach that incorporates temporal consistency in dense self-supervised learning.
Our approach, which we call time-tuning, starts from image-pretrained models and fine-tunes them with a novel self-supervised temporal-alignment clustering loss on unlabeled videos.
Time-tuning improves the state-of-the-art by 8-10% for unsupervised semantic segmentation on videos and matches it for images.
arXiv Detail & Related papers (2023-08-22T21:28:58Z)
- Guess What Moves: Unsupervised Video and Image Segmentation by Anticipating Motion [92.80981308407098]
We propose an approach that combines the strengths of motion-based and appearance-based segmentation.
We propose to supervise an image segmentation network, tasking it with predicting regions that are likely to contain simple motion patterns.
In the unsupervised video segmentation mode, the network is trained on a collection of unlabelled videos, using the learning process itself as an algorithm to segment these videos.
arXiv Detail & Related papers (2022-05-16T17:55:34Z)
- JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting [53.28477676794658]
Unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks.
We introduce JOKR - a JOint Keypoint Representation that handles both the source and target videos, without requiring any object prior or data collection.
We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans.
arXiv Detail & Related papers (2021-06-17T17:32:32Z)
- Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency [13.19476138523546]
Cross-video relations have barely been explored for visual representation learning.
We propose a novel contrastive learning method which explores the cross-video relation by using cycle-consistency for general image representation learning.
We show significant improvement over state-of-the-art contrastive learning methods.
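The cycle-consistency idea in this line of work can be sketched as a nearest-neighbor round trip between two videos: a frame of video A is matched to its nearest frame in video B, and that frame is matched back to A; the pair is consistent if the round trip returns to the start. This is an illustrative sketch under that assumption, not the paper's implementation, and the function name is hypothetical.

```python
import numpy as np

def is_cycle_consistent(feats_a, feats_b, i):
    """Cross-video cycle-consistency check for frame i of video A.

    feats_a, feats_b: L2-normalized frame embeddings, shapes (Ta, D) and
    (Tb, D). Frame i is cycle-consistent if its nearest neighbor in B
    (by cosine similarity) maps back to frame i of A.
    """
    j = int(np.argmax(feats_b @ feats_a[i]))  # A_i -> nearest frame of B
    k = int(np.argmax(feats_a @ feats_b[j]))  # B_j -> nearest frame of A
    return k == i

# Toy example: with identical embeddings, every frame maps back to itself.
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 32))
frames = frames / np.linalg.norm(frames, axis=1, keepdims=True)
```

In a contrastive setup, consistent round trips would provide positive pairs across videos, extending instance discrimination beyond views of a single image.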
arXiv Detail & Related papers (2021-05-13T17:59:11Z)
- Self-Supervised Representation Learning from Flow Equivariance [97.13056332559526]
We present a new self-supervised learning representation framework that can be directly deployed on a video stream of complex scenes.
Our representations, learned from high-resolution raw video, can be readily used for downstream tasks on static images.
arXiv Detail & Related papers (2021-01-16T23:44:09Z)
- Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases [34.02639091680309]
Recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class.
We demonstrate that approaches like MoCo and PIRL learn occlusion-invariant representations.
We also demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like ImageNet.
arXiv Detail & Related papers (2020-07-28T00:11:31Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.