MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised
Learning of Motion and Content Features
- URL: http://arxiv.org/abs/2307.12698v1
- Date: Mon, 24 Jul 2023 11:27:14 GMT
- Title: MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised
Learning of Motion and Content Features
- Authors: Adrien Bardes, Jean Ponce, Yann LeCun
- Abstract summary: We introduce MC-JEPA, a joint-embedding predictive architecture and self-supervised learning approach to jointly learn optical flow and content features within a shared encoder.
The proposed approach performs on par with existing unsupervised methods on standard optical flow benchmarks.
- Score: 34.92750644059916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning of visual representations has been focusing on
learning content features, which do not capture object motion or location, and
focus on identifying and differentiating objects in images and videos. On the
other hand, optical flow estimation is a task that does not involve
understanding the content of the images on which it is estimated. We unify the
two approaches and introduce MC-JEPA, a joint-embedding predictive architecture
and self-supervised learning approach to jointly learn optical flow and content
features within a shared encoder, demonstrating that the two associated
objectives, the optical flow estimation objective and the self-supervised
learning objective, benefit from each other and thus learn content features
that incorporate motion information. The proposed approach performs on par
with existing unsupervised methods on optical flow benchmarks, as well as with
common self-supervised learning approaches on downstream tasks such as semantic
segmentation of images and videos.
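The abstract describes a multi-task setup in which a single shared encoder is trained with two objectives at once: an optical flow estimation loss on pairs of video frames and a self-supervised content loss on augmented views of an image. Below is a minimal, hypothetical PyTorch sketch of that idea; SharedEncoder, FlowHead, warp, and content_loss are illustrative stand-ins chosen for the sketch, not the architecture or losses used in the paper.

```python
# Hypothetical sketch of MC-JEPA-style joint training: one shared encoder
# feeds both an optical-flow objective and a self-supervised content objective.
# All module names and loss choices below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Toy convolutional encoder shared by both objectives (assumption)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)  # (B, dim, H/4, W/4) feature map


class FlowHead(nn.Module):
    """Predicts a 2-channel flow field from concatenated frame features."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Conv2d(2 * dim, 2, 3, padding=1)

    def forward(self, f1, f2):
        return self.net(torch.cat([f1, f2], dim=1))


def warp(feat, flow):
    """Backward-warp a feature map with a (B, 2, H, W) flow field."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device),
        torch.arange(w, device=feat.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0) + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack([grid_x, grid_y], dim=-1),
                         align_corners=True)


def content_loss(z1, z2):
    """Simple invariance term between pooled embeddings of two views
    (a stand-in for the paper's self-supervised content objective)."""
    p1 = F.normalize(z1.mean(dim=(2, 3)), dim=1)
    p2 = F.normalize(z2.mean(dim=(2, 3)), dim=1)
    return (2 - 2 * (p1 * p2).sum(dim=1)).mean()


encoder, flow_head = SharedEncoder(), FlowHead()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(flow_head.parameters()), lr=1e-4)

# Dummy batch: two consecutive video frames and two augmented views of an image.
frame1, frame2 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
view1, view2 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)

f1, f2 = encoder(frame1), encoder(frame2)
flow = flow_head(f1, f2)
flow_loss = F.l1_loss(warp(f2, flow), f1)        # feature-space consistency under the flow
ssl_loss = content_loss(encoder(view1), encoder(view2))

loss = flow_loss + ssl_loss                      # joint multi-task objective
opt.zero_grad(); loss.backward(); opt.step()
```

Because both losses backpropagate through the same encoder weights, gradients from the flow task shape the content features and vice versa, which is the interaction between the two objectives that the abstract refers to.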
Related papers
- PooDLe: Pooled and dense self-supervised learning from naturalistic videos [32.656425302538835]
We propose a novel approach that combines an invariance-based SSL objective on pooled representations with a dense SSL objective.
We validate our approach on the BDD100K driving video dataset and the Walking Tours first-person video dataset.
arXiv Detail & Related papers (2024-08-20T21:40:48Z) - LOCATE: Self-supervised Object Discovery via Flow-guided Graph-cut and
Bootstrapped Self-training [13.985488693082981]
We propose a self-supervised object discovery approach that leverages motion and appearance information to produce high-quality object segmentation masks.
We demonstrate the effectiveness of our approach, named LOCATE, on multiple standard video object segmentation, image saliency detection, and object segmentation benchmarks.
arXiv Detail & Related papers (2023-08-22T07:27:09Z) - Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We apply Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z) - Flow-guided Semi-supervised Video Object Segmentation [14.357395825753827]
We propose an optical flow-guided approach for semi-supervised video object segmentation.
We further propose a model to extract the combined information from optical flow and the image.
Experiments on DAVIS 2017 and YouTube-VOS 2019 show that integrating the information extracted from optical flow into the original image branch results in a strong performance gain.
arXiv Detail & Related papers (2023-01-25T10:02:31Z) - USegScene: Unsupervised Learning of Depth, Optical Flow and Ego-Motion
with Semantic Guidance and Coupled Networks [31.600708674008384]
USegScene is a framework for semantically guided unsupervised learning of depth, optical flow and ego-motion estimation for stereo camera images.
We present results on the popular KITTI dataset and show that our approach outperforms other methods by a large margin.
arXiv Detail & Related papers (2022-07-15T13:25:47Z) - Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z) - Self-Supervised Video Representation Learning with Motion-Contrastive
Perception [13.860736711747284]
We propose the Motion-Contrastive Perception Network (MCPNet).
MCPNet consists of two branches, namely, Motion Information Perception (MIP) and Contrastive Instance Perception (CIP)
Our method outperforms current state-of-the-art visual-only self-supervised approaches.
arXiv Detail & Related papers (2022-04-10T05:34:46Z) - Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representation consecutively to produce a representation that emphasizes the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.