Where are we in the search for an Artificial Visual Cortex for Embodied
Intelligence?
- URL: http://arxiv.org/abs/2303.18240v2
- Date: Thu, 1 Feb 2024 19:42:05 GMT
- Title: Where are we in the search for an Artificial Visual Cortex for Embodied
Intelligence?
- Authors: Arjun Majumdar and Karmesh Yadav and Sergio Arnaud and Yecheng Jason
Ma and Claire Chen and Sneha Silwal and Aryan Jain and Vincent-Pierre Berges
and Pieter Abbeel and Jitendra Malik and Dhruv Batra and Yixin Lin and
Oleksandr Maksymets and Aravind Rajeswaran and Franziska Meier
- Abstract summary: We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual 'foundation models' for Embodied AI.
To study the effect of pre-training data size and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources.
Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate them.
- Score: 106.81451807227103
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present the largest and most comprehensive empirical study of pre-trained
visual representations (PVRs) or visual 'foundation models' for Embodied AI.
First, we curate CortexBench, consisting of 17 different tasks spanning
locomotion, navigation, dexterous, and mobile manipulation. Next, we
systematically evaluate existing PVRs and find that none are universally
dominant. To study the effect of pre-training data size and diversity, we
combine over 4,000 hours of egocentric videos from 7 different sources (over
4.3M images) and ImageNet to train different-sized vision transformers using
Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from
prior work, we find that scaling dataset size and diversity does not improve
performance universally (but does so on average). Our largest model, named
VC-1, outperforms all prior PVRs on average but does not universally dominate
either. Next, we show that task- or domain-specific adaptation of VC-1 leads to
substantial gains, with VC-1 (adapted) achieving performance that is competitive
with or superior to the best known results on all of the benchmarks in
CortexBench. Finally, we present real-world hardware experiments, in which VC-1
and VC-1 (adapted) outperform the strongest pre-existing PVR. Overall, this
paper presents no new techniques but a rigorous systematic evaluation, a broad
set of findings about PVRs (that in some cases, refute those made in narrow
domains in prior work), and open-sourced code and models (that required over
10,000 GPU-hours to train) for the benefit of the research community.
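To make the evaluation protocol described in the abstract concrete, here is a minimal sketch of the frozen-PVR setup: a pre-trained vision transformer is frozen and only a small policy head is trained on top of its features. A generic torchvision ViT-B/16 stands in for VC-1, and the policy head, action dimension, and behavior-cloning step are illustrative assumptions rather than the paper's exact CortexBench configuration.

```python
# Minimal sketch: frozen pre-trained visual representation (PVR) + small policy head.
# Assumptions: a torchvision ViT-B/16 stands in for VC-1; the head, action
# dimension, and training step are illustrative, not the paper's protocol.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# 1) Load a pre-trained visual encoder and freeze it.
encoder = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
encoder.heads = nn.Identity()          # drop the classification head, keep features
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# 2) Train only a small policy head on top of the frozen features.
class PolicyHead(nn.Module):
    def __init__(self, feat_dim=768, act_dim=7):   # act_dim is task-specific
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )
    def forward(self, feats):
        return self.net(feats)

policy = PolicyHead()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# 3) One behavior-cloning step on a dummy batch of (image, expert action) pairs.
images = torch.randn(8, 3, 224, 224)     # stand-in for task observations
expert_actions = torch.randn(8, 7)       # stand-in for expert demonstrations
with torch.no_grad():
    feats = encoder(images)              # frozen PVR features, shape (8, 768)
loss = nn.functional.mse_loss(policy(feats), expert_actions)
opt.zero_grad()
loss.backward()
opt.step()
```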
Related papers
- Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation [25.09113607683987] (arXiv, 2023-12-20)
We introduce GR-1, a GPT-style model designed for multi-task language-conditioned visual robot manipulation.
GR-1 takes as input a language instruction, a sequence of observation images, and a sequence of robot states.
It predicts robot actions as well as future images in an end-to-end manner.
- SeiT++: Masked Token Modeling Improves Storage-efficient Training [36.95646819348317] (arXiv, 2023-12-15)
Recent advancements in Deep Neural Network (DNN) models have significantly improved performance across computer vision tasks.
However, achieving highly generalizable, high-performing vision models requires expansive datasets, resulting in significant storage requirements.
A recent breakthrough, SeiT, proposed the use of Vector-Quantized (VQ) feature vectors (i.e., tokens) as network inputs for vision classification.
In this paper, we extend SeiT by integrating Masked Token Modeling (MTM) for self-supervised pre-training (a minimal masking sketch appears after this list).
- Early Action Recognition with Action Prototypes [62.826125870298306] (arXiv, 2023-12-11)
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, and a visual encoder extracts features from each clip independently.
A decoder then aggregates the features from all clips in an online fashion for the final class prediction.
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857] (arXiv, 2023-11-03)
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
- What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? [48.75469525877328] (arXiv, 2023-10-03)
We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks.
Among our insights: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, and 2) the use of PVRs enables a first-of-its-kind result with indoor ImageNav.
- Revisiting Classifier: Transferring Vision-Language Models for Video Recognition [102.93524173258487] (arXiv, 2022-07-04)
Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize a well-pretrained language model to generate good semantic targets for efficient transfer learning.
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation [55.485985317538194] (arXiv, 2022-06-14)
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
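Both VC-1's MAE pre-training (described in the abstract above) and SeiT++'s Masked Token Modeling are built around reconstructing randomly masked tokens. The sketch below shows only the random-masking step; the mask ratio, token-grid size, and feature width are illustrative assumptions, not settings taken from either paper.

```python
# Minimal sketch of MAE-style random masking over patch tokens.
# Assumptions: mask ratio, grid size, and feature width are illustrative only.
import torch

def random_masking(patch_tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens; return kept tokens and the binary mask."""
    B, N, D = patch_tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                          # one random score per patch
    keep_idx = noise.argsort(dim=1)[:, :n_keep]       # patches with lowest scores are kept
    kept = torch.gather(patch_tokens, 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, keep_idx, 0.0)                   # 0 = visible, 1 = masked
    return kept, mask

# Toy usage: 196 patch tokens (14x14 grid) of width 768, as in a ViT-B/16.
tokens = torch.randn(4, 196, 768)
visible, mask = random_masking(tokens)
print(visible.shape, mask.sum(dim=1))  # ~49 visible tokens per image, 147 masked
```

In an MAE- or MTM-style trainer, the encoder would process only the visible tokens and a lightweight decoder would reconstruct the masked ones, with the reconstruction loss applied where the mask equals 1.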