Scalable Pre-training of Large Autoregressive Image Models
- URL: http://arxiv.org/abs/2401.08541v1
- Date: Tue, 16 Jan 2024 18:03:37 GMT
- Title: Scalable Pre-training of Large Autoregressive Image Models
- Authors: Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel
Bautista, Alexander Toshev, Vaishaal Shankar, Joshua M Susskind, Armand
Joulin
- Abstract summary: This paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective.
We highlight two key findings: (1) the performance of the visual features scales with both the model capacity and the quantity of data, and (2) the value of the objective function correlates with the performance of the model on downstream tasks.
- Score: 65.824197847617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces AIM, a collection of vision models pre-trained with an
autoregressive objective. These models are inspired by their textual
counterparts, i.e., Large Language Models (LLMs), and exhibit similar scaling
properties. Specifically, we highlight two key findings: (1) the performance of
the visual features scale with both the model capacity and the quantity of
data, (2) the value of the objective function correlates with the performance
of the model on downstream tasks. We illustrate the practical implication of
these findings by pre-training a 7 billion parameter AIM on 2 billion images,
that achieves 84.0% on ImageNet-1k with a frozen trunk. Interestingly, even at
this scale, we observe no sign of saturation in performance, suggesting that
AIM potentially represents a new frontier for training large-scale vision
models. The pre-training of AIM is similar to the pre-training of LLMs, and
does not require any image-specific strategy to stabilize the training at
scale.
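For a concrete picture of the objective described above, here is a minimal sketch of generic autoregressive image modeling: an image is split into a raster-ordered sequence of patches, and a causally masked transformer regresses each next patch from the preceding ones. This is an illustrative PyTorch toy, not AIM's actual recipe; the module names, dimensions, plain MSE pixel loss, and raster ordering are simplifying assumptions, and details such as AIM's prefix attention and head design are not reproduced.

```python
# Toy autoregressive image modeling: predict the next raster-ordered patch's
# raw pixels from all preceding patches with a causally masked transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyAutoregressiveImageModel(nn.Module):
    def __init__(self, image_size=64, patch_size=8, dim=256, depth=4, heads=8):
        super().__init__()
        self.patchify = nn.Unfold(kernel_size=patch_size, stride=patch_size)  # (B, 3*p*p, N)
        patch_dim = 3 * patch_size * patch_size
        num_patches = (image_size // patch_size) ** 2
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_dim)  # regress raw pixels of the next patch

    def forward(self, images):
        # images: (B, 3, H, W) -> raster-ordered patch sequence (B, N, 3*p*p)
        patches = self.patchify(images).transpose(1, 2)
        x = self.embed(patches) + self.pos
        n = x.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf"), device=x.device), diagonal=1)
        h = self.trunk(x, mask=causal)            # each position attends only to earlier patches
        pred = self.head(h[:, :-1])               # predict patch t+1 from the prefix up to t
        target = patches[:, 1:]
        return F.mse_loss(pred, target)           # simple pixel-regression loss


# Usage on a random batch; in practice this objective is applied at far larger data and model scale.
model = ToyAutoregressiveImageModel()
loss = model(torch.randn(2, 3, 64, 64))
loss.backward()
```

Because the loss is a per-patch regression under a standard causal mask, the training loop is essentially the same as for an LLM, which is consistent with the abstract's claim that no image-specific stabilization is needed at scale.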
Related papers
- Structuring a Training Strategy to Robustify Perception Models with Realistic Image Augmentations [1.5723316845301678]
This report introduces a methodology for training with realistic image augmentations to enhance model robustness and performance under challenging conditions.
We present a comprehensive framework that includes identifying weak spots in Machine Learning models, selecting suitable augmentations, and devising effective training strategies.
Experimental results demonstrate improvements in model performance, as measured by commonly used metrics such as mean Average Precision (mAP) and mean Intersection over Union (mIoU) on open-source object detection and semantic segmentation models and datasets.
arXiv Detail & Related papers (2024-08-30T14:15:48Z)
- DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception [66.88792390480343]
We propose DEEM, a simple but effective approach that utilizes the generative feedback of diffusion models to align the semantic distributions of the image encoder.
DEEM exhibits enhanced robustness and a superior capacity to alleviate model hallucinations while utilizing fewer trainable parameters, less pre-training data, and a smaller base model size.
arXiv Detail & Related papers (2024-05-24T05:46:04Z)
- Data-efficient Large Vision Models through Sequential Autoregression [58.26179273091461]
We develop an efficient, autoregression-based vision model on a limited dataset.
We demonstrate how this model achieves proficiency in a spectrum of visual tasks spanning both high-level and low-level semantic understanding.
Our empirical evaluations underscore the model's agility in adapting to various tasks while requiring a markedly smaller parameter footprint.
arXiv Detail & Related papers (2024-02-07T13:41:53Z)
- Delving Deeper into Data Scaling in Masked Image Modeling [145.36501330782357]
We conduct an empirical study on the scaling capability of masked image modeling (MIM) methods for visual recognition.
Specifically, we utilize the web-collected Coyo-700M dataset.
Our goal is to investigate how the performance changes on downstream tasks when scaling with different sizes of data and models.
arXiv Detail & Related papers (2023-05-24T15:33:46Z)
- The effectiveness of MAE pre-pretraining for billion-scale pretraining [65.98338857597935]
We introduce a simple additional pre-pretraining stage that uses the self-supervised MAE technique to initialize the model.
We measure the effectiveness of pre-pretraining on 10 different visual recognition tasks spanning image classification, video recognition, object detection, low-shot classification and zero-shot recognition.
arXiv Detail & Related papers (2023-03-23T17:56:12Z)
- Advancing Plain Vision Transformer Towards Remote Sensing Foundation Model [97.9548609175831]
We build on plain vision transformers with about 100 million parameters and make the first attempt at large vision models customized for remote sensing tasks.
Specifically, to handle the large image size and objects of various orientations in RS images, we propose a new rotated varied-size window attention.
Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16% mAP on the DOTA-V1.0 dataset.
arXiv Detail & Related papers (2022-08-08T09:08:40Z)
- On Data Scaling in Masked Image Modeling [36.00347416479826]
It has been suspected that masked image modeling (MIM) cannot benefit from larger data.
The study covers data scales ranging from 10% of ImageNet-1K to the full ImageNet-22K, model sizes ranging from 49 million to 1 billion parameters, and training lengths ranging from 125K to 500K iterations.
The validation loss in pre-training is found to be a good indicator of how well the model performs when fine-tuned on multiple tasks.
arXiv Detail & Related papers (2022-06-09T17:58:24Z)
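Several of the entries above (the data-scaling studies and MAE pre-pretraining) concern masked image modeling rather than the autoregressive objective. For contrast, below is a generic masked-image-modeling sketch in the spirit of MAE/SimMIM, not the exact recipe of any listed paper: a random subset of patches is replaced by a learned mask token and the loss is computed only on the reconstructed pixels of the masked patches. All names and hyperparameters are illustrative.

```python
# Generic masked-image-modeling (MIM) objective, shown for contrast with the
# autoregressive sketch above; module choices and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMaskedImageModel(nn.Module):
    def __init__(self, image_size=64, patch_size=8, dim=256, depth=4, heads=8, mask_ratio=0.6):
        super().__init__()
        self.patchify = nn.Unfold(kernel_size=patch_size, stride=patch_size)
        patch_dim = 3 * patch_size * patch_size
        num_patches = (image_size // patch_size) ** 2
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_dim)  # reconstruct raw pixels of masked patches

    def forward(self, images):
        patches = self.patchify(images).transpose(1, 2)                    # (B, N, 3*p*p)
        b, n, _ = patches.shape
        masked = torch.rand(b, n, device=images.device) < self.mask_ratio  # (B, N) bool
        x = self.embed(patches)
        x = torch.where(masked.unsqueeze(-1), self.mask_token.expand(b, n, -1), x) + self.pos
        pred = self.head(self.encoder(x))                                  # bidirectional attention
        return F.mse_loss(pred[masked], patches[masked])                   # loss on masked patches only


loss = ToyMaskedImageModel()(torch.randn(2, 3, 64, 64))
loss.backward()
```

The key contrast with the autoregressive sketch above is bidirectional attention plus a random mask, instead of a causal mask over a fixed raster order.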
This list is automatically generated from the titles and abstracts of the papers on this site.