Next Patch Prediction for Autoregressive Visual Generation
- URL: http://arxiv.org/abs/2412.15321v3
- Date: Wed, 19 Mar 2025 06:16:54 GMT
- Title: Next Patch Prediction for Autoregressive Visual Generation
- Authors: Yatian Pang, Peng Jin, Shuo Yang, Bin Lin, Bin Zhu, Zhenyu Tang, Liuhan Chen, Francis E. H. Tay, Ser-Nam Lim, Harry Yang, Li Yuan
- Abstract summary: We extend the Next Token Prediction (NTP) paradigm to a novel Next Patch Prediction (NPP) paradigm. Our key idea is to group and aggregate image tokens into patch tokens with higher information density. We show that NPP can reduce the training cost to around 0.6 times while improving image generation quality by up to 1.0 FID score on the ImageNet 256x256 generation benchmark.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoregressive models, built on the Next Token Prediction (NTP) paradigm, show great potential in developing a unified framework that integrates both language and vision tasks. Pioneering works have introduced NTP to autoregressive visual generation tasks. In this work, we rethink NTP for autoregressive image generation and extend it to a novel Next Patch Prediction (NPP) paradigm. Our key idea is to group and aggregate image tokens into patch tokens with higher information density. By using patch tokens as a more compact input sequence, the autoregressive model is trained to predict the next patch, significantly reducing computational costs. To further exploit the natural hierarchical structure of image data, we propose a multi-scale coarse-to-fine patch grouping strategy. With this strategy, the training process begins with a large patch size and ends with vanilla NTP, where the patch size is 1$\times$1, thus preserving the original inference process without modification. Extensive experiments across a diverse range of model sizes demonstrate that NPP can reduce the training cost to around 0.6 times the baseline while improving image generation quality by up to 1.0 FID score on the ImageNet 256x256 generation benchmark. Notably, our method retains the original autoregressive model architecture without introducing additional trainable parameters or a custom-designed image tokenizer, offering a flexible and plug-and-play solution for enhancing autoregressive visual generation.
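As a rough illustration of the patch-grouping idea, the sketch below aggregates a token grid into patch tokens by average pooling and walks a coarse-to-fine patch-size schedule. The pooling choice, sizes, and schedule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def group_tokens_into_patches(tokens: torch.Tensor, grid: int, patch: int) -> torch.Tensor:
    """Aggregate a (B, grid*grid, D) token sequence into patch tokens by
    average-pooling each patch x patch neighborhood (illustrative choice)."""
    b, n, d = tokens.shape
    assert n == grid * grid and grid % patch == 0
    x = tokens.transpose(1, 2).reshape(b, d, grid, grid)  # to a 2D token map
    x = F.avg_pool2d(x, kernel_size=patch)                # (B, D, grid/p, grid/p)
    return x.flatten(2).transpose(1, 2)                   # back to a sequence

# Coarse-to-fine schedule: train with large patches first, end at 1x1 (vanilla NTP).
tokens = torch.randn(2, 16 * 16, 768)       # e.g. a 16x16 grid of tokenizer outputs
for patch_size in (4, 2, 1):                # hypothetical schedule
    seq = group_tokens_into_patches(tokens, grid=16, patch=patch_size)
    print(patch_size, seq.shape)            # sequence shortens by patch_size**2
```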
Related papers
- Token-Shuffle: Towards High-Resolution Image Generation with Autoregressive Models [92.18057318458528]
Token-Shuffle is a novel method that reduces the number of image tokens in the Transformer.
Our strategy requires no additional pretrained text encoder and enables MLLMs to support extremely high-resolution image synthesis.
On the GenAI benchmark, our 2.7B model achieves a 0.77 overall score on hard prompts, outperforming the AR model LlamaGen by 0.18 and the diffusion model LDM by 0.15.
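A minimal sketch of a token-shuffle-style merge, assuming (as in pixel-shuffle) that an s x s neighborhood of visual tokens is folded into one token along the channel dimension and unfolded afterwards; s=2 and all sizes are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def token_shuffle(tokens, grid, s=2):
    b, n, d = tokens.shape
    x = tokens.transpose(1, 2).reshape(b, d, grid, grid)
    x = F.pixel_unshuffle(x, s)              # (B, d*s*s, grid/s, grid/s)
    return x.flatten(2).transpose(1, 2)      # s*s fewer tokens, wider channels

def token_unshuffle(tokens, grid, s=2):
    b, n, d = tokens.shape
    x = tokens.transpose(1, 2).reshape(b, d, grid // s, grid // s)
    x = F.pixel_shuffle(x, s)                # restore the full token map
    return x.flatten(2).transpose(1, 2)

tokens = torch.randn(1, 32 * 32, 256)
merged = token_shuffle(tokens, grid=32)      # (1, 256, 1024): 4x fewer tokens
restored = token_unshuffle(merged, grid=32)  # (1, 1024, 256)
print(merged.shape, restored.shape)
```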
arXiv Detail & Related papers (2025-04-24T17:59:56Z)
- Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More [34.12661784331014]
We study the information loss caused by the patchification-based compressive encoding paradigm.
We conduct extensive patch size scaling experiments and observe an intriguing scaling law in patchification.
As a by-product, we discover that with smaller patches, task-specific decoder heads become less critical for dense prediction.
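The 50,176 tokens in the title is simply 224^2, the sequence length when the patch size shrinks to 1 on a 224x224 input; the snippet below tabulates the token count across patch sizes.

```python
# Token count for a ViT-style patchifier: (image_size / patch_size) ** 2.
# At patch size 1 on a 224x224 input this is 224 * 224 = 50,176 tokens,
# the figure in the paper's title.
for p in (32, 16, 8, 4, 2, 1):
    print(f"patch {p:2d} -> {(224 // p) ** 2:6d} tokens")
```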
arXiv Detail & Related papers (2025-02-06T03:01:38Z)
- PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation [34.528256332657406]
Finetuning-free personalized image generation can synthesize customized images without test-time finetuning.
This work proposes PatchDPO, which estimates the quality of image patches within each generated image and trains the model accordingly.
Experiment results demonstrate that PatchDPO significantly improves the performance of multiple pre-trained personalized generation models.
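A minimal sketch of the patch-level weighting idea under stated assumptions: hypothetical per-patch quality scores modulate a per-patch loss. PatchDPO's actual objective is a DPO-style preference loss, which this plain weighted MSE only gestures at.

```python
import torch

def patch_weighted_loss(pred, target, patch_quality):
    # pred, target: (B, P, D) patch features; patch_quality: (B, P) in [0, 1].
    # Higher-quality patches contribute more to the training signal.
    per_patch = ((pred - target) ** 2).mean(dim=-1)  # (B, P)
    return (patch_quality * per_patch).mean()

pred = torch.randn(4, 196, 64)
target = torch.randn(4, 196, 64)
quality = torch.rand(4, 196)    # hypothetical per-patch quality estimates
print(patch_weighted_loss(pred, target, quality))
```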
arXiv Detail & Related papers (2024-12-04T09:59:43Z)
- Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective [52.778766190479374]
Latent-based image generative models have achieved notable success in image generation tasks.
Despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation.
We propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling.
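A minimal sketch of the core step of a discrete image tokenizer, namely vector quantization to the nearest codebook entry; the codebook size and dimensions are illustrative, and the paper's specific stabilization choices are not shown.

```python
import torch

codebook = torch.randn(1024, 64)                   # (K, D) learned embeddings

def quantize(latents):                             # latents: (B, N, D)
    # Snap each latent vector to its nearest codebook entry.
    d = torch.cdist(latents, codebook.unsqueeze(0).expand(latents.size(0), -1, -1))
    idx = d.argmin(dim=-1)                         # (B, N) discrete token ids
    return idx, codebook[idx]                      # ids and quantized vectors

ids, quantized = quantize(torch.randn(2, 256, 64))
print(ids.shape, quantized.shape)                  # (2, 256) (2, 256, 64)
```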
arXiv Detail & Related papers (2024-10-16T12:13:17Z)
- Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation [74.15447383432262]
The Open-MAGVIT2 project produces an open-source replication of Google's MAGVIT-v2 tokenizer.
We provide a tokenizer pre-trained on large-scale data, significantly outperforming Cosmos on zero-shot benchmarks.
We produce a family of auto-regressive image generation models ranging from 300M to 1.5B parameters.
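A minimal sketch of MAGVIT-v2-style lookup-free quantization (LFQ), the tokenizer design this project replicates: each latent channel is quantized to +-1 by its sign and the binary code is read as an integer token id. Dimensions here are illustrative.

```python
import torch

def lfq(latents):                                  # latents: (B, N, D)
    bits = (latents > 0).long()                    # (B, N, D) binary code
    quantized = bits.float() * 2 - 1               # values in {-1, +1}
    weights = 2 ** torch.arange(latents.size(-1))  # 2^0 .. 2^(D-1)
    token_ids = (bits * weights).sum(dim=-1)       # (B, N) ids in [0, 2^D)
    return token_ids, quantized

ids, q = lfq(torch.randn(2, 256, 18))              # 18 bits -> 262,144 codes
print(ids.shape, ids.max() < 2 ** 18)
```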
arXiv Detail & Related papers (2024-09-06T17:14:53Z)
- Multi-Modal Parameter-Efficient Fine-tuning via Graph Neural Network [2.12696199609647]
This paper proposes a multi-modal parameter-efficient fine-tuning method based on graph networks.
The proposed model achieves test accuracies on the OxfordPets, Flowers102, and Food101 datasets that improve by 4.45%, 2.92%, and 0.23%, respectively.
arXiv Detail & Related papers (2024-08-01T05:24:20Z)
- Rejuvenating image-GPT as Strong Visual Representation Learners [28.77567067712619]
This paper enhances image-GPT, one of the pioneering works that introduced autoregressive pretraining to predict next pixels.
We shift the prediction target from raw pixels to semantic tokens, enabling a higher-level understanding of visual content.
Experiments showcase that D-iGPT excels as a strong learner of visual representations.
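A minimal sketch of the target shift, assuming a frozen teacher encoder supplies the semantic features: a causal backbone predicts the next position's teacher features instead of raw pixels. The toy backbone, cosine loss, and sizes are assumptions, not D-iGPT's exact recipe.

```python
import torch
import torch.nn as nn

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
)
head = nn.Linear(256, 512)                     # project to teacher feature dim

x = torch.randn(4, 64, 256)                    # input token embeddings
teacher_feats = torch.randn(4, 64, 512)        # frozen semantic targets

causal = nn.Transformer.generate_square_subsequent_mask(64)
pred = head(backbone(x, mask=causal))
# Predict position t+1's semantic features from positions <= t.
loss = 1 - nn.functional.cosine_similarity(
    pred[:, :-1], teacher_feats[:, 1:], dim=-1).mean()
print(loss.item())
```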
arXiv Detail & Related papers (2023-12-04T18:59:20Z)
- Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack [75.00066365801993]
Training text-to-image models with web-scale image-text pairs enables the generation of a wide range of visual concepts from text.
These pre-trained models often face challenges when it comes to generating highly aesthetic images.
We propose quality-tuning to guide a pre-trained model to exclusively generate highly visually appealing images.
arXiv Detail & Related papers (2023-09-27T17:30:19Z)
- Query-Efficient Decision-based Black-Box Patch Attack [36.043297146652414]
We propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks.
DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and attack success rate.
We conduct the vulnerability evaluation of ViT on image classification in the decision-based patch attack setting for the first time.
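A generic differential-evolution loop of the kind DevoPatch builds on, with a toy fitness standing in for black-box model queries; the 4-dimensional patch encoding here is hypothetical (e.g. x, y, size, and a gray level).

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(candidate):                     # placeholder for a model query
    return -np.sum((candidate - 0.5) ** 2)  # toy objective: approach 0.5

pop = rng.random((20, 4))                   # population of patch encodings
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = np.clip(a + 0.8 * (b - c), 0, 1)               # mutation
        cross = np.where(rng.random(4) < 0.9, mutant, pop[i])   # crossover
        if fitness(cross) > fitness(pop[i]):                    # selection
            pop[i] = cross
best = max(pop, key=fitness)
print(best)
```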
arXiv Detail & Related papers (2023-07-02T05:15:43Z)
- Memory Efficient Diffusion Probabilistic Models via Patch-based Generation [11.749564892273828]
Diffusion probabilistic models have been successful in generating high-quality and diverse images.
Traditional models, whose input and output are high-resolution images, suffer from excessive memory requirements.
We propose a patch-based approach for diffusion probabilistic models that generates images on a patch-by-patch basis.
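A minimal sketch of the memory framing: denoise the image patch by patch and stitch the results so peak memory scales with the patch size rather than the full resolution. The `denoise_patch` stub stands in for a real diffusion sampler; the paper's cross-patch consistency conditioning is omitted.

```python
import torch

def denoise_patch(noise):                   # placeholder for a sampler loop
    return noise.clamp(-1, 1)

H = W = 1024
p = 256                                     # patch resolution held in memory
canvas = torch.zeros(3, H, W)
for i in range(0, H, p):
    for j in range(0, W, p):
        canvas[:, i:i + p, j:j + p] = denoise_patch(torch.randn(3, p, p))
print(canvas.shape)
```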
arXiv Detail & Related papers (2023-04-14T12:20:18Z)
- Centroid-centered Modeling for Efficient Vision Transformer Pre-training [44.24223088955106]
Masked Image Modeling (MIM) is a new self-supervised vision pre-training paradigm using a Vision Transformer (ViT).
Our proposed centroid-based approach, CCViT, leverages k-means clustering to obtain centroids for image modeling without supervised training of the tokenizer model.
Our approach achieves competitive results with recent baselines without external supervision and distillation training from other models.
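A minimal sketch of centroid-based tokenization, assuming k-means is fit offline on flattened patches and a patch's token id is its nearest centroid, so no tokenizer network needs training; patch and codebook sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline: cluster flattened 16x16 RGB patches into a centroid codebook.
patches = np.random.rand(2_000, 16 * 16 * 3)
kmeans = KMeans(n_clusters=64, n_init=10).fit(patches)

# Tokenize: each new patch maps to the id of its nearest centroid.
new_patches = np.random.rand(196, 16 * 16 * 3)   # one image's patches
token_ids = kmeans.predict(new_patches)          # (196,) discrete tokens
print(token_ids[:10])
```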
arXiv Detail & Related papers (2023-03-08T15:34:57Z)
- Scaling Autoregressive Models for Content-Rich Text-to-Image Generation [95.02406834386814]
Parti treats text-to-image generation as a sequence-to-sequence modeling problem.
Parti uses a Transformer-based image tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens.
PartiPrompts (P2) is a new holistic benchmark of over 1600 English prompts.
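A minimal sketch of the sequence-to-sequence framing, assuming a generic transformer encoder-decoder: text tokens in, logits over a ViT-VQGAN-style image-token codebook out. All sizes are toy values far from Parti's scale.

```python
import torch
import torch.nn as nn

vocab_text, vocab_image, d = 1000, 8192, 256
text_emb = nn.Embedding(vocab_text, d)
image_emb = nn.Embedding(vocab_image, d)
seq2seq = nn.Transformer(d_model=d, nhead=8, num_encoder_layers=2,
                         num_decoder_layers=2, batch_first=True)
to_logits = nn.Linear(d, vocab_image)

text = torch.randint(0, vocab_text, (2, 16))           # text token ids
image_tokens = torch.randint(0, vocab_image, (2, 64))  # image token ids
causal = nn.Transformer.generate_square_subsequent_mask(64)
out = seq2seq(text_emb(text), image_emb(image_tokens), tgt_mask=causal)
logits = to_logits(out)                    # next-image-token prediction
print(logits.shape)                        # (2, 64, 8192)
```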
arXiv Detail & Related papers (2022-06-22T01:11:29Z)
- Corrupted Image Modeling for Self-Supervised Visual Pre-Training [103.99311611776697]
We introduce Corrupted Image Modeling (CIM) for self-supervised visual pre-training.
CIM uses an auxiliary generator with a small trainable BEiT to corrupt the input image instead of using artificial mask tokens.
After pre-training, the enhancer can be used as a high-capacity visual encoder for downstream tasks.
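A minimal sketch of the corrupt-then-enhance setup, with toy stand-ins for both networks: a small generator fills masked patch positions, and the enhancer learns to tell original patches from generated ones (one of CIM's objectives).

```python
import torch
import torch.nn as nn

n_patches, d = 196, 256
generator = nn.Linear(d, d)                         # stand-in for a small BEiT
enhancer = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, 1))

patches = torch.randn(8, n_patches, d)              # patch embeddings
mask = torch.rand(8, n_patches) < 0.4               # 40% positions corrupted
corrupted = torch.where(mask.unsqueeze(-1), generator(patches), patches)

logits = enhancer(corrupted).squeeze(-1)            # per-patch "replaced?" score
loss = nn.functional.binary_cross_entropy_with_logits(logits, mask.float())
print(loss.item())
```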
arXiv Detail & Related papers (2022-02-07T17:59:04Z)
- Evolving Image Compositions for Feature Representation Learning [22.22790506995431]
We propose PatchMix, a data augmentation method that creates new samples by composing patches from pairs of images in a grid-like pattern.
A ResNet-50 model trained on ImageNet using PatchMix exhibits superior transfer learning capabilities across a wide array of benchmarks.
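A minimal sketch of grid-wise patch mixing: build a new sample by choosing, cell by cell, patches from one of two images; the grid size and the unweighted label-mix ratio are illustrative simplifications.

```python
import torch

def patchmix(img_a, img_b, grid=4):
    _, H, W = img_a.shape
    ph, pw = H // grid, W // grid
    out = img_a.clone()
    take_b = torch.rand(grid, grid) < 0.5          # which cells come from b
    for i in range(grid):
        for j in range(grid):
            if take_b[i, j]:
                out[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw] = \
                    img_b[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
    return out, take_b.float().mean()              # image and label mix ratio

mixed, lam = patchmix(torch.rand(3, 224, 224), torch.rand(3, 224, 224))
print(mixed.shape, lam)
```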
arXiv Detail & Related papers (2021-06-16T17:57:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.