Rethinking the Objectives of Vector-Quantized Tokenizers for Image
Synthesis
- URL: http://arxiv.org/abs/2212.03185v1
- Date: Tue, 6 Dec 2022 17:58:38 GMT
- Title: Rethinking the Objectives of Vector-Quantized Tokenizers for Image
Synthesis
- Authors: Yuchao Gu, Xintao Wang, Yixiao Ge, Ying Shan, Xiaohu Qie, Mike Zheng
Shou
- Abstract summary: We show that improving the reconstruction fidelity of VQ tokenizers does not necessarily improve the generation ability of generative transformers.
We propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance the two objectives.
Our SeQ-GAN (364M) achieves Frechet Inception Distance (FID) of 6.25 and Inception Score (IS) of 140.9 on 256x256 ImageNet generation.
- Score: 30.654501418221475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector-Quantized (VQ-based) generative models usually consist of two basic
components, i.e., VQ tokenizers and generative transformers. Prior research
focuses on improving the reconstruction fidelity of VQ tokenizers but rarely
examines how the improvement in reconstruction affects the generation ability
of generative transformers. In this paper, we surprisingly find that improving
the reconstruction fidelity of VQ tokenizers does not necessarily improve the
generation. Instead, learning to compress semantic features within VQ
tokenizers significantly improves generative transformers' ability to capture
textures and structures. We thus highlight two competing objectives of VQ
tokenizers for image synthesis: semantic compression and details preservation.
Different from previous work that only pursues better details preservation, we
propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance
the two objectives. In the first phase, we propose a semantic-enhanced
perceptual loss for better semantic compression. In the second phase, we fix
the encoder and codebook, but enhance and finetune the decoder to achieve
better details preservation. The proposed SeQ-GAN greatly improves VQ-based
generative models and surpasses the GAN and Diffusion Models on both
unconditional and conditional image generation. Our SeQ-GAN (364M) achieves
Frechet Inception Distance (FID) of 6.25 and Inception Score (IS) of 140.9 on
256x256 ImageNet generation, a remarkable improvement over VIT-VQGAN (714M),
which obtains 11.2 FID and 97.2 IS.
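As a rough illustration of the two-phase recipe described in the abstract, the sketch below trains a toy VQ tokenizer end to end in phase 1, then freezes the encoder and codebook and finetunes only the decoder in phase 2. All module names, layer sizes, and the stand-in semantic_perceptual_loss are hypothetical placeholders, not the authors' implementation.
```python
# A toy two-phase training schedule for a VQ tokenizer (hypothetical modules,
# not the official SeQ-GAN code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVQTokenizer(nn.Module):
    def __init__(self, dim=64, codebook_size=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
                                     nn.Conv2d(dim, dim, 4, 2, 1))
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = nn.Sequential(nn.ConvTranspose2d(dim, dim, 4, 2, 1), nn.ReLU(),
                                     nn.ConvTranspose2d(dim, 3, 4, 2, 1))

    def quantize(self, z):
        # Nearest-codeword lookup; the first return value uses the
        # straight-through estimator so gradients reach the encoder.
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        zq = self.codebook(idx).view(z.shape[0], z.shape[2], z.shape[3], -1).permute(0, 3, 1, 2)
        return z + (zq - z).detach(), zq

    def forward(self, x):
        z = self.encoder(x)
        zq_st, zq = self.quantize(z)
        return self.decoder(zq_st), z, zq

def semantic_perceptual_loss(recon, target):
    # Placeholder for a semantic-enhanced perceptual loss computed on deep,
    # semantically meaningful features; plain L1 keeps the sketch runnable.
    return F.l1_loss(recon, target)

model = ToyVQTokenizer()
x = torch.rand(2, 3, 64, 64)

# Phase 1: train encoder, codebook and decoder for semantic compression.
opt1 = torch.optim.Adam(model.parameters(), lr=1e-4)
recon, z, zq = model(x)
loss1 = (semantic_perceptual_loss(recon, x)
         + F.mse_loss(zq, z.detach())          # codebook loss
         + 0.25 * F.mse_loss(z, zq.detach()))  # commitment loss
loss1.backward()
opt1.step()

# Phase 2: freeze encoder and codebook, finetune only the decoder for details.
for p in model.encoder.parameters():
    p.requires_grad_(False)
for p in model.codebook.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)
opt2.zero_grad()
recon, _, _ = model(x)
loss2 = F.l1_loss(recon, x)                    # stand-in for a detail/adversarial objective
loss2.backward()
opt2.step()
```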
Related papers
- Factorized Visual Tokenization and Generation [37.56136469262736]
We introduce Factorized Quantization (FQ), a novel approach that revitalizes VQ-based tokenizers by decomposing a large codebook into multiple independent sub-codebooks.
This factorization reduces the lookup complexity of large codebooks, enabling more efficient and scalable visual tokenization.
Experiments show that the proposed FQGAN model substantially improves the reconstruction quality of visual tokenizers, achieving state-of-the-art performance.
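A minimal sketch of the sub-codebook idea, assuming a product-quantization-style split of each latent vector across independent codebooks; the actual FQGAN design may differ in how the sub-codebooks are learned and supervised.
```python
# Factorized quantization sketch: a latent vector is split into chunks and
# each chunk is quantized against its own small sub-codebook, so an effective
# codebook of size K**num_books is indexed with num_books lookups of size K.
import torch
import torch.nn as nn

class FactorizedQuantizer(nn.Module):
    def __init__(self, dim=64, num_books=4, codes_per_book=256):
        super().__init__()
        assert dim % num_books == 0
        self.chunk = dim // num_books
        self.books = nn.ModuleList(
            nn.Embedding(codes_per_book, self.chunk) for _ in range(num_books)
        )

    def forward(self, z):                       # z: (N, dim) flattened latents
        chunks, indices = [], []
        for book, zc in zip(self.books, z.split(self.chunk, dim=1)):
            idx = torch.cdist(zc, book.weight).argmin(dim=1)
            chunks.append(book(idx))
            indices.append(idx)
        zq = torch.cat(chunks, dim=1)
        zq = z + (zq - z).detach()              # straight-through for training
        return zq, torch.stack(indices, dim=1)  # (N, dim), (N, num_books)

quant = FactorizedQuantizer()
z = torch.randn(8, 64)
zq, codes = quant(z)
print(zq.shape, codes.shape)                    # torch.Size([8, 64]) torch.Size([8, 4])
```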
arXiv Detail & Related papers (2024-11-25T18:59:53Z)
- Image Understanding Makes for A Good Tokenizer for Image Generation [62.875788091204626]
We introduce a token-based IG framework, which relies on effective tokenizers to project images into token sequences.
We show that tokenizers with strong IU capabilities achieve superior IG performance across a variety of metrics, datasets, tasks, and proposal networks.
arXiv Detail & Related papers (2024-11-07T03:55:23Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion-prior-based IQA (DP-IQA).
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image Compression [18.05997169440533]
We propose ConvNeXt-ChARM, an efficient ConvNeXt-based transform coding framework, paired with a compute-efficient channel-wise auto-regressive prior.
We show that ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions, estimated on average at 5.24% and 1.22%, over the versatile video coding (VVC) reference encoder (VTM-18.0) and the state-of-the-art learned image compression method SwinT-ChARM, respectively.
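A minimal sketch of a channel-wise auto-regressive entropy model of this kind, assuming the latent channels are split into slices whose Gaussian parameters are predicted from a hyperprior feature plus previously coded slices; the plain 1x1 convolutions are illustrative, not the ConvNeXt-based transforms of the paper.
```python
# Channel-wise auto-regressive (ChARM-style) prior sketch.
import torch
import torch.nn as nn

class ChannelARPrior(nn.Module):
    def __init__(self, channels=192, num_slices=4, hyper_dim=64):
        super().__init__()
        self.slice = channels // num_slices
        self.param_nets = nn.ModuleList(
            nn.Conv2d(hyper_dim + i * self.slice, 2 * self.slice, 1)
            for i in range(num_slices)
        )

    def forward(self, y, hyper):
        # y: (B, C, H, W) latent; hyper: (B, hyper_dim, H, W) hyperprior features.
        decoded, means, scales = [], [], []
        for i, y_i in enumerate(y.split(self.slice, dim=1)):
            ctx = torch.cat([hyper] + decoded, dim=1)        # already-coded slices as context
            mu, log_sigma = self.param_nets[i](ctx).chunk(2, dim=1)
            means.append(mu)
            scales.append(log_sigma.exp())
            decoded.append(y_i)
        return torch.cat(means, dim=1), torch.cat(scales, dim=1)

prior = ChannelARPrior()
y = torch.randn(1, 192, 16, 16)
hyper = torch.randn(1, 64, 16, 16)
mu, sigma = prior(y, hyper)
# The rate term would be the negative log-likelihood of the quantized latent
# under N(mu, sigma); that part is omitted here.
print(mu.shape, sigma.shape)            # both torch.Size([1, 192, 16, 16])
```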
arXiv Detail & Related papers (2023-07-12T11:45:54Z)
- E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation [95.49128988683191]
Sequence-to-sequence (seq2seq) learning is a popular approach for large-scale pretraining of language models.
We propose an encoding-enhanced seq2seq pretraining strategy, namely E2S2.
E2S2 improves seq2seq models by integrating more efficient self-supervised information into the encoders.
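A generic sketch of the idea the summary describes, combining the usual decoder-side cross-entropy with an additional self-supervised loss computed directly on the encoder outputs; the toy model, corruption setup, and loss weight below are assumptions, not the E2S2 recipe itself.
```python
# Encoder-enhanced seq2seq objective sketch (toy sizes, random data).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim = 1000, 64
embed = nn.Embedding(vocab, dim)
model = nn.Transformer(d_model=dim, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
enc_head = nn.Linear(dim, vocab)              # predicts the original (uncorrupted) source tokens
dec_head = nn.Linear(dim, vocab)              # predicts the target tokens

src = torch.randint(0, vocab, (2, 16))        # corrupted source
src_labels = torch.randint(0, vocab, (2, 16)) # original source tokens
tgt_in = torch.randint(0, vocab, (2, 12))
tgt_labels = torch.randint(0, vocab, (2, 12))

memory = model.encoder(embed(src))
dec_out = model.decoder(embed(tgt_in), memory,
                        tgt_mask=model.generate_square_subsequent_mask(12))

seq2seq_loss = F.cross_entropy(dec_head(dec_out).flatten(0, 1), tgt_labels.flatten())
encoder_loss = F.cross_entropy(enc_head(memory).flatten(0, 1), src_labels.flatten())
loss = seq2seq_loss + 0.5 * encoder_loss      # 0.5 is an arbitrary weight
loss.backward()
```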
arXiv Detail & Related papers (2022-05-30T08:25:36Z)
- Lossless Acceleration for Seq2seq Generation with Aggressive Decoding [74.12096349944497]
Aggressive Decoding is a novel decoding algorithm for seq2seq generation.
Our approach aims to yield identical (or better) generation compared with autoregressive decoding.
We test Aggressive Decoding on the most popular 6-layer Transformer model on GPU in multiple seq2seq tasks.
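A simplified, model-free sketch of the draft-and-verify idea behind this kind of decoding: when the output is expected to stay close to the input (e.g. text editing), the input serves as a draft, one parallel pass verifies it, and ordinary step-by-step decoding resumes only after the first mismatch. The predict_fn interface and the toy "model" below are hypothetical stand-ins, not the paper's implementation.
```python
def verify_draft(predict_fn, draft):
    # predict_fn(tokens)[i] is the model's argmax token following tokens[:i+1].
    preds = predict_fn(draft[:-1])
    accepted = [draft[0]]                  # toy shortcut: first draft token taken as given
    for p, d in zip(preds, draft[1:]):
        if p == d:
            accepted.append(d)
        else:
            accepted.append(p)             # keep the model's own token at the mismatch
            break
    return accepted

def aggressive_decode(predict_fn, src_tokens, eos=2, max_len=64):
    out = verify_draft(predict_fn, list(src_tokens))
    while out[-1] != eos and len(out) < max_len:
        out.append(predict_fn(out)[-1])    # fall back to ordinary autoregressive steps
    return out

# Toy "model": reproduces the source with 3 rewritten to 7, then emits EOS (=2).
SRC = [5, 5, 3, 5, 5]

def toy_predict(prefix):
    target = [7 if t == 3 else t for t in SRC] + [2]
    return [target[min(i + 1, len(target) - 1)] for i in range(len(prefix))]

print(aggressive_decode(toy_predict, SRC))   # -> [5, 5, 7, 5, 5, 2]
```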
arXiv Detail & Related papers (2022-05-20T17:59:00Z)
- VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder [83.63843671885716]
We propose a VQ-based face restoration method -- VQFR.
VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces.
To further fuse low-level features from inputs while not "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder.
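A generic two-branch decoder sketch along these lines, in which a texture branch decodes purely from VQ-codebook features while a main branch fuses low-level input features at each scale; the plain convolutions and concatenation fusion are simplifications, not the actual VQFR modules.
```python
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.texture_up = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(dim, dim, 3, 1, 1))
        self.main_up = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(dim, dim, 3, 1, 1))
        self.fuse = nn.Conv2d(3 * dim, dim, 3, 1, 1)    # main + texture + input feature

    def forward(self, texture, main, input_feat):
        texture = self.texture_up(texture)              # VQ branch: never sees input features
        main = self.main_up(main)
        main = self.fuse(torch.cat([main, texture, input_feat], dim=1))
        return texture, main

dim = 64
block = ParallelDecoderBlock(dim)
vq_feat = torch.randn(1, dim, 16, 16)                   # decoded from the VQ codebook
input_feat = torch.randn(1, dim, 32, 32)                # low-level feature of the degraded input
texture, main = block(vq_feat, vq_feat, input_feat)
print(texture.shape, main.shape)                        # both torch.Size([1, 64, 32, 32])
```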
arXiv Detail & Related papers (2022-05-13T17:54:40Z)
- Vector-quantized Image Modeling with Improved VQGAN [93.8443646643864]
We propose a Vector-quantized Image Modeling approach that involves pretraining a Transformer to predict image tokens autoregressively.
We first propose multiple improvements over vanilla VQGAN from architecture to codebook learning, yielding better efficiency and reconstruction fidelity.
When trained on ImageNet at 256x256 resolution, we achieve Inception Score (IS) of 175.1 and Frechet Inception Distance (FID) of 4.17, a dramatic improvement over the vanilla VQGAN.
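A minimal sketch of this second stage, assuming a frozen VQ tokenizer has already mapped each image to a grid of codebook indices; the tiny decoder-only Transformer below is illustrative only, not the ViT-VQGAN configuration.
```python
# Autoregressive modeling over VQ image tokens (toy sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

codebook_size, dim, grid = 1024, 128, 8 * 8    # 8x8 token grid per image

class TokenTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(codebook_size + 1, dim)   # +1 for a BOS token
        self.pos = nn.Embedding(grid + 1, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, tokens):                            # tokens: (B, L) codebook indices
        bos = torch.full((tokens.shape[0], 1), codebook_size, dtype=torch.long)
        x = torch.cat([bos, tokens], dim=1)
        h = self.tok(x) + self.pos(torch.arange(x.shape[1]))
        mask = torch.triu(torch.full((x.shape[1], x.shape[1]), float("-inf")), diagonal=1)
        h = self.blocks(h, mask=mask)                     # causal self-attention
        return self.head(h[:, :-1])                       # predict token t from tokens < t

model = TokenTransformer()
tokens = torch.randint(0, codebook_size, (2, grid))       # from a frozen VQ tokenizer
logits = model(tokens)
loss = F.cross_entropy(logits.reshape(-1, codebook_size), tokens.reshape(-1))
loss.backward()
```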
arXiv Detail & Related papers (2021-10-09T18:36:00Z)
- Hierarchical Quantized Autoencoders [3.9146761527401432]
We motivate the use of a hierarchy of Vector Quantized Variational Autoencoders (VQ-VAEs) to attain high factors of compression.
We show that a combination of quantization and hierarchical latent structure aids likelihood-based image compression.
Our resulting scheme produces a Markovian series of latent variables that reconstruct images of high perceptual quality.
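A minimal sketch of such a hierarchy, assuming each level simply compresses the code produced by the level below and reconstruction unrolls the chain top-down; the single-conv levels and layer sizes are illustrative, not the configuration from the paper.
```python
import torch
import torch.nn as nn

class VQLevel(nn.Module):
    def __init__(self, in_ch, dim=64, codes=256):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, dim, 4, 2, 1)             # downsample by 2
        self.codebook = nn.Embedding(codes, dim)
        self.dec = nn.ConvTranspose2d(dim, in_ch, 4, 2, 1)    # upsample by 2

    def encode(self, x):
        z = self.enc(x)
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        idx = torch.cdist(flat, self.codebook.weight).argmin(1)
        zq = self.codebook(idx).view(z.shape[0], z.shape[2], z.shape[3], -1).permute(0, 3, 1, 2)
        return z + (zq - z).detach()                          # straight-through

    def decode(self, zq):
        return self.dec(zq)

levels = nn.ModuleList([VQLevel(3), VQLevel(64), VQLevel(64)])
x = torch.rand(1, 3, 64, 64)

h = x
for lvl in levels:                     # bottom-up: each level compresses the code below it
    h = lvl.encode(h)                  # h ends up as the top-level discrete latent

recon = h
for lvl in list(levels)[::-1]:         # top-down: decode back through the Markovian chain
    recon = lvl.decode(recon)
print(recon.shape)                     # torch.Size([1, 3, 64, 64])
```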
arXiv Detail & Related papers (2020-02-19T11:26:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.