RecTok: Reconstruction Distillation along Rectified Flow
- URL: http://arxiv.org/abs/2512.13421v2
- Date: Wed, 17 Dec 2025 07:11:53 GMT
- Title: RecTok: Reconstruction Distillation along Rectified Flow
- Authors: Qingyu Shi, Size Wu, Jinbin Bai, Kaidong Yu, Yujing Wang, Yunhai Tong, Xiangtai Li, Xuelong Li
- Abstract summary: We propose RecTok, which overcomes the limitations of high-dimensional visual tokenizers through two key innovations. Our method distills the semantic information in VFMs into the forward flow trajectories in flow matching. Our RecTok achieves superior image reconstruction, generation quality, and discriminative performance.
- Score: 85.51292475005151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual tokenizers play a crucial role in diffusion models. The dimensionality of the latent space governs both reconstruction fidelity and the semantic expressiveness of latent features. However, a fundamental trade-off between dimensionality and generation quality constrains existing methods to low-dimensional latent spaces. Although recent works have leveraged vision foundation models (VFMs) to enrich the semantics of visual tokenizers and accelerate convergence, high-dimensional tokenizers still underperform their low-dimensional counterparts. In this work, we propose RecTok, which overcomes the limitations of high-dimensional visual tokenizers through two key innovations: flow semantic distillation and reconstruction-alignment distillation. Our key insight is to make the forward flow in flow matching, which serves as the training space of diffusion transformers, semantically rich, rather than focusing on the latent space as in previous works. Specifically, our method distills the semantic information in VFMs into the forward flow trajectories of flow matching, and we further enhance the semantics by introducing a masked feature reconstruction loss. RecTok achieves superior image reconstruction, generation quality, and discriminative performance: it achieves state-of-the-art gFID-50K results both with and without classifier-free guidance, while maintaining a semantically rich latent-space structure. Furthermore, we observe consistent improvements as the latent dimensionality increases. Code and model are available at https://shi-qingyu.github.io/rectok.github.io.
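The two ingredients the abstract names, the rectified-flow forward interpolation and a feature-alignment (distillation) loss against frozen VFM features, can be sketched in a few lines. This is a minimal toy illustration of the general idea, not the paper's implementation; the shapes, the identity projection standing in for a learned head, and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_flow(x0, noise, t):
    """Rectified-flow forward interpolation: a straight path from data latents
    x0 (at t=0) to Gaussian noise (at t=1)."""
    return (1.0 - t) * x0 + t * noise

def cosine_align_loss(student, teacher):
    """1 - mean cosine similarity between student features and frozen
    teacher (VFM) features; 0 when perfectly aligned."""
    s = student / (np.linalg.norm(student, axis=-1, keepdims=True) + 1e-8)
    u = teacher / (np.linalg.norm(teacher, axis=-1, keepdims=True) + 1e-8)
    return 1.0 - np.mean(np.sum(s * u, axis=-1))

# Toy tensors (hypothetical sizes): 4 tokens with 16-dim latents.
x0 = rng.normal(size=(4, 16))        # tokenizer latents
noise = rng.normal(size=(4, 16))     # Gaussian endpoint of the flow
vfm_feat = rng.normal(size=(4, 16))  # frozen VFM features to distill from

t = 0.3
x_t = forward_flow(x0, noise, t)     # a point on the forward trajectory
# Flow semantic distillation, in spirit: align (a projection of) the
# intermediate point x_t with the VFM features, so the forward trajectory
# itself, i.e. the diffusion transformer's training space, carries semantics.
loss = cosine_align_loss(x_t, vfm_feat)
print(float(loss))
```

In a real training loop the alignment would go through a learned projection head and be combined with the reconstruction and masked-feature-reconstruction objectives; here the identity projection keeps the sketch self-contained.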
Related papers
- DINO-SAE: DINO Spherical Autoencoder for High-Fidelity Image Reconstruction and Generation [47.409626500688866]
We present the DINO Spherical Autoencoder (DINO-SAE), a framework that bridges semantic representation and pixel-level reconstruction. Our approach achieves state-of-the-art reconstruction quality, reaching 0.37 rFID and 26.2 dB PSNR, while maintaining strong semantic alignment to the pretrained VFM.
arXiv Detail & Related papers (2026-01-30T12:25:34Z) - VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction [83.50898344094153]
VQRAE produces continuous semantic features for image understanding and discrete tokens for visual generation within a unified tokenizer. The design sacrifices negligible semantic information, so the discrete tokens maintain the ability of multimodal understanding. VQRAE presents competitive performance on several benchmarks of visual understanding, generation, and reconstruction.
arXiv Detail & Related papers (2025-11-28T17:26:34Z) - Latent Diffusion Model without Variational Autoencoder [78.34722551463223]
SVG is a novel latent diffusion model without variational autoencoders for visual generation. It constructs a feature space with clear semantic discriminability by leveraging frozen DINO features. It enables accelerated diffusion training, supports few-step sampling, and improves generative quality.
arXiv Detail & Related papers (2025-10-17T04:17:44Z) - Diffusion Counterfactuals for Image Regressors [1.534667887016089]
We present two methods to create counterfactual explanations for image regression tasks using diffusion-based generative models. Both produce realistic, semantic, and smooth counterfactuals on CelebA-HQ and a synthetic data set. We find that for regression counterfactuals, changes in features depend on the region of the predicted value.
arXiv Detail & Related papers (2025-03-26T14:42:46Z) - CAM-Seg: A Continuous-valued Embedding Approach for Semantic Image Generation [11.170848285659572]
Autoencoder accuracy on segmentation masks using quantized embeddings is 8% lower than with continuous-valued embeddings. We propose a continuous-valued embedding framework for semantic segmentation. Our approach eliminates the need for discrete latent representations while preserving fine-grained semantic details.
arXiv Detail & Related papers (2025-03-19T18:06:54Z) - Exploring Representation-Aligned Latent Space for Better Generation [86.45670422239317]
We introduce ReaLS, which integrates semantic priors to improve generation performance. We show that fundamental DiT and SiT models trained on ReaLS achieve a 15% improvement in the FID metric. The enhanced semantic latent space enables perceptual downstream tasks such as segmentation and depth estimation.
arXiv Detail & Related papers (2025-02-01T07:42:12Z) - CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z) - Diffusion Models already have a Semantic Latent Space [7.638042073679074]
We propose asymmetric reverse process (Asyrp) which discovers the semantic latent space in frozen pretrained diffusion models.
Our semantic latent space, named h-space, has nice properties for accommodating semantic image manipulation.
In addition, we introduce a principled design of the generative process for versatile editing and quality boosting by quantifiable measures.
arXiv Detail & Related papers (2022-10-20T02:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.