Fine-structure Preserved Real-world Image Super-resolution via Transfer VAE Training
- URL: http://arxiv.org/abs/2507.20291v1
- Date: Sun, 27 Jul 2025 14:11:29 GMT
- Title: Fine-structure Preserved Real-world Image Super-resolution via Transfer VAE Training
- Authors: Qiaosi Yi, Shuai Li, Rongyuan Wu, Lingchen Sun, Yuhui Wu, Lei Zhang
- Abstract summary: We propose a Transfer VAE Training (TVT) strategy to transfer the 8$\times$ downsampled VAE into a 4$\times$ one while adapting to the pre-trained UNet. The TVT strategy aligns the new encoder-decoder pair with the original VAE latent space while enhancing image fine details.
- Score: 14.058527210122831
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Impressive results on real-world image super-resolution (Real-ISR) have been achieved by employing pre-trained stable diffusion (SD) models. However, one critical issue of such methods lies in their poor reconstruction of image fine structures, such as small characters and textures, due to the aggressive resolution reduction of the VAE (e.g., 8$\times$ downsampling) in the SD model. One solution is to employ a VAE with a lower downsampling rate for diffusion; however, adapting its latent features with the pre-trained UNet while mitigating the increased computational cost poses new challenges. To address these issues, we propose a Transfer VAE Training (TVT) strategy to transfer the 8$\times$ downsampled VAE into a 4$\times$ one while adapting to the pre-trained UNet. Specifically, we first train a 4$\times$ decoder based on the output features of the original VAE encoder, then train a 4$\times$ encoder while keeping the newly trained decoder fixed. Such a TVT strategy aligns the new encoder-decoder pair with the original VAE latent space while enhancing image fine details. Additionally, we introduce a compact VAE and compute-efficient UNet by optimizing their network architectures, reducing the computational cost while capturing high-resolution fine-scale features. Experimental results demonstrate that our TVT method significantly improves fine-structure preservation, which is often compromised by other SD-based methods, while requiring fewer FLOPs than state-of-the-art one-step diffusion models. The official code can be found at https://github.com/Joyies/TVT.
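The two-stage schedule in the abstract maps naturally to code. Below is a minimal PyTorch sketch of one training step per stage; the module and variable names (`vae_sd_encoder`, `decoder4x`, `encoder4x`) are illustrative assumptions, not the API of the official repository.

```python
import torch
import torch.nn.functional as F

# Stage 1: train the new 4x decoder to reconstruct images from the
# *frozen* original VAE encoder's output features, so the decoder stays
# aligned with the latent space the pre-trained UNet already expects.
def stage1_step(vae_sd_encoder, decoder4x, x, opt):
    with torch.no_grad():
        feats = vae_sd_encoder(x)   # frozen original encoder
    x_hat = decoder4x(feats)        # new 4x decoder
    loss = F.l1_loss(x_hat, x)      # the paper may add perceptual/GAN terms
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 2: freeze the stage-1 decoder and train the 4x encoder to
# produce latents this decoder reconstructs well, closing the new
# encoder-decoder loop while preserving latent-space compatibility.
def stage2_step(encoder4x, decoder4x, x, opt):
    decoder4x.requires_grad_(False)
    x_hat = decoder4x(encoder4x(x))
    loss = F.l1_loss(x_hat, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Training the decoder first against the frozen encoder is what keeps the new VAE's latents usable by the pre-trained UNet.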
Related papers
- Steering One-Step Diffusion Model with Fidelity-Rich Decoder for Fast Image Compression [36.10674664089876]
SODEC is a novel single-step diffusion-based image compression model. It mitigates the fidelity loss that comes from over-reliance on generative priors, and it significantly outperforms existing methods, achieving superior rate-distortion-perception performance.
arXiv Detail & Related papers (2025-08-07T02:24:03Z) - Rapid Salient Object Detection with Difference Convolutional Neural Networks [49.838283141381716]
This paper addresses the challenge of deploying salient object detection (SOD) on resource-constrained devices with real-time performance. We propose an efficient network design that combines traditional wisdom on SOD with the representation power of modern CNNs.
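The title's "difference convolutions" are commonly formulated as central difference convolution, which blends a vanilla convolution with a convolution over central differences of each receptive field; whether this paper uses exactly this operator is an assumption, but it illustrates the idea:

```python
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Generic central difference convolution: theta blends a plain 3x3
    convolution with its response to central differences (gradient-like
    cues), which suits edge-heavy tasks such as SOD."""
    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)
        if self.theta == 0.0:
            return out  # reduces to a vanilla convolution
        # Convolving the central differences equals conv(x) minus the
        # spatially-summed kernel applied as a 1x1 convolution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        return out - self.theta * F.conv2d(x, kernel_sum)
```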
arXiv Detail & Related papers (2025-07-01T20:41:05Z) - Rethinking Video Tokenization: A Conditioned Diffusion-based Approach [58.164354605550194]
The new tokenizer, Conditioned Diffusion-based Tokenizer (CDT), replaces the GAN-based decoder with a conditional diffusion model. It is trained from scratch using only a basic MSE diffusion loss for reconstruction, together with a KL term and an LPIPS perceptual loss. Even a scaled-down version of CDT (3$\times$ inference speedup) still performs comparably with top baselines.
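A sketch of such a composite objective is below; the loss weights, the `(mu, logvar)` encoder interface, and the single-call diffusion decoder are simplifying assumptions, not CDT's actual training code:

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net='vgg')  # expects inputs scaled to [-1, 1]

def tokenizer_loss(encoder, diffusion_decoder, x,
                   kl_weight=1e-6, lpips_weight=1.0):
    # Sample a latent from the encoder's posterior (VAE-style).
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # The conditional diffusion decoder's denoising objective reduces
    # to an MSE reconstruction term; here it is collapsed to one call.
    x_hat = diffusion_decoder(z)
    mse = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    perceptual = lpips_fn(x_hat, x).mean()
    return mse + kl_weight * kl + lpips_weight * perceptual
```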
arXiv Detail & Related papers (2025-03-05T17:59:19Z) - Adversarial Diffusion Compression for Real-World Image Super-Resolution [16.496532580598007]
Real-world image super-resolution aims to reconstruct high-resolution images from degraded low-resolution inputs. One-step diffusion networks like OSEDiff and S3Diff reduce the heavy cost of multi-step sampling but still incur high computational overhead. This paper proposes a novel Real-ISR method, AdcSR, which distills the one-step diffusion network OSEDiff into a streamlined diffusion-GAN model.
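A generator-side step of such teacher-student distillation with an adversarial term might look as follows; the names and the 0.1 loss weight are illustrative assumptions, not AdcSR's actual code:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, discriminator, lr_img, opt_g):
    """The compact student mimics the frozen one-step teacher's output,
    while a discriminator pushes student outputs toward the natural-
    image manifold (the diffusion-GAN part)."""
    with torch.no_grad():
        sr_teacher = teacher(lr_img)              # frozen one-step teacher
    sr_student = student(lr_img)                  # streamlined student
    distill_loss = F.l1_loss(sr_student, sr_teacher)
    adv_loss = -discriminator(sr_student).mean()  # non-saturating GAN term
    loss = distill_loss + 0.1 * adv_loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```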
arXiv Detail & Related papers (2024-11-20T15:13:36Z) - Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget [53.311109531586844]
We demonstrate very low-cost training of large-scale T2I diffusion transformer models.
We train a 1.16-billion-parameter sparse transformer for an economical cost of only $1,890 and achieve a 12.7 FID in zero-shot generation.
We aim to release our end-to-end training pipeline to further democratize the training of large-scale diffusion models on micro-budgets.
arXiv Detail & Related papers (2024-07-22T17:23:28Z) - Hierarchical Patch Diffusion Models for High-Resolution Video Generation [50.42746357450949]
We develop deep context fusion, which propagates context information from low-scale to high-scale patches in a hierarchical manner.
We also propose adaptive computation, which allocates more network capacity and computation towards coarse image details.
The resulting model sets a new state-of-the-art FVD score of 66.32 and Inception Score of 87.68 in class-conditional video generation.
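Deep context fusion, as described above, carries information from low-scale patches to high-scale ones; one plausible reading is to crop, upsample, and concatenate the corresponding low-scale features, as in this hedged sketch (the paper's actual fusion may differ):

```python
import torch
import torch.nn.functional as F

def fuse_context(hi_feats, lo_feats, boxes):
    """hi_feats: (B, C, H, W) features of high-scale patches.
    lo_feats: (B, C, h, w) features of the corresponding low-scale view.
    boxes: per-sample (y0, y1, x0, x1) in [0, 1], locating each
    high-scale patch inside the low-scale view."""
    fused = []
    H, W = hi_feats.shape[-2:]
    for feats, (y0, y1, x0, x1) in zip(lo_feats, boxes):
        h, w = feats.shape[-2:]
        crop = feats[:, int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
        up = F.interpolate(crop.unsqueeze(0), size=(H, W),
                           mode='bilinear', align_corners=False)
        fused.append(up[0])
    # Concatenate context channel-wise so high-scale layers see both.
    return torch.cat([hi_feats, torch.stack(fused)], dim=1)
```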
arXiv Detail & Related papers (2024-06-12T01:12:53Z) - PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher [55.22994720855957]
PaGoDA is a novel pipeline that reduces the training costs through three stages: training diffusion on downsampled data, distilling the pretrained diffusion, and progressive super-resolution.
With the proposed pipeline, PaGoDA trains its diffusion model on 8$\times$ downsampled data at a $64\times$ reduced cost (shrinking each side by $8\times$ cuts the pixel count, and hence the approximate per-image compute, by $8^2 = 64\times$).
PaGoDA's pipeline can be applied directly in the latent space, adding compression alongside the pre-trained autoencoder in Latent Diffusion Models.
arXiv Detail & Related papers (2024-05-23T17:39:09Z) - Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior [8.772652777234315]
We propose a novel two-stage extreme image compression framework that exploits the powerful generative capability of pre-trained diffusion models.
Our method significantly outperforms state-of-the-art approaches in terms of visual performance at extremely low bitrates.
arXiv Detail & Related papers (2024-04-29T16:02:38Z) - ACT-Diffusion: Efficient Adversarial Consistency Training for One-step Diffusion Models [59.90959789767886]
We show that optimizing the consistency training loss minimizes the Wasserstein distance between the target and generated distributions.
By incorporating a discriminator into the consistency training framework, our method achieves improved FID scores on the CIFAR10, ImageNet 64$\times$64, and LSUN Cat 256$\times$256 datasets.
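A single training step combining the consistency loss with the discriminator term could be sketched as follows; the linear noise injection and 0.1 weighting are simplifying assumptions rather than the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def act_step(model, ema_model, discriminator, x, t_cur, t_next, opt):
    """Consistency training: outputs at adjacent noise levels (sharing
    the same noise draw) are pulled together, with the EMA model as the
    target; an adversarial term is added on the model's samples."""
    noise = torch.randn_like(x)
    x_next = x + t_next.view(-1, 1, 1, 1) * noise  # noisier point t_{n+1}
    x_cur = x + t_cur.view(-1, 1, 1, 1) * noise    # less noisy point t_n
    pred = model(x_next, t_next)
    with torch.no_grad():
        target = ema_model(x_cur, t_cur)           # stop-gradient teacher
    consistency = F.mse_loss(pred, target)
    adversarial = -discriminator(pred).mean()      # push toward real data
    loss = consistency + 0.1 * adversarial
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```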
arXiv Detail & Related papers (2023-11-23T16:49:06Z) - Data-Free Dynamic Compression of CNNs for Tractable Efficiency [46.498278084317704]
Structured pruning approaches have shown promise in lowering floating-point operations without substantial drops in accuracy. We propose HASTE (Hashing for Tractable Efficiency), a data-free, plug-and-play convolution module that instantly reduces a network's test-time inference cost without training or fine-tuning. We demonstrate our approach on the popular vision benchmarks CIFAR-10 and ImageNet, where we achieve a 46.72% reduction in FLOPs with only a 1.25% loss in accuracy.
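The core idea of using hashing to expose and skip redundant computation in a convolution can be sketched as below; this bucketing-by-LSH scheme is an illustrative reading, not the actual HASTE module:

```python
import torch
import torch.nn.functional as F

def hashed_conv2d(x, weight, n_bits=12):
    """Bucket similar receptive fields with random-hyperplane LSH and
    run the convolution GEMM once per bucket instead of once per pixel.
    weight: (O, C, k, k) with odd k; x: (B, C, H, W)."""
    b, c, h, w = x.shape
    o, _, k, _ = weight.shape
    cols = F.unfold(x, k, padding=k // 2)                 # (B, C*k*k, H*W)
    cols = cols.permute(0, 2, 1).reshape(-1, c * k * k)   # one row per field
    planes = torch.randn(c * k * k, n_bits, device=x.device)
    codes = (cols @ planes > 0).long()                    # sign-bit hashes
    keys = (codes << torch.arange(n_bits, device=x.device)).sum(1)
    uniq, inv = keys.unique(return_inverse=True)
    reps = torch.zeros(len(uniq), c * k * k, device=x.device)
    reps.index_add_(0, inv, cols)                         # bucket sums...
    reps /= torch.bincount(inv).unsqueeze(1).float()      # ...then means
    out = reps @ weight.reshape(o, -1).t()                # GEMM on buckets
    out = out[inv].reshape(b, h * w, o).permute(0, 2, 1)  # scatter back
    return out.reshape(b, o, h, w)
```

Averaging buckets is lossy; with more hash bits the buckets shrink toward exact computation, trading FLOP savings for fidelity.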
arXiv Detail & Related papers (2023-09-29T13:09:40Z) - Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models [166.64847903649598]
We propose Patch Diffusion, a generic patch-wise training framework.
Patch Diffusion significantly reduces the training time costs while improving data efficiency.
We achieve outstanding FID scores in line with state-of-the-art benchmarks.
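Patch-wise training hinges on the denoiser knowing where a patch sits in the full image; a common way to do this, and a hedged guess at the mechanism here, is to append normalized coordinate channels to each randomly cropped patch:

```python
import torch

def sample_patch_with_coords(img, patch_size):
    """img: (C, H, W). Returns a (C + 2, p, p) tensor: a random patch
    plus two channels encoding its normalized (y, x) pixel coordinates,
    so a patch-trained denoiser can stay location-aware."""
    _, h, w = img.shape
    y = torch.randint(0, h - patch_size + 1, (1,)).item()
    x = torch.randint(0, w - patch_size + 1, (1,)).item()
    patch = img[:, y:y + patch_size, x:x + patch_size]
    ys = torch.linspace(y / h, (y + patch_size) / h, patch_size)
    xs = torch.linspace(x / w, (x + patch_size) / w, patch_size)
    coords = torch.stack(torch.meshgrid(ys, xs, indexing='ij'))
    return torch.cat([patch, coords], dim=0)
```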
arXiv Detail & Related papers (2023-04-25T02:35:54Z) - Asymmetric Learned Image Compression with Multi-Scale Residual Block, Importance Map, and Post-Quantization Filtering [15.056672221375104]
Deep learning-based image compression has achieved better rate-distortion (R-D) performance than the latest traditional method, H.266/VVC.
However, many leading learned schemes cannot maintain a good trade-off between performance and complexity.
We propose an efficient and effective image coding framework that achieves similar R-D performance with lower complexity than the state of the art.
arXiv Detail & Related papers (2022-06-21T09:34:29Z) - Robust Single-Image Super-Resolution via CNNs and TV-TV Minimization [7.538482310185135]
Single-image super-resolution is the process of increasing the resolution of an image, obtaining a high-resolution (HR) image from a low-resolution (LR) one.
By leveraging large training datasets, convolutional neural networks (CNNs) currently achieve the state-of-the-art performance in this task.
We propose to post-process the CNN outputs with an optimization problem that we call TV-TV minimization, which enforces consistency with the low-resolution input.
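The post-processing problem can be written as follows, where $x$ is the recovered HR image, $w$ the CNN output, $b$ the LR input, $A$ the downsampling operator, and $\beta$ a trade-off weight; this follows the TV-TV minimization formulation as I read it from this line of work:

```latex
\min_{x} \;\; \|x\|_{\mathrm{TV}} \;+\; \beta \, \|x - w\|_{\mathrm{TV}}
\qquad \text{subject to} \qquad A x = b
```

The equality constraint is exactly the consistency requirement: downsampling the output must reproduce the LR input, while the two TV terms keep the result both natural and close to the CNN prediction.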
arXiv Detail & Related papers (2020-04-02T07:06:55Z)