ReFIR: Grounding Large Restoration Models with Retrieval Augmentation
- URL: http://arxiv.org/abs/2410.05601v1
- Date: Tue, 8 Oct 2024 01:27:45 GMT
- Title: ReFIR: Grounding Large Restoration Models with Retrieval Augmentation
- Authors: Hang Guo, Tao Dai, Zhihao Ouyang, Taolin Zhang, Yaohua Zha, Bin Chen, Shu-tao Xia
- Abstract summary: We propose a solution called the Retrieval-augmented Framework for Image Restoration (ReFIR).
Our ReFIR incorporates retrieved images as external knowledge to extend the knowledge boundary of existing LRMs.
Our experiments demonstrate that ReFIR can achieve not only high-fidelity but also realistic restoration results.
- Score: 52.00990126884406
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in diffusion-based Large Restoration Models (LRMs) have significantly improved photo-realistic image restoration by leveraging the internal knowledge embedded within model weights. However, existing LRMs often suffer from the hallucination dilemma, i.e., producing incorrect contents or textures when dealing with severe degradations, due to their heavy reliance on limited internal knowledge. In this paper, we propose an orthogonal solution called the Retrieval-augmented Framework for Image Restoration (ReFIR), which incorporates retrieved images as external knowledge to extend the knowledge boundary of existing LRMs in generating details faithful to the original scene. Specifically, we first introduce the nearest neighbor lookup to retrieve content-relevant high-quality images as reference, after which we propose the cross-image injection to modify existing LRMs to utilize high-quality textures from retrieved images. Thanks to the additional external knowledge, our ReFIR can well handle the hallucination challenge and produce faithful results. Extensive experiments demonstrate that ReFIR can achieve not only high-fidelity but also realistic restoration results. Importantly, our ReFIR requires no training and is adaptable to various LRMs.
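The nearest neighbor lookup described in the abstract can be sketched as a simple feature-space retrieval. This is an illustrative sketch only, not the paper's implementation: the function name `retrieve_references`, the cosine-similarity metric, and the feature source (e.g. embeddings from a pretrained image encoder) are all assumptions.

```python
import numpy as np

def retrieve_references(query_feat, gallery_feats, k=3):
    """Return indices of the k gallery images most similar to the query.

    query_feat: (d,) feature vector of the degraded input image;
    gallery_feats: (n, d) features of the high-quality reference pool.
    Similarity is measured as cosine similarity between unit-normalized
    feature vectors.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                  # (n,) cosine similarities to the query
    return np.argsort(-sims)[:k]  # indices of the top-k nearest neighbors
```

The retrieved top-k images would then serve as the external references whose textures the cross-image injection step transfers into the restoration process.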
Related papers
- Boosting HDR Image Reconstruction via Semantic Knowledge Transfer [45.738735520776004]
Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions.
These priors are typically extracted from sRGB Standard Dynamic Range (SDR) images.
We propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction methods.
arXiv Detail & Related papers (2025-03-19T16:01:27Z)
- Decouple to Reconstruct: High Quality UHD Restoration via Active Feature Disentanglement and Reversible Fusion [77.08942160610478]
Ultra-high-definition (UHD) image restoration often faces computational bottlenecks and information loss due to its extremely high resolution.
We propose a Controlled Differential Disentangled VAE that discards easily recoverable background information while encoding more difficult-to-recover degraded information into latent space.
Our method effectively alleviates the information loss problem in VAE models while ensuring computational efficiency, significantly improves the quality of UHD image restoration, and achieves state-of-the-art results in six UHD restoration tasks with only 1M parameters.
arXiv Detail & Related papers (2025-03-17T02:55:18Z)
- RAP-SR: RestorAtion Prior Enhancement in Diffusion Models for Realistic Image Super-Resolution [36.137383171027615]
We introduce RAP-SR, a restoration prior enhancement approach in pretrained diffusion models for Real-SR.
First, we develop the High-Fidelity Aesthetic Image dataset (HFAID), curated through a Quality-Driven Aesthetic Image Selection Pipeline (QDAISP).
Second, we propose the Restoration Priors Enhancement Framework, which includes Restoration Priors Refinement (RPR) and Restoration-Oriented Prompt Optimization (ROPO) modules.
arXiv Detail & Related papers (2024-12-10T03:17:38Z)
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- SSP-IR: Semantic and Structure Priors for Diffusion-based Realistic Image Restoration [20.873676111265656]
SSP-IR aims to fully exploit semantic and structure priors from low-quality images.
Our method outperforms other state-of-the-art methods overall on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-07-04T04:55:14Z)
- MeshLRM: Large Reconstruction Model for High-Quality Mesh [52.71164862539288]
MeshLRM can reconstruct a high-quality mesh from merely four input images in less than one second.
Our approach achieves state-of-the-art mesh reconstruction from sparse-view inputs and also allows for many downstream applications.
arXiv Detail & Related papers (2024-04-18T17:59:41Z)
- Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models [14.25759541950917]
This work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR).
Our base diffusion model is the image restoration SDE (IR-SDE)
arXiv Detail & Related papers (2024-04-15T12:34:21Z)
- Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild [57.06779516541574]
SUPIR (Scaling-UP Image Restoration) is a groundbreaking image restoration method that harnesses the generative prior and the power of model scaling.
We collect a dataset comprising 20 million high-resolution, high-quality images for model training, each enriched with descriptive text annotations.
arXiv Detail & Related papers (2024-01-24T17:58:07Z)
- SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-12-18T17:02:30Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RG
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that could handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
arXiv Detail & Related papers (2023-08-29T07:11:52Z)
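The two-stage decomposition described for DiffBIR can be sketched as a simple pipeline. This is a minimal illustrative sketch under stated assumptions: `remove_degradation` and `regenerate_details` are hypothetical stand-ins for DiffBIR's restoration module and IRControlNet, and the dict-based image representation is purely for illustration.

```python
def remove_degradation(image):
    # Stage 1 stand-in: strip image-independent degradations
    # (noise, blur, compression) to obtain a clean but smooth estimate.
    return {**image, "degraded": False}

def regenerate_details(image):
    # Stage 2 stand-in: a generative prior (a latent diffusion model
    # in DiffBIR) regenerates the high-frequency detail lost to degradation.
    return {**image, "details": "regenerated"}

def blind_restore(image):
    """Two-stage blind restoration: degradation removal first,
    then information regeneration, matching the decoupling above."""
    return regenerate_details(remove_degradation(image))
```

The design choice this illustrates is the decoupling itself: stage 1 only needs to be robust to unknown degradations, while stage 2 only needs a strong generative prior, so each stage can be trained or swapped independently.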
This list is automatically generated from the titles and abstracts of the papers on this site.