DNF-Intrinsic: Deterministic Noise-Free Diffusion for Indoor Inverse Rendering
- URL: http://arxiv.org/abs/2507.03924v2
- Date: Mon, 14 Jul 2025 05:11:47 GMT
- Title: DNF-Intrinsic: Deterministic Noise-Free Diffusion for Indoor Inverse Rendering
- Authors: Rongjia Zheng, Qing Zhang, Chengjiang Long, Wei-Shi Zheng
- Abstract summary: We present DNF-Intrinsic, a robust yet efficient inverse rendering approach fine-tuned from a pre-trained diffusion model. We show that our method clearly outperforms existing state-of-the-art inverse rendering methods.
- Score: 46.94209097951204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods have shown that pre-trained diffusion models can be fine-tuned to enable generative inverse rendering by learning image-conditioned noise-to-intrinsic mapping. Despite their remarkable progress, they struggle to robustly produce high-quality results as the noise-to-intrinsic paradigm essentially utilizes noisy images with deteriorated structure and appearance for intrinsic prediction, while it is common knowledge that structure and appearance information in an image are crucial for inverse rendering. To address this issue, we present DNF-Intrinsic, a robust yet efficient inverse rendering approach fine-tuned from a pre-trained diffusion model, where we propose to take the source image rather than Gaussian noise as input to directly predict deterministic intrinsic properties via flow matching. Moreover, we design a generative renderer to constrain that the predicted intrinsic properties are physically faithful to the source image. Experiments on both synthetic and real-world datasets show that our method clearly outperforms existing state-of-the-art methods.
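The abstract's core idea, replacing the noise-to-intrinsic mapping with a deterministic source-image-to-intrinsic path trained via flow matching, can be illustrated with a minimal NumPy sketch. The straight-line interpolation and Euler integrator below are standard flow-matching components; the toy arrays and the ground-truth velocity are hypothetical stand-ins for the paper's actual fine-tuned diffusion network and data.

```python
import numpy as np

def fm_pair(source, intrinsic, t):
    """Interpolation point and velocity target for flow-matching training.

    In the noise-to-intrinsic paradigm the path would start from Gaussian
    noise; here, as in the image-to-intrinsic idea, it starts from the
    source image, so every point on the path retains image structure.
    """
    x_t = (1.0 - t) * source + t * intrinsic   # point on the straight path
    v_target = intrinsic - source              # constant velocity along it
    return x_t, v_target

def euler_sample(source, velocity_fn, steps=10):
    """Deterministically integrate dx/dt = v(x, t) from t=0 to t=1."""
    x = source.copy()
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# Toy check: with the ground-truth velocity standing in for a trained
# network, integrating from the source image recovers the intrinsic map.
rng = np.random.default_rng(0)
src = rng.standard_normal((4, 4))      # stand-in "source image"
albedo = rng.standard_normal((4, 4))   # stand-in "intrinsic" target
pred = euler_sample(src, lambda x, t: albedo - src, steps=8)
assert np.allclose(pred, albedo)
```

Because the velocity field is learned as a deterministic function of the source image, sampling needs no noise injection, which is what makes the predicted intrinsics reproducible across runs.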
Related papers
- Diffusion Priors for Variational Likelihood Estimation and Image Denoising [10.548018200066858]
We propose adaptive likelihood estimation and MAP inference during the reverse diffusion process to tackle real-world noise.
Experiments and analyses on diverse real-world datasets demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-10-23T02:52:53Z) - Edge-preserving noise for diffusion models [4.435514696080208]
We present a novel edge-preserving diffusion model that generalizes over existing isotropic models.
We show that our model's generative process converges faster to results that more closely match the target distribution.
Our edge-preserving diffusion process consistently outperforms state-of-the-art baselines in unconditional image generation.
arXiv Detail & Related papers (2024-10-02T13:29:52Z) - NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation [86.7260950382448]
We propose a novel approach to correct noise for image validity, NoiseDiffusion.
NoiseDiffusion performs within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
arXiv Detail & Related papers (2024-03-13T12:32:25Z) - Diffusion Noise Feature: Accurate and Fast Generated Image Detection [28.262273539251172]
Generative models have reached an advanced stage where they can produce remarkably realistic images.
Existing image detectors for generated images encounter challenges such as low accuracy and limited generalization.
This paper addresses this issue by seeking a representation with strong generalization capabilities to enhance the detection of generated images.
arXiv Detail & Related papers (2023-12-05T10:01:11Z) - Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design taking such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z) - Image Embedding for Denoising Generative Models [0.0]
We focus on Denoising Diffusion Implicit Models due to the deterministic nature of their reverse diffusion process.
As a side result of our investigation, we gain a deeper insight into the structure of the latent space of diffusion models.
arXiv Detail & Related papers (2022-12-30T17:56:07Z) - Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.