RenderFlow: Single-Step Neural Rendering via Flow Matching
- URL: http://arxiv.org/abs/2601.06928v1
- Date: Sun, 11 Jan 2026 14:28:46 GMT
- Title: RenderFlow: Single-Step Neural Rendering via Flow Matching
- Authors: Shenghao Zhang, Runtao Liu, Christopher Schroers, Yang Zhang,
- Abstract summary: We present a novel end-to-end, deterministic, single-step neural rendering framework, RenderFlow, built upon a flow matching paradigm. Our method significantly accelerates the rendering process and enhances both the physical plausibility and overall visual quality of the output. The resulting pipeline achieves near real-time performance with photorealistic rendering quality, effectively bridging the gap between the efficiency of modern generative models and the precision of traditional physically based rendering.
- Score: 17.56739408578129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional physically based rendering (PBR) pipelines generate photorealistic images through computationally intensive light transport simulations. Although recent deep learning approaches leverage diffusion model priors with geometry buffers (G-buffers) to produce visually compelling results without explicit scene geometry or light simulation, they remain constrained by two major limitations. First, the iterative nature of the diffusion process introduces substantial latency. Second, the inherent stochasticity of these generative models compromises physical accuracy and temporal consistency. In response to these challenges, we propose a novel, end-to-end, deterministic, single-step neural rendering framework, RenderFlow, built upon a flow matching paradigm. To further strengthen both rendering quality and generalization, we propose an efficient and effective module for sparse keyframe guidance. Our method significantly accelerates the rendering process and, by optionally incorporating sparsely rendered keyframes as guidance, enhances both the physical plausibility and overall visual quality of the output. The resulting pipeline achieves near real-time performance with photorealistic rendering quality, effectively bridging the gap between the efficiency of modern generative models and the precision of traditional physically based rendering. Furthermore, we demonstrate the versatility of our framework by introducing a lightweight, adapter-based module that efficiently repurposes the pretrained forward model for the inverse rendering task of intrinsic decomposition.
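To make the mechanism concrete, below is a minimal, self-contained sketch of conditional flow matching with single-step Euler inference, in the spirit of the abstract. RenderFlow's actual backbone, G-buffer layout, and training details are not given in this summary, so the tiny convolutional `VelocityNet`, the channel counts, and the choice of source distribution are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy stand-in for the rendering backbone: predicts a velocity field
    from the current state x_t, the flow time t, and G-buffer conditioning
    (e.g. albedo, normals, depth packed into `gbuf_ch` channels)."""
    def __init__(self, img_ch=3, gbuf_ch=9, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + gbuf_ch + 1, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, img_ch, 3, padding=1),
        )

    def forward(self, x_t, t, gbuffer):
        # Broadcast the scalar flow time to a per-pixel channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, gbuffer, t_map], dim=1))

def flow_matching_loss(model, image, gbuffer):
    """Rectified-flow objective: regress the constant velocity x1 - x0
    along the straight path x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(image)
    t = torch.rand(image.shape[0], device=image.device).view(-1, 1, 1, 1)
    x_t = (1 - t) * x0 + t * image
    v_pred = model(x_t, t.flatten(), gbuffer)
    return ((v_pred - (image - x0)) ** 2).mean()

@torch.no_grad()
def render_single_step(model, gbuffer, seed=0):
    """One Euler step across the whole interval [0, 1]: x1 = x0 + v(x0, 0).
    Fixing the source sample via a seed keeps inference deterministic,
    sidestepping the sampler stochasticity the abstract criticizes."""
    b, _, h, w = gbuffer.shape
    g = torch.Generator().manual_seed(seed)
    x0 = torch.randn(b, 3, h, w, generator=g)
    return x0 + model(x0, torch.zeros(b), gbuffer)
```

The sparse keyframe guidance mentioned in the abstract would enter as additional conditioning channels alongside the G-buffer; it is omitted here to keep the sketch short.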
Related papers
- DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer [62.18680935878919]
We introduce DiffusionHarmonizer, an online generative enhancement framework that transforms renderings into temporally consistent outputs.
At its core is a single-step temporally-conditioned enhancer capable of running in online simulators on a single GPU.
arXiv Detail & Related papers (2026-02-27T15:35:30Z)
- A Convolutional Neural Deferred Shader for Physics Based Rendering [9.933770503395117]
Recent advances in neural rendering have achieved impressive results on photorealistic shading and relighting.
This paper introduces pbnds+: a novel physics-based neural deferred shading pipeline using convolutional neural networks to reduce the parameter count.
arXiv Detail & Related papers (2025-12-22T16:16:13Z)
- FLOWING: Implicit Neural Flows for Structure-Preserving Morphing [5.498230316788923]
FLOWING (FLOW morphING) is a framework that recasts warping as the construction of a differential vector flow.
We show that FLOWING achieves state-of-the-art morphing quality with faster convergence.
arXiv Detail & Related papers (2025-10-10T16:50:23Z)
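Since the FLOWING summary stays abstract, the following hedged sketch shows the core move it describes: parameterizing a warp as the flow of a learned velocity field and obtaining the morph by numerically integrating that field. The coordinate MLP and the forward-Euler integrator are illustrative choices, not the paper's actual parameterization.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """v(x, t): R^2 x [0, 1] -> R^2, a smooth vector field over coordinates."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x, t):
        t_col = torch.full_like(x[:, :1], t)
        return self.mlp(torch.cat([x, t_col], dim=1))

def warp_points(field, points, steps=16):
    """Forward-Euler integration of dx/dt = v(x, t) from t=0 to t=1.
    Because the warp is the flow of a vector field, it can be inverted by
    integrating backward in time, which helps preserve structure."""
    x, dt = points, 1.0 / steps
    for k in range(steps):
        x = x + dt * field(x, k * dt)
    return x

# Usage: warp a grid of (x, y) coordinates with an (untrained) field.
field = VelocityField()
grid = torch.rand(1024, 2)           # points in [0, 1]^2
warped = warp_points(field, grid)
```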
- FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring [33.809728459395785]
We introduce FideDiff, a novel single-step diffusion model designed for high-fidelity deblurring.
We reformulate motion deblurring as a diffusion-like process where each timestep represents a progressively blurred image.
By reconstructing training data with matched blur trajectories, the model learns temporal consistency, enabling accurate one-step deblurring.
arXiv Detail & Related papers (2025-10-02T03:44:45Z)
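The reformulation FideDiff describes is easy to sketch: treat each "timestep" as a progressively blurred image rather than a progressively noised one, and train the model to map any state back to the sharp image in one step. FideDiff builds trajectories from matched motion blur; the Gaussian blur and the `denoiser` call below are simplified, hypothetical stand-ins.

```python
import torch
import torchvision.transforms.functional as TF

def blur_trajectory(sharp, num_steps=8, max_sigma=4.0):
    """Return [x_0, ..., x_T]: x_0 is sharp, x_T is the most blurred state."""
    traj = [sharp]
    for k in range(1, num_steps + 1):
        sigma = max_sigma * k / num_steps
        ksize = int(2 * round(3 * sigma) + 1)         # odd kernel size
        traj.append(TF.gaussian_blur(sharp, ksize, [sigma, sigma]))
    return traj

def training_pairs(sharp, num_steps=8):
    """Each blurred state x_k, paired with its timestep k, regresses the
    sharp x_0 directly, which is what makes one-step inference possible."""
    traj = blur_trajectory(sharp, num_steps)
    return [(traj[k], k, traj[0]) for k in range(1, num_steps + 1)]

# One-step inference: the observed blurry photo plays the role of some x_k,
# and a single forward pass maps it back to x_0.
# deblurred = denoiser(blurry_input, estimated_timestep)
```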
- Improving Progressive Generation with Decomposable Flow Matching [50.63174319509629]
Decomposable Flow Matching (DFM) is a simple and effective framework for the progressive generation of visual media.
On ImageNet-1k 512px, DFM achieves a 35.2% improvement in FDD score over the base architecture and a 26.4% improvement over the best-performing baseline.
arXiv Detail & Related papers (2025-06-24T17:58:02Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step.
To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration.
Our method achieves strong performance on both full-reference and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models [83.28670336340608]
We introduce DiffusionRenderer, a neural approach that addresses the dual problem of inverse and forward rendering.
Our model enables practical applications from a single video input, including relighting, material editing, and realistic object insertion.
arXiv Detail & Related papers (2025-01-30T18:59:11Z)
- Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream Diffusion [14.779121995147056]
Rendering and inverse rendering are pivotal tasks in computer vision and graphics.
We propose a data-driven method that jointly models rendering and inverse rendering as two conditional generation tasks.
We will open-source our training and inference code to the public, fostering further research and development in this area.
arXiv Detail & Related papers (2024-12-19T16:57:45Z)
- Disentangled Motion Modeling for Video Frame Interpolation [40.83962594702387]
Video Frame Interpolation (VFI) aims to synthesize intermediate frames between existing frames to enhance visual smoothness and quality.
We introduce Disentangled Motion Modeling (MoMo), a diffusion-based approach for VFI that enhances visual quality by focusing on intermediate motion modeling.
arXiv Detail & Related papers (2024-06-25T03:50:20Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
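The DWT summary above does not specify the weighting, so the sketch below shows one plausible reading: biasing self-attention logits by pairwise spatial distance so that nearby patches interact more strongly. The learnable decay and the block layout are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DistanceWeightedAttention(nn.Module):
    """Self-attention over a grid of patch tokens with a distance penalty
    subtracted from the logits, so closer patches attend more strongly."""
    def __init__(self, dim, grid_h, grid_w):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.decay = nn.Parameter(torch.tensor(1.0))  # learned decay rate
        ys, xs = torch.meshgrid(torch.arange(grid_h),
                                torch.arange(grid_w), indexing="ij")
        coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()
        self.register_buffer("dist", torch.cdist(coords, coords))

    def forward(self, tokens):                        # (B, H*W, dim)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) * self.scale
        logits = logits - self.decay.abs() * self.dist  # penalize far pairs
        return self.proj(torch.softmax(logits, dim=-1) @ v)

# Usage on an 8x8 grid of 64-dim patch tokens:
attn = DistanceWeightedAttention(dim=64, grid_h=8, grid_w=8)
out = attn(torch.randn(2, 64, 64))                    # (B, 64 tokens, 64 dim)
```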
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports such effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)