DPCS: Path Tracing-Based Differentiable Projector-Camera Systems
- URL: http://arxiv.org/abs/2503.12174v1
- Date: Sat, 15 Mar 2025 15:31:18 GMT
- Title: DPCS: Path Tracing-Based Differentiable Projector-Camera Systems
- Authors: Jijiang Li, Qingyue Deng, Haibin Ling, Bingyao Huang
- Abstract summary: Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and associated scene parameters of a ProCams. Recent advances use an end-to-end neural network to learn the project-and-capture process. We introduce novel path tracing-based differentiable projector-camera systems (DPCS), offering a differentiable ProCams simulation method.
- Score: 49.69815958689441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and the associated scene parameters of a ProCams, and is crucial for spatial augmented reality (SAR) applications such as ProCams relighting and projector compensation. Recent advances use an end-to-end neural network to learn the project-and-capture process. However, these neural network-based methods often implicitly encapsulate scene parameters, such as surface material, gamma, and white balance, in the network parameters, and are thus less interpretable and hard to apply to novel scene simulation. Moreover, neural networks usually learn indirect illumination implicitly in an image-to-image translation manner, which leads to poor performance in simulating complex projection effects such as soft shadows and interreflections. In this paper, we introduce novel path tracing-based differentiable projector-camera systems (DPCS), offering a differentiable ProCams simulation method that explicitly integrates multi-bounce path tracing. Our DPCS models the physical project-and-capture process using differentiable physically-based rendering (PBR), enabling the scene parameters to be explicitly decoupled and learned from far fewer samples. Moreover, our physically-based method not only enables high-quality downstream ProCams tasks, such as ProCams relighting and projector compensation, but also allows novel scene simulation using the learned scene parameters. In experiments, DPCS demonstrates clear advantages over previous approaches in ProCams simulation: better interpretability, more efficient handling of complex interreflections and shadows, and fewer required training samples.
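The core idea of fitting scene parameters through a differentiable project-and-capture model can be sketched with a toy example. This is a minimal, hypothetical illustration (not the paper's actual renderer or API): a one-parameter "render" maps projector input through a per-surface albedo and a gamma curve, and the unknown albedo is recovered by gradient descent on the photometric loss between rendered and captured images.

```python
import random

def render(proj, albedo, gamma):
    # toy project-and-capture model: camera response = albedo * proj**gamma
    return [albedo * p ** gamma for p in proj]

random.seed(0)
true_albedo, gamma = 0.7, 2.2                     # ground truth, unknown to the optimizer
proj = [random.uniform(0.1, 1.0) for _ in range(256)]   # projector input intensities
capture = render(proj, true_albedo, gamma)              # simulated captured intensities

# analysis-by-synthesis: recover albedo by gradient descent on the L2 photometric loss
albedo, lr = 0.1, 0.5
for _ in range(200):
    pred = render(proj, albedo, gamma)
    # analytic gradient of mean((pred - capture)**2) w.r.t. albedo
    grad = sum(2 * (pr - c) * p ** gamma
               for pr, c, p in zip(pred, capture, proj)) / len(proj)
    albedo -= lr * grad

print(round(albedo, 3))  # converges to the true albedo, 0.7
```

In DPCS the same loop runs through a full differentiable path tracer, so many parameters (material, geometry, camera response, multi-bounce light transport) receive gradients jointly; the explicit decoupling is what makes the learned parameters reusable for novel scene simulation.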
Related papers
- GS-ProCams: Gaussian Splatting-based Projector-Camera Systems [49.69815958689441]
We present GS-ProCams, the first Gaussian Splatting-based framework for projector-camera systems (ProCams). GS-ProCams significantly enhances the efficiency of projection mapping. It is 600 times faster and uses only 1/10 of the GPU memory.
arXiv Detail & Related papers (2024-12-16T13:26:52Z) - Inverse Rendering using Multi-Bounce Path Tracing and Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework. Our method extracts an explicit geometry (triangular mesh) in stage one, and introduces a more realistic physically-based inverse rendering model. Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
arXiv Detail & Related papers (2024-06-24T07:00:57Z) - Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach could synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z) - Neural Projection Mapping Using Reflectance Fields [11.74757574153076]
We introduce a projector into a neural reflectance field, which allows calibrating the projector and enables photorealistic light editing.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
We believe that neural projection mapping opens up the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
arXiv Detail & Related papers (2023-06-11T05:33:10Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z) - Active Exploration for Neural Global Illumination of Variable Scenes [6.591705508311505]
We introduce a novel Active Exploration method using Markov Chain Monte Carlo.
We apply our approach on a neural generator that learns to render novel scene instances.
Our method allows interactive rendering of hard light transport paths.
arXiv Detail & Related papers (2022-03-15T21:45:51Z) - Camera Calibration through Camera Projection Loss [4.36572039512405]
We propose a novel method to predict intrinsic (focal length and principal point offset) parameters using an image pair.
Unlike existing methods, we propose a new representation that incorporates camera model equations as a neural network in a multi-task learning framework.
Our proposed approach achieves better performance with respect to both deep learning-based and traditional methods on 7 out of 10 parameters evaluated.
arXiv Detail & Related papers (2021-10-07T14:03:10Z) - DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over prior art, with promising quality, while being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)