Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data
- URL: http://arxiv.org/abs/2206.06577v1
- Date: Tue, 14 Jun 2022 03:38:08 GMT
- Title: Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data
- Authors: Mengyu Chu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel,
Christian Theobalt, Rhaleb Zayer
- Abstract summary: High-fidelity reconstruction of fluids from sparse multiview RGB videos remains a formidable challenge.
Existing solutions either assume knowledge of obstacles and lighting, or only focus on simple fluid scenes without obstacles or complex lighting.
We present the first method to reconstruct dynamic fluids by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization.
- Score: 73.8970871148949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-fidelity reconstruction of fluids from sparse multiview RGB videos
remains a formidable challenge due to the complexity of the underlying physics
as well as complex occlusion and lighting in captures. Existing solutions
either assume knowledge of obstacles and lighting, or only focus on simple
fluid scenes without obstacles or complex lighting, and thus are unsuitable for
real-world scenes with unknown lighting or arbitrary obstacles. We present the
first method to reconstruct dynamic fluids by leveraging the governing physics
(i.e., the Navier-Stokes equations) in an end-to-end optimization from sparse videos
without taking lighting conditions, geometry information, or boundary
conditions as input. We provide a continuous spatio-temporal scene
representation using neural networks as the ansatz for the density and velocity
solution functions of the fluid, as well as for the radiance field of static objects.
With a hybrid architecture that separates static and dynamic contents, fluid
interactions with static obstacles are reconstructed for the first time without
additional geometry input or human labeling. By augmenting time-varying neural
radiance fields with physics-informed deep learning, our method benefits from
the supervision of images and physical priors. To achieve robust optimization
from sparse views, we introduce a layer-by-layer growing strategy to
progressively increase the network capacity. Using progressively growing models
with a new regularization term, we manage to disentangle density-color
ambiguity in radiance fields without overfitting. A pretrained
density-to-velocity fluid model is additionally leveraged as a data prior to
avoid suboptimal velocities that underestimate vorticity while trivially
fulfilling the physical equations. Our method exhibits high-quality results with relaxed
constraints and strong flexibility on a representative set of synthetic and
real flow captures.
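To make the physics-informed optimization concrete, here is a minimal PyTorch sketch (not the authors' code: the network size, the sampling, and the omission of the momentum residual and the rendering term are all simplifying assumptions) of how a neural ansatz for density and velocity can be supervised by PDE residuals via automatic differentiation:

```python
# Minimal sketch of physics-informed supervision for a neural fluid field.
# NOT the paper's code: sizes, sampling, and loss weights are illustrative
# assumptions; the momentum residual and the image rendering loss are omitted.
import torch
import torch.nn as nn

class FluidField(nn.Module):
    """MLP ansatz mapping a space-time point (x, y, z, t) to (density, velocity)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 velocity channels
        )

    def forward(self, xyzt):
        out = self.net(xyzt)
        return out[..., :1], out[..., 1:]

def physics_loss(model, xyzt):
    """Penalize residuals of density transport and incompressibility."""
    xyzt = xyzt.clone().requires_grad_(True)
    d, u = model(xyzt)
    # grad_d[..., :3] is the spatial gradient, grad_d[..., 3] is d(density)/dt
    grad_d = torch.autograd.grad(d.sum(), xyzt, create_graph=True)[0]
    # Transport residual: d_t + u . grad(d) = 0 (density advected by the flow)
    transport = grad_d[..., 3] + (u * grad_d[..., :3]).sum(-1)
    # Divergence-free residual: sum_i du_i/dx_i = 0 (incompressible flow)
    div = sum(
        torch.autograd.grad(u[..., i].sum(), xyzt, create_graph=True)[0][..., i]
        for i in range(3)
    )
    return transport.pow(2).mean() + div.pow(2).mean()

model = FluidField()
samples = torch.rand(1024, 4)        # random space-time points in the unit domain
loss = physics_loss(model, samples)  # would be added to a rendering loss
loss.backward()
```

In the full method, such physics residuals are balanced against the image supervision, and the layer-by-layer growing strategy progressively adds capacity so this joint optimization stays stable under sparse views.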
Related papers
- Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems [49.11170948406405]
The state of the art in automatic parameter estimation from video relies on training supervised deep networks on large datasets.
We propose a method to estimate the physical parameters of any known, continuous governing equation from single videos.
arXiv Detail & Related papers (2024-10-02T09:44:54Z)
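The entry above gives no implementation detail; as a generic, hedged illustration of estimating a parameter of a known governing equation by gradient descent, the sketch below (the equation, integrator, and data are all assumptions, not the paper's method) differentiates through a simple pendulum simulator:

```python
# Hedged illustration (not the paper's method): recover a physical parameter
# of a known ODE from an observed trajectory by differentiating through a
# simple explicit-Euler simulator.
import torch

def simulate(omega2, theta0=1.0, dt=0.01, steps=200):
    """Integrate theta'' = -omega2 * sin(theta); return the angle trajectory."""
    theta, vel = torch.tensor(theta0), torch.tensor(0.0)
    traj = []
    for _ in range(steps):
        vel = vel - omega2 * torch.sin(theta) * dt
        theta = theta + vel * dt
        traj.append(theta)
    return torch.stack(traj)

observed = simulate(torch.tensor(9.81)).detach()  # synthetic "video-derived" angles
omega2 = torch.tensor(5.0, requires_grad=True)    # unknown physical parameter
opt = torch.optim.Adam([omega2], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    loss = (simulate(omega2) - observed).pow(2).mean()
    loss.backward()
    opt.step()
print(omega2.item())  # should approach the true value of 9.81
```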
- Physics-Informed Learning of Characteristic Trajectories for Smoke Reconstruction [17.634226193457277]
Existing physics-informed neural networks emphasize short-term physics constraints, leaving the proper preservation of long-term conservation less explored.
We introduce Neural Characteristic Trajectory Fields, a novel representation utilizing Eulerian neural fields to implicitly model Lagrangian fluid trajectories.
Building on the representation, we propose physics-informed trajectory learning and integration into NeRF-based scene reconstruction.
arXiv Detail & Related papers (2024-07-12T20:19:41Z)
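A hedged reading of the trajectory-field representation described in the entry above (the architecture and losses here are assumptions, not the paper's design): an Eulerian network maps a start position and two times to the particle position along the Lagrangian trajectory, constrained to be consistent with a velocity field:

```python
# Hedged sketch of a "trajectory field": NOT the paper's design; the network,
# losses, and placeholder velocity field are illustrative assumptions.
import torch
import torch.nn as nn

traj = nn.Sequential(              # T(x, t0, t1) -> particle position at t1
    nn.Linear(5, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, 3),
)

def velocity_fn(p, t):
    # Placeholder analytic flow (solid rotation about z), standing in for the
    # reconstructed velocity field.
    return torch.stack([-p[:, 1], p[:, 0], torch.zeros_like(p[:, 0])], dim=-1)

def trajectory_losses(n=512):
    x, t0 = torch.rand(n, 3), torch.rand(n, 1)
    t1 = torch.rand(n, 1).requires_grad_(True)
    pos = traj(torch.cat([x, t0, t1], dim=-1))
    # Identity: querying a particle at its own start time returns x.
    loss_id = (traj(torch.cat([x, t0, t0], dim=-1)) - x).pow(2).mean()
    # Physics: the trajectory's time derivative matches the velocity field.
    dpos_dt = torch.stack([
        torch.autograd.grad(pos[:, i].sum(), t1, create_graph=True)[0][:, 0]
        for i in range(3)
    ], dim=-1)
    loss_phys = (dpos_dt - velocity_fn(pos, t1)).pow(2).mean()
    return loss_id + loss_phys

loss = trajectory_losses()
loss.backward()
```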
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling [43.246536947828844]
We propose a framework that factorizes time and space by formulating a scene as a composition of bandlimited, high-dimensional signals.
We demonstrate compelling results across complex dynamic scenes that involve changes in lighting, texture and long-range dynamics.
arXiv Detail & Related papers (2023-02-27T06:40:32Z)
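One plausible, hedged reading of the band-limited space-time factorization described in the entry above (the Fourier basis and coefficient network are assumptions, not BLiRF's exact formulation): spatial coefficient fields modulate a truncated temporal basis, capping the temporal frequency content at K:

```python
# Hedged sketch of a band-limited space-time factorization: spatial coefficient
# fields weight a truncated Fourier basis in time. All design choices here are
# illustrative assumptions.
import math
import torch
import torch.nn as nn

class BandlimitedField(nn.Module):
    def __init__(self, k_max=4, out_dim=4, hidden=128):
        super().__init__()
        self.k_max, self.out_dim = k_max, out_dim
        # One coefficient vector per basis function: DC + K cosine + K sine terms.
        self.coeffs = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, (2 * k_max + 1) * out_dim),
        )

    def forward(self, x, t):
        # x: (n, 3) positions, t: (n, 1) times in [0, 1]
        c = self.coeffs(x).view(-1, 2 * self.k_max + 1, self.out_dim)
        k = torch.arange(1, self.k_max + 1, dtype=t.dtype)
        phase = 2 * math.pi * t * k                       # (n, K)
        basis = torch.cat(
            [torch.ones_like(t), torch.cos(phase), torch.sin(phase)], dim=-1
        )                                                 # (n, 2K+1)
        return torch.einsum("nk,nkd->nd", basis, c)

field = BandlimitedField()
feat = field(torch.rand(8, 3), torch.rand(8, 1))  # (8, 4), e.g. density + color
```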
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing rendering performance similar to or even better than that of previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
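A hedged sketch in the spirit of the time-aware voxel features described in the entry above (grid resolution, decoder, and time conditioning are assumptions, not TiNeuVox's exact design): features come from a trilinearly interpolated voxel grid and are decoded together with the query time by a tiny MLP:

```python
# Hedged sketch of time-aware voxel features; all sizes and the time
# conditioning are illustrative assumptions, not TiNeuVox's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareVoxels(nn.Module):
    def __init__(self, res=64, feat=8, hidden=64):
        super().__init__()
        self.grid = nn.Parameter(torch.zeros(1, feat, res, res, res))
        self.mlp = nn.Sequential(
            nn.Linear(feat + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, 4),  # density + RGB
        )

    def forward(self, x, t):
        # x: (n, 3) in [-1, 1]^3, t: (n, 1) in [0, 1]
        pts = x.view(1, -1, 1, 1, 3)                     # grid_sample layout
        f = F.grid_sample(self.grid, pts, align_corners=True)  # trilinear lookup
        f = f.view(self.grid.shape[1], -1).t()           # (n, feat)
        return self.mlp(torch.cat([f, t], dim=-1))

model = TimeAwareVoxels()
out = model(torch.rand(16, 3) * 2 - 1, torch.rand(16, 1))  # (16, 4)
```

The speed claim in the entry plausibly follows from this design choice: most capacity lives in the directly optimizable voxel grid rather than in a deep MLP.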
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via neural feature rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Real-time Neural Radiance Caching for Path Tracing [67.46991813306708]
We present a real-time neural radiance caching method for path-traced global illumination.
Our system is designed to handle fully dynamic scenes, and makes no assumptions about the lighting, geometry, or materials.
We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios.
arXiv Detail & Related papers (2021-06-23T13:09:58Z)
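A hedged sketch of the caching idea described in the entry above (the cache inputs, network size, and training loop are assumptions, not the paper's system): a small MLP is fitted online to noisy path-traced radiance estimates and can then be queried to terminate paths early:

```python
# Hedged sketch of a neural radiance cache trained online; NOT the paper's
# system. Inputs, sizes, and the update rule are illustrative assumptions.
import torch
import torch.nn as nn

cache = nn.Sequential(            # (position xyz, direction xyz) -> RGB radiance
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def train_step(positions, directions, noisy_radiance):
    """One online update: fit the cache to this frame's path-traced samples."""
    opt.zero_grad()
    pred = cache(torch.cat([positions, directions], dim=-1))
    loss = (pred - noisy_radiance).pow(2).mean()  # noisy but unbiased targets
    loss.backward()
    opt.step()
    return loss.item()

# Demo with random stand-in samples; a renderer would supply real hit data.
train_step(torch.rand(256, 3), torch.rand(256, 3), torch.rand(256, 3))
# At render time, a path would be terminated early into the cache:
# radiance ~= cache(torch.cat([hit_position, view_direction], dim=-1))
```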
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this automatically generated content and is not responsible for any consequences of its use.