Water Simulation and Rendering from a Still Photograph
- URL: http://arxiv.org/abs/2210.02553v1
- Date: Wed, 5 Oct 2022 20:47:44 GMT
- Title: Water Simulation and Rendering from a Still Photograph
- Authors: Ryusuke Sugimoto, Mingming He, Jing Liao, Pedro V. Sander
- Abstract summary: We propose an approach to simulate and render realistic water animation from a single still input photograph.
Our approach creates realistic results with no user intervention for a wide variety of natural scenes.
- Score: 20.631819299595527
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose an approach to simulate and render realistic water animation from
a single still input photograph. We first segment the water surface, estimate
rendering parameters, and compute water reflection textures with a combination
of neural networks and traditional optimization techniques. Then we propose an
image-based screen space local reflection model to render the water surface
overlaid on the input image and generate real-time water animation. Our
approach creates realistic results with no user intervention for a wide variety
of natural scenes containing large bodies of water with different lighting and
water surface conditions. Since our method provides a 3D representation of the
water surface, it naturally enables direct editing of water parameters and also
supports interactive applications like adding synthetic objects to the scene.
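The pipeline above overlays a rendered reflection on the input photo. As a rough illustration of how an image-based planar water reflection can be composited onto a photograph, here is a toy sketch (a stand-in for intuition only, not the paper's screen-space local reflection model; the function name and all parameters are hypothetical):

```python
import numpy as np

def render_water_reflection(image, water_top, t, ripple_amp=3.0, ripple_freq=0.15):
    """Overlay a simple animated planar reflection on rows below `water_top`.

    Toy illustration of image-based water reflection: each water pixel samples
    the scene pixel mirrored across the waterline, offset by a time-varying
    ripple term, then blends it with the original pixel.
    """
    h, w, _ = image.shape
    out = image.copy()
    for y in range(water_top, h):
        d = y - water_top                               # depth below waterline
        ripple = ripple_amp * np.sin(ripple_freq * d + t)
        src = int(round(water_top - d + ripple))        # mirrored source row
        src = min(max(src, 0), water_top - 1)           # clamp to scene region
        out[y] = 0.6 * image[src] + 0.4 * image[y]      # blend reflection in
    return out
```

Animating `t` over successive frames makes the ripple displacement drift, giving the reflection a simple real-time motion.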
Related papers
- DerainNeRF: 3D Scene Estimation with Adhesive Waterdrop Removal [12.099886168325012]
We propose a method to reconstruct the clear 3D scene implicitly from multi-view images degraded by waterdrops.
Our method exploits an attention network to predict the locations of waterdrops and then trains a Neural Radiance Field to recover the 3D scene implicitly.
By leveraging the strong scene representation capabilities of NeRF, our method can render high-quality novel-view images with waterdrops removed.
arXiv Detail & Related papers (2024-03-29T06:58:57Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator and apply a hybrid inversion scheme in which a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance state-of-the-art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
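Physics-based underwater image formation is commonly modeled as attenuation of the clear scene plus backscatter. Below is a minimal sketch of that standard formulation (WaterNeRF's exact parameterization may differ; the function and variable names are illustrative):

```python
import numpy as np

def underwater_image(clear, depth, beta, backscatter):
    """Standard attenuation-plus-backscatter model of underwater imaging:
        I = J * exp(-beta * d) + B * (1 - exp(-beta * d))
    where J is the clear scene, d the per-pixel range, beta the per-channel
    attenuation coefficient, and B the per-channel backscatter color.
    """
    t = np.exp(-beta * depth[..., None])   # per-pixel, per-channel transmission
    return clear * t + backscatter * (1.0 - t)
```

At zero depth the observed image equals the clear scene; at large depth it converges to the backscatter color, which is what makes joint depth and color estimation possible.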
arXiv Detail & Related papers (2022-09-27T00:53:26Z)
- Simulating Fluids in Real-World Still Images [39.93838010016248]
In this work, we tackle the problem of real-world fluid animation from a still image.
The key of our system is a surface-based layered representation deriving from video decomposition.
In addition, we introduce surface-only fluid simulation, a 2.5D version of fluid calculation, as a replacement for motion estimation.
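A surface-only, 2.5D fluid calculation can be loosely pictured as a height-field wave simulation over a 2D grid; below is a minimal sketch under that assumption (not the paper's actual solver; the function name and constants are illustrative):

```python
import numpy as np

def step_heightfield(h, v, c=0.3, damping=0.99):
    """One explicit step of a 2.5D height-field wave simulation.

    Each cell's height accelerates toward the average of its four neighbors
    (a discrete Laplacian with periodic boundaries), with damping for
    stability. A toy stand-in for surface-only fluid solvers.
    """
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    v = damping * (v + c * lap)   # velocity update from curvature
    return h + v, v               # integrate height, return new state
```

Starting from a single raised cell, repeated steps spread a circular ripple outward while the total height stays conserved under the periodic boundary.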
arXiv Detail & Related papers (2022-04-24T18:47:15Z)
- Underwater Light Field Retention: Neural Rendering for Underwater Imaging [6.22867695581195]
Underwater Image Rendering aims to generate a true-to-life underwater image from a given clean one.
We propose a neural rendering method for underwater imaging, dubbed UWNR (Underwater Neural Rendering).
arXiv Detail & Related papers (2022-03-21T14:22:05Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly-accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.