Water Simulation and Rendering from a Still Photograph
- URL: http://arxiv.org/abs/2210.02553v1
- Date: Wed, 5 Oct 2022 20:47:44 GMT
- Title: Water Simulation and Rendering from a Still Photograph
- Authors: Ryusuke Sugimoto, Mingming He, Jing Liao, Pedro V. Sander
- Abstract summary: We propose an approach to simulate and render realistic water animation from a single still input photograph.
Our approach creates realistic results with no user intervention for a wide variety of natural scenes.
- Score: 20.631819299595527
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose an approach to simulate and render realistic water animation from
a single still input photograph. We first segment the water surface, estimate
rendering parameters, and compute water reflection textures with a combination
of neural networks and traditional optimization techniques. Then we propose an
image-based screen space local reflection model to render the water surface
overlaid on the input image and generate real-time water animation. Our
approach creates realistic results with no user intervention for a wide variety
of natural scenes containing large bodies of water with different lighting and
water surface conditions. Since our method provides a 3D representation of the
water surface, it naturally enables direct editing of water parameters and also
supports interactive applications like adding synthetic objects to the scene.
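
To make the rendering step concrete, here is a minimal sketch of an image-based screen-space reflection pass of the kind the abstract describes: water pixels look up a mirrored scene color about an assumed waterline, perturbed by animated surface normals. This is an illustrative assumption, not the authors' implementation; the function name, the single-row waterline, and the 20-pixel perturbation scale are all hypothetical.

```python
# Hypothetical sketch of image-based screen-space water reflection;
# NOT the paper's code. Mirrors scene pixels about an assumed waterline
# and perturbs the lookup with animated normals to fake ripples.
import numpy as np

def render_water_frame(image, water_mask, horizon_row, normals, reflectivity=0.6):
    """image: (H, W, 3) floats in [0, 1]; water_mask: (H, W) bool;
    horizon_row: waterline row used as the mirror axis;
    normals: (H, W, 2) animated surface-normal xy offsets for this frame."""
    H, W, _ = image.shape
    out = image.copy()
    rows, cols = np.nonzero(water_mask)

    # Mirror about the waterline, then offset by the animated normal
    # (the screen-space stand-in for a local reflection ray).
    src_r = 2 * horizon_row - rows + (normals[rows, cols, 1] * 20).astype(int)
    src_c = cols + (normals[rows, cols, 0] * 20).astype(int)
    src_r = np.clip(src_r, 0, H - 1)
    src_c = np.clip(src_c, 0, W - 1)

    # Blend the reflected color with the original water color.
    out[rows, cols] = (reflectivity * image[src_r, src_c]
                       + (1.0 - reflectivity) * image[rows, cols])
    return out
```

Calling this once per frame, with normals sampled from a scrolling wave texture, gives a crude real-time loop; the actual method additionally estimates per-scene rendering parameters and reflection textures, and exposes them for editing.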
Related papers
- AquaFuse: Waterbody Fusion for Physics Guided View Synthesis of Underwater Scenes [6.535472265307327]
We introduce the idea of AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery.
We find that the AquaFused images preserve over 94% depth consistency and 90-95% structural similarity of the input scenes.
arXiv Detail & Related papers (2024-11-02T03:20:06Z)
- Aquatic-GS: A Hybrid 3D Representation for Underwater Scenes [6.549998173302729]
We propose Aquatic-GS, a hybrid 3D representation approach for underwater scenes that effectively represents both the objects and the water medium.
Specifically, we construct a Neural Water Field (NWF) to implicitly model the water parameters, while extending the latest 3D Gaussian Splatting (3DGS) to model the objects explicitly.
Both components are integrated through a physics-based underwater image formation model to represent complex underwater scenes.
arXiv Detail & Related papers (2024-10-31T22:24:56Z)
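
The "physics-based underwater image formation model" mentioned above is, in the standard formulation, direct attenuation plus range-dependent backscatter. The sketch below implements that generic textbook form; Aquatic-GS's exact parameterization may differ, and all names here are illustrative.

```python
# Generic underwater image formation: attenuated direct signal plus
# backscatter that saturates with range. A sketch of the standard model,
# not necessarily Aquatic-GS's exact parameterization.
import numpy as np

def underwater_formation(clear_rgb, depth, beta_d, beta_b, veil_rgb):
    """clear_rgb: (H, W, 3) clear scene radiance J; depth: (H, W) range z;
    beta_d, beta_b: (3,) per-channel attenuation/backscatter coefficients;
    veil_rgb: (3,) veiling light at infinite range."""
    z = depth[..., None]                          # broadcast over channels
    direct = clear_rgb * np.exp(-beta_d * z)      # exponential attenuation
    backscatter = veil_rgb * (1.0 - np.exp(-beta_b * z))
    return direct + backscatter
```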
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects into images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
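
The "hybrid inversion scheme" described above, where a feed-forward model produces a first guess that a short optimization refines (as few as 10 steps), follows a common GAN-inversion pattern. A minimal PyTorch sketch of that generic pattern, with placeholder encoder and generator callables, might look like this; it is not the paper's implementation.

```python
# Generic hybrid inversion: an encoder gives an initial latent, then a few
# gradient steps refine it against the target image. Placeholder modules
# and loss; not the paper's actual models.
import torch

def hybrid_invert(image, encoder, generator, steps=10, lr=1e-2):
    z = encoder(image).detach().requires_grad_(True)   # feed-forward first guess
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):                             # short refinement loop
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), image)
        loss.backward()
        opt.step()
    return z
```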
- Simulating Fluids in Real-World Still Images [39.93838010016248]
In this work, we tackle the problem of real-world fluid animation from a still image.
The key to our system is a surface-based layered representation derived from video decomposition.
In addition, we introduce surface-only fluid simulation, a 2.5D fluid computation, as a replacement for motion estimation.
arXiv Detail & Related papers (2022-04-24T18:47:15Z)
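
A "surface-only" (2.5D) fluid step, as the entry above describes, evolves a height field rather than a full 3D velocity volume. The sketch below is a generic explicit wave-equation step on a height field, offered as an assumption about the style of method rather than the paper's algorithm.

```python
# Generic 2.5D height-field wave step (explicit wave equation with damping);
# an illustration of surface-only simulation, not the paper's method.
import numpy as np

def heightfield_step(h, h_prev, c=0.3, damping=0.995):
    """h, h_prev: (H, W) surface heights at the current/previous step;
    c: wave-speed term, kept small for stability of the explicit scheme."""
    # 4-neighbor discrete Laplacian (periodic boundaries via np.roll).
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0)
           + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    h_next = (2.0 * h - h_prev + (c ** 2) * lap) * damping
    return h_next, h
```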
- Underwater Light Field Retention: Neural Rendering for Underwater Imaging [6.22867695581195]
Underwater Image Rendering aims to generate a true-to-life underwater image from a given clean one.
We propose a neural rendering method for underwater imaging, dubbed UWNR (Underwater Neural Rendering).
arXiv Detail & Related papers (2022-03-21T14:22:05Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly-accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)