AquaFuse: Waterbody Fusion for Physics Guided View Synthesis of Underwater Scenes
- URL: http://arxiv.org/abs/2411.01119v1
- Date: Sat, 02 Nov 2024 03:20:06 GMT
- Title: AquaFuse: Waterbody Fusion for Physics Guided View Synthesis of Underwater Scenes
- Authors: Md Abu Bakr Siddique, Jiayi Wu, Ioannis Rekleitis, Md Jahidul Islam
- Abstract summary: We introduce the idea of AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery.
We find that the AquaFused images preserve over 94% depth consistency and 90-95% structural similarity of the input scenes.
- Score: 6.535472265307327
- Abstract: We introduce the idea of AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that facilitates realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of light propagation underwater to synthesize the waterbody from one scene to the object contents of another. Unlike data-driven style transfer, AquaFuse preserves the depth consistency and object geometry in an input scene. We validate this unique feature by comprehensive experiments over diverse underwater scenes. We find that the AquaFused images preserve over 94% depth consistency and 90-95% structural similarity of the input scenes. We also demonstrate that it generates accurate 3D view synthesis by preserving object geometry while adapting to the inherent waterbody fusion process. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision applications.
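The formation model at the heart of this idea fits in a few lines of code. Below is a minimal sketch of waterbody fusion using the standard underwater image formation model (direct signal attenuated by the medium, plus backscatter); it is a stand-in for the paper's actual closed-form solution, and all function names, parameter values, and toy data are hypothetical.

```python
# Minimal sketch of waterbody fusion via the standard underwater image
# formation model I = J * exp(-beta * z) + B_inf * (1 - exp(-beta * z)).
# Illustrates the idea behind AquaFuse; the paper's closed-form solution
# and parameter estimation may differ. All values here are hypothetical.
import numpy as np

def descatter(image, depth, beta, b_inf):
    """Invert the formation model to recover the direct signal J."""
    transmission = np.exp(-beta * depth[..., None])  # per-pixel, per-channel
    return (image - b_inf * (1.0 - transmission)) / np.clip(transmission, 1e-6, None)

def rescatter(direct, depth, beta, b_inf):
    """Re-apply a (different) waterbody to the recovered direct signal."""
    transmission = np.exp(-beta * depth[..., None])
    return direct * transmission + b_inf * (1.0 - transmission)

def fuse_waterbody(img_a, depth_a, params_a, params_b):
    """Render scene A's content as if seen through scene B's waterbody."""
    j = descatter(img_a, depth_a, params_a["beta"], params_a["b_inf"])
    return np.clip(rescatter(j, depth_a, params_b["beta"], params_b["b_inf"]), 0.0, 1.0)

# Toy usage with hypothetical RGB attenuation/backscatter parameters.
rng = np.random.default_rng(0)
img_a = rng.uniform(0.0, 1.0, (64, 64, 3))   # source scene (normalized RGB)
depth_a = rng.uniform(0.5, 5.0, (64, 64))    # per-pixel range in meters
params_a = {"beta": np.array([0.40, 0.20, 0.10]), "b_inf": np.array([0.05, 0.25, 0.35])}
params_b = {"beta": np.array([0.25, 0.12, 0.30]), "b_inf": np.array([0.10, 0.35, 0.20])}
fused = fuse_waterbody(img_a, depth_a, params_a, params_b)
print(fused.shape)  # (64, 64, 3)
```

The depth-consistency property the paper validates follows naturally from this kind of remapping: only per-pixel colors change, while the depth map itself is untouched.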
Related papers
- Aquatic-GS: A Hybrid 3D Representation for Underwater Scenes [6.549998173302729]
We propose Aquatic-GS, a hybrid 3D representation approach for underwater scenes that effectively represents both the objects and the water medium.
Specifically, we construct a Neural Water Field (NWF) to implicitly model the water parameters, while extending the latest 3D Gaussian Splatting (3DGS) to model the objects explicitly.
Both components are integrated through a physics-based underwater image formation model to represent complex underwater scenes.
arXiv Detail & Related papers (2024-10-31T22:24:56Z)
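A minimal sketch of the hybrid idea, assuming a tiny MLP as the Neural Water Field and treating the 3DGS render as a given color buffer; the architecture and composition below are illustrative, not the paper's.

```python
# Sketch of Aquatic-GS's hybrid idea: an explicit object color (a stand-in
# for a 3D Gaussian Splatting render) is combined with water parameters
# predicted by an implicit Neural Water Field. The tiny MLP and the
# composition below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralWaterField(nn.Module):
    """Maps a view direction (3) to per-channel attenuation and backscatter."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 6))  # 3 attenuation coeffs + 3 backscatter colors

    def forward(self, dirs):
        out = self.mlp(dirs)
        beta = F.softplus(out[..., :3])   # keep attenuation positive
        b_inf = torch.sigmoid(out[..., 3:])  # backscatter color in [0, 1]
        return beta, b_inf

def compose(object_rgb, depth, beta, b_inf):
    """Physics-based composition of object radiance and water medium."""
    t = torch.exp(-beta * depth.unsqueeze(-1))
    return object_rgb * t + b_inf * (1.0 - t)

# Toy usage: object_rgb would come from a 3DGS rasterizer in practice.
dirs = torch.randn(1024, 3)
object_rgb = torch.rand(1024, 3)
depth = torch.rand(1024) * 5.0
beta, b_inf = NeuralWaterField()(dirs)
underwater_rgb = compose(object_rgb, depth, beta, b_inf)
print(underwater_rgb.shape)  # torch.Size([1024, 3])
```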
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z)
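A sketch of how hybrid geometric priors can enter a neural SDF objective; the specific prior terms (monocular normals, segmentation masks) and their weights are assumptions for illustration, not necessarily the priors UW-SDF uses.

```python
# Sketch of a neural SDF objective augmented with geometric prior terms.
# The priors below (normal alignment, silhouette consistency) and weights
# are illustrative assumptions; see the paper for UW-SDF's actual priors.
import torch
import torch.nn.functional as F

def sdf_losses(pred_rgb, gt_rgb, pred_normals, prior_normals,
               sdf_grad_norm, pred_mask, gt_mask,
               w_eik=0.1, w_norm=0.05, w_mask=0.5):
    loss_rgb = F.l1_loss(pred_rgb, gt_rgb)
    # Eikonal regularizer: |grad SDF| should be 1 everywhere.
    loss_eik = ((sdf_grad_norm - 1.0) ** 2).mean()
    # Prior 1: align rendered normals with externally estimated normals.
    cos = F.cosine_similarity(pred_normals, prior_normals, dim=-1)
    loss_norm = (1.0 - cos).mean()
    # Prior 2: silhouette consistency with a segmentation mask.
    loss_mask = F.binary_cross_entropy(pred_mask.clamp(1e-6, 1 - 1e-6), gt_mask)
    return loss_rgb + w_eik * loss_eik + w_norm * loss_norm + w_mask * loss_mask

# Toy usage with random tensors standing in for renderer outputs.
n = 2048
loss = sdf_losses(
    torch.rand(n, 3), torch.rand(n, 3),
    F.normalize(torch.randn(n, 3), dim=-1),
    F.normalize(torch.randn(n, 3), dim=-1),
    1.0 + 0.1 * torch.randn(n),
    torch.rand(n), torch.randint(0, 2, (n,)).float())
print(loss.item())
```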
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z) - Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
- Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared (Underwater, Depth, Text) data triplets.
arXiv Detail & Related papers (2023-12-19T08:56:33Z)
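A sketch of depth-conditioned generation using the real diffusers ControlNet API; the Depth2Underwater checkpoint path and the prompt are placeholders, since the authors' trained weights are not assumed to be published on the Hub.

```python
# Sketch of depth-conditioned underwater image generation with a ControlNet,
# in the spirit of Atlantis's Depth2Underwater idea. The checkpoint path
# "path/to/depth2underwater" is a placeholder for the authors' trained
# ControlNet; base model and prompt are assumptions.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A terrestrial depth map, scaled to [0, 255] and replicated to 3 channels.
depth = (np.random.rand(512, 512) * 255).astype(np.uint8)  # stand-in depth
control_image = Image.fromarray(np.stack([depth] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "path/to/depth2underwater", torch_dtype=torch.float16)  # placeholder path
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Generate an underwater-styled image whose geometry follows the depth map;
# the resulting (image, depth) pair can train a supervised depth estimator.
result = pipe("an underwater scene, coral reef, clear water",
              image=control_image, num_inference_steps=30).images[0]
result.save("underwater_sample.png")
```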
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
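The core construction can be illustrated compactly: a second scalar field defined on the watertight surface carves out the open surface as its sub-level set. The per-face thresholding below is a non-differentiable simplification of the paper's extraction.

```python
# Sketch of the G-Shell idea: an open surface is carved out of a watertight
# surface by a "manifold SDF" defined on that surface; the open surface is
# the sub-level set of this field. Per-vertex thresholding of faces is a
# simplification of the paper's differentiable extraction.
import numpy as np

def extract_open_surface(faces, msdf, threshold=0.0):
    """Keep faces of the watertight mesh whose vertices all lie inside the
    manifold-SDF sub-level set; the rest of the shell is discarded."""
    keep = np.all(msdf[faces] <= threshold, axis=1)
    return faces[keep]

# Toy watertight octahedron and a hypothetical mSDF keeping the top half.
vertices = np.array([[0, 0, 1], [0, 0, -1], [1, 0, 0],
                     [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)
faces = np.array([[0, 2, 4], [0, 4, 3], [0, 3, 5], [0, 5, 2],
                  [1, 4, 2], [1, 3, 4], [1, 5, 3], [1, 2, 5]])
msdf = -vertices[:, 2]           # negative above the equator -> keep top
open_faces = extract_open_surface(faces, msdf)
print(len(faces), "->", len(open_faces), "faces")  # 8 -> 4
```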
- RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent Geometry and Texture [80.0643976406225]
We propose "RoomDreamer", which leverages powerful natural language to synthesize a new room with a different style.
Our work addresses the challenge of synthesizing both geometry and texture aligned to the input scene structure and prompt simultaneously.
To validate the proposed method, real indoor scenes scanned with smartphones are used for extensive experiments.
arXiv Detail & Related papers (2023-05-18T22:57:57Z)
- Water Simulation and Rendering from a Still Photograph [20.631819299595527]
We propose an approach to simulate and render realistic water animation from a single still input photograph.
Our approach creates realistic results with no user intervention for a wide variety of natural scenes.
arXiv Detail & Related papers (2022-10-05T20:47:44Z)
- WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z)
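Producing both degraded and corrected novel views follows naturally from an explicit formation model: render scene color and depth, then apply or omit the estimated water terms. A sketch, with hypothetical parameter values and random stand-ins for the NeRF outputs:

```python
# Sketch of how a physics-based image formation model lets one radiance
# field emit both degraded and corrected novel views. The NeRF outputs here
# are random stand-ins; beta/b_inf are the kind of parameters WaterNeRF
# estimates, with hypothetical values below.
import numpy as np

def render_views(scene_rgb, scene_depth, beta, b_inf):
    t = np.exp(-beta * scene_depth[..., None])
    degraded = scene_rgb * t + b_inf * (1.0 - t)  # novel view with water
    corrected = scene_rgb                          # novel view, water removed
    return degraded, corrected

scene_rgb = np.random.rand(64, 64, 3)     # stand-in for NeRF-rendered color
scene_depth = np.random.rand(64, 64) * 8  # stand-in for NeRF-rendered depth
beta = np.array([0.35, 0.15, 0.08])       # hypothetical RGB attenuation
b_inf = np.array([0.06, 0.30, 0.40])      # hypothetical backscatter color
degraded, corrected = render_views(scene_rgb, scene_depth, beta, b_inf)
print(degraded.shape, corrected.shape)
```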
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
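A patch-based contrastive (InfoNCE) loss is one standard way to maximize mutual information between co-located patches of the raw and restored images; the formulation below is a generic PatchNCE-style sketch, assumed rather than taken from the paper.

```python
# Sketch of a patch-based contrastive loss: features of a restored patch
# are pulled toward the co-located raw patch and pushed away from other
# patches. Generic PatchNCE-style formulation, an assumption about the
# paper's exact loss.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_raw, feat_restored, temperature=0.07):
    """feat_*: (num_patches, dim) L2-normalized patch embeddings, where
    row i of both tensors comes from the same spatial location."""
    logits = feat_restored @ feat_raw.t() / temperature  # (N, N) similarities
    labels = torch.arange(feat_raw.size(0))              # positives on diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: restored features are noisy copies of the raw features.
n, d = 256, 128
feat_raw = F.normalize(torch.randn(n, d), dim=-1)
feat_restored = F.normalize(feat_raw + 0.1 * torch.randn(n, d), dim=-1)
print(patch_nce_loss(feat_raw, feat_restored).item())
```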
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
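The MPI representation itself renders by back-to-front alpha compositing; a minimal sketch (the plane count and random layers are arbitrary):

```python
# Sketch of multiplane image (MPI) rendering: per-plane RGBA layers are
# alpha-composited back to front ("over" operator). In the paper, the MPI
# prediction is constrained by the synthesized color and depth of the
# visible surface; here the layers are random stand-ins.
import numpy as np

def composite_mpi(rgba_planes):
    """rgba_planes: (num_planes, H, W, 4), ordered back to front."""
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:                    # back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" compositing
    return out

planes = np.random.rand(32, 64, 64, 4)           # 32 fronto-parallel planes
image = composite_mpi(planes)
print(image.shape)  # (64, 64, 3)
```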
- Deep Sea Robotic Imaging Simulator [6.2122699483618]
The largest portion of the ocean - the deep sea - remains mostly unexplored.
Deep sea images differ greatly from images taken in shallow water, and the area has received little attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
arXiv Detail & Related papers (2020-06-27T16:18:32Z)
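What distinguishes this setting from shallow water is the absence of sunlight: illumination comes from artificial lights on the vehicle, so intensity falls off with range and light is attenuated both outbound and on the return path. A toy sketch under a single co-located point light (a simplification of the paper's physical model):

```python
# Sketch of a deep-sea image simulator: start from an in-air texture and a
# depth map, then model what distinguishes the deep sea -- no ambient
# sunlight, only an artificial light whose contribution falls off with
# distance and is attenuated on the way out and back. The single co-located
# point light is a simplifying assumption.
import numpy as np

def simulate_deep_sea(texture, depth, beta, light_intensity=5.0):
    d = depth[..., None]
    falloff = light_intensity / np.clip(d ** 2, 1e-3, None)  # inverse-square
    attenuation = np.exp(-beta * 2.0 * d)  # light travels out and back
    return np.clip(texture * falloff * attenuation, 0.0, 1.0)

texture = np.random.rand(64, 64, 3)          # in-air texture (albedo)
depth = np.random.rand(64, 64) * 4.0 + 1.0   # range from the vehicle, meters
beta = np.array([0.45, 0.20, 0.10])          # hypothetical RGB attenuation
print(simulate_deep_sea(texture, depth, beta).shape)  # (64, 64, 3)
```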
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.