GAS-NeRF: Geometry-Aware Stylization of Dynamic Radiance Fields
- URL: http://arxiv.org/abs/2503.08483v1
- Date: Tue, 11 Mar 2025 14:37:06 GMT
- Title: GAS-NeRF: Geometry-Aware Stylization of Dynamic Radiance Fields
- Authors: Nhat Phuong Anh Vu, Abhishek Saroha, Or Litany, Daniel Cremers
- Abstract summary: GAS-NeRF is a novel approach for joint appearance and geometry stylization in dynamic Radiance Fields. Our method leverages depth maps to extract and transfer geometric details into the radiance field, followed by appearance transfer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current 3D stylization techniques primarily focus on static scenes, while our world is inherently dynamic, filled with moving objects and changing environments. Existing style transfer methods primarily target appearance -- such as color and texture transformation -- but often neglect the geometric characteristics of the style image, which are crucial for achieving a complete and coherent stylization effect. To overcome these shortcomings, we propose GAS-NeRF, a novel approach for joint appearance and geometry stylization in dynamic Radiance Fields. Our method leverages depth maps to extract and transfer geometric details into the radiance field, followed by appearance transfer. Experimental results on synthetic and real-world datasets demonstrate that our approach significantly enhances the stylization quality while maintaining temporal coherence in dynamic scenes.
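To make the two-stage idea in the abstract concrete, here is a minimal, illustrative sketch (not the authors' code) of what a depth-guided geometry objective followed by an appearance objective could look like, assuming the radiance field can render both a depth map and an RGB image. The gradient-statistics geometry loss and the raw-RGB Gram loss are hypothetical stand-ins for whatever losses the paper actually uses:

```python
# Hypothetical sketch of a GAS-NeRF-style two-stage objective.
# Assumptions (not from the paper): finite-difference depth gradients
# approximate "geometric detail", and a Gram matrix on raw RGB stands
# in for deep-feature style statistics.
import numpy as np

def depth_gradients(depth):
    """Finite-difference gradients as a cheap proxy for geometric detail."""
    gy, gx = np.gradient(depth)
    return np.stack([gx, gy], axis=-1)

def geometry_loss(rendered_depth, style_depth):
    """Match first- and second-order moments of the depth-gradient fields."""
    g_r = depth_gradients(rendered_depth)
    g_s = depth_gradients(style_depth)
    mu = np.mean(g_r, axis=(0, 1)) - np.mean(g_s, axis=(0, 1))
    sd = np.std(g_r, axis=(0, 1)) - np.std(g_s, axis=(0, 1))
    return float(np.sum(mu ** 2) + np.sum(sd ** 2))

def gram(feats):
    """Gram matrix (C x C) over flattened spatial positions."""
    c = feats.reshape(-1, feats.shape[-1])
    return c.T @ c / c.shape[0]

def appearance_loss(rendered_rgb, style_rgb):
    """Gram-matrix style loss on raw RGB, a stand-in for VGG features."""
    d = gram(rendered_rgb) - gram(style_rgb)
    return float(np.sum(d ** 2))

def total_loss(rendered_depth, style_depth, rendered_rgb, style_rgb,
               w_geo=1.0, w_app=1.0):
    """Combined objective: geometry transfer first, appearance on top."""
    return (w_geo * geometry_loss(rendered_depth, style_depth)
            + w_app * appearance_loss(rendered_rgb, style_rgb))
```

In a real pipeline these losses would be computed on rendered views at each training step and backpropagated into the radiance field; the weights `w_geo` and `w_app` are illustrative knobs, not values from the paper.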
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward a domain of refined, high-quality textures.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
- SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes
We present SpectroMotion, a novel approach that combines 3D Gaussian Splatting (3DGS) with physically-based rendering (PBR) and deformation fields to reconstruct dynamic specular scenes. It is the only existing 3DGS method capable of synthesizing real-world dynamic specular scenes, outperforming state-of-the-art methods in rendering complex, dynamic, and specular scenes.
arXiv Detail & Related papers (2024-10-22T17:59:56Z)
- StylizedGS: Controllable Stylization for 3D Gaussian Splatting
StylizedGS is an efficient 3D neural style transfer framework with adaptable control over perceptual factors.
Our method achieves high-quality stylization results characterized by faithful brushstrokes and geometric consistency with flexible controls.
arXiv Detail & Related papers (2024-04-08T06:32:11Z)
- S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes
Current 3D stylization methods often assume static scenes, which conflicts with the dynamic nature of the real world.
We present S-DyRF, a reference-based temporal stylization method for dynamic neural fields.
Experiments on both synthetic and real-world datasets demonstrate that our method yields plausible stylized results.
arXiv Detail & Related papers (2024-03-10T13:04:01Z)
- Geometry Transfer for Stylizing Radiance Fields
We introduce Geometry Transfer, a novel method that leverages geometric deformation for 3D style transfer.
Our experiments show that Geometry Transfer enables a broader and more expressive range of stylizations.
arXiv Detail & Related papers (2024-02-01T18:58:44Z)
- Dynamic Point Fields
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z)
- NeRF-Art: Text-Driven Neural Radiance Fields Stylization
We present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt.
We show that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
arXiv Detail & Related papers (2022-12-15T18:59:58Z)
- Real-time Deep Dynamic Characters
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, and video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.