S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes
- URL: http://arxiv.org/abs/2403.06205v3
- Date: Fri, 22 Mar 2024 14:05:33 GMT
- Title: S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes
- Authors: Xingyi Li, Zhiguo Cao, Yizheng Wu, Kewei Wang, Ke Xian, Zhe Wang, Guosheng Lin
- Abstract summary: Current 3D stylization methods often assume static scenes, which violates the dynamic nature of our real world.
We present S-DyRF, a reference-based temporal stylization method for dynamic neural fields.
Experiments on both synthetic and real-world datasets demonstrate that our method yields plausible stylized results.
- Score: 58.05447927353328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current 3D stylization methods often assume static scenes, which violates the dynamic nature of our real world. To address this limitation, we present S-DyRF, a reference-based spatio-temporal stylization method for dynamic neural radiance fields. However, stylizing dynamic 3D scenes is inherently challenging due to the limited availability of stylized reference images along the temporal axis. Our key insight lies in introducing additional temporal cues besides the provided reference. To this end, we generate temporal pseudo-references from the given stylized reference. These pseudo-references facilitate the propagation of style information from the reference to the entire dynamic 3D scene. For coarse style transfer, we enforce novel views and times to mimic the style details present in pseudo-references at the feature level. To preserve high-frequency details, we create a collection of stylized temporal pseudo-rays from temporal pseudo-references. These pseudo-rays serve as detailed and explicit stylization guidance for achieving fine style transfer. Experiments on both synthetic and real-world datasets demonstrate that our method yields plausible stylized results of space-time view synthesis on dynamic 3D scenes.
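For readers who want the pipeline in concrete terms, the snippet below is a minimal PyTorch sketch of the two levels of supervision described in the abstract: a coarse, feature-level loss that pulls renders at novel views and times towards the temporal pseudo-references, plus a fine, per-ray loss against stylized pseudo-rays. The toy dynamic field, tensor shapes, and placeholder data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the coarse + fine supervision described in the abstract.
# This is NOT the authors' code: the toy dynamic field, tensor shapes, and the
# placeholder pseudo-references / pseudo-rays below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ToyDynamicField(nn.Module):
    """Stand-in for a dynamic radiance field: maps (ray, time) to an RGB colour."""

    def __init__(self, in_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays, t):
        # rays: (N, 6) origin + direction, t: (N, 1) normalised time
        return self.net(torch.cat([rays, t], dim=-1))


# Frozen VGG features used for the coarse, feature-level style matching.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)


def coarse_feature_loss(render, pseudo_ref):
    """Match renders at novel views/times to the pseudo-reference in VGG feature space."""
    return F.mse_loss(vgg(render), vgg(pseudo_ref))


def fine_pseudo_ray_loss(pred_rgb, pseudo_ray_rgb):
    """Per-ray colour supervision from stylized temporal pseudo-rays."""
    return F.l1_loss(pred_rgb, pseudo_ray_rgb)


field = ToyDynamicField()
optim = torch.optim.Adam(field.parameters(), lr=1e-3)

# Placeholders: a differentiable render at a novel time (in practice produced by
# volume rendering of the field), its temporal pseudo-reference generated offline
# from the single stylized reference, and a batch of stylized pseudo-rays.
render = torch.rand(1, 3, 128, 128, requires_grad=True)
pseudo_ref = torch.rand(1, 3, 128, 128)
rays, t = torch.rand(1024, 6), torch.rand(1024, 1)
pseudo_ray_rgb = torch.rand(1024, 3)

loss = coarse_feature_loss(render, pseudo_ref) + fine_pseudo_ray_loss(field(rays, t), pseudo_ray_rgb)
loss.backward()
optim.step()
```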
Related papers
- GAST: Sequential Gaussian Avatars with Hierarchical Spatio-temporal Context [7.6736633105043515]
3D human avatars, through the use of canonical radiance fields and per-frame observed warping, enable high-fidelity rendering and animation.
Existing methods rely on either spatial SMPL(-X) poses or temporal embeddings, and respectively suffer from coarse quality or limited animation flexibility.
We propose GAST, a framework that unifies 3D human modeling with 3DGS by hierarchically integrating both spatial and temporal information.
arXiv Detail & Related papers (2024-11-25T04:05:19Z)
- Reference-based Controllable Scene Stylization with Gaussian Splatting [30.321151430263946]
Reference-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area.
We propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis.
arXiv Detail & Related papers (2024-07-09T20:30:29Z)
- StylizedGS: Controllable Stylization for 3D Gaussian Splatting [53.0225128090909]
StylizedGS is an efficient 3D neural style transfer framework with adaptable control over perceptual factors.
Our method achieves high-quality stylization results characterized by faithful brushstrokes and geometric consistency with flexible controls.
arXiv Detail & Related papers (2024-04-08T06:32:11Z)
- ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields [26.833265073162696]
We introduce ConRF, a novel method of zero-shot stylization.
We employ a conversion process that maps the CLIP feature space to the style space of a pre-trained VGG network (see the sketch after this list).
We also use a 3D volumetric representation to perform local style transfer.
arXiv Detail & Related papers (2024-02-02T23:12:16Z)
- GaussianStyle: Gaussian Head Avatar via StyleGAN [64.85782838199427]
We propose a novel framework that integrates the volumetric strengths of 3DGS with the powerful implicit representation of StyleGAN.
We show that our method achieves state-of-the-art performance in reenactment, novel view synthesis, and animation.
arXiv Detail & Related papers (2024-02-01T18:14:42Z)
- Towards 4D Human Video Stylization [56.33756124829298]
We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation.
We leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space.
Our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization.
arXiv Detail & Related papers (2023-12-07T08:58:33Z)
- S2RF: Semantically Stylized Radiance Fields [1.243080988483032]
We present our method for transferring style from any arbitrary image(s) to object(s) within a 3D scene.
Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints.
arXiv Detail & Related papers (2023-09-03T19:32:49Z)
- SpOT: Spatiotemporal Modeling for 3D Object Tracking [68.12017780034044]
3D multi-object tracking aims to consistently identify all mobile objects over time.
Current 3D tracking methods rely on abstracted information and limited history.
We develop a holistic representation of scenes that leverages both spatial and temporal information.
arXiv Detail & Related papers (2022-07-12T21:45:49Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
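The ConRF entry above mentions mapping the CLIP feature space into the style space of a pre-trained VGG network. The snippet below sketches one way such a mapping could look: a small MLP regressing VGG Gram-matrix statistics from a CLIP image embedding. The mapper, dimensions, and training data are hypothetical and do not reproduce ConRF's actual design.

```python
# Minimal sketch of mapping a CLIP image embedding into a VGG "style space",
# here represented by flattened Gram-matrix statistics at an early VGG layer.
# This illustrates the general idea only, not ConRF's implementation; all
# dimensions and the ClipToVggStyle mapper are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Frozen VGG-16 up to relu2_2 (128 output channels).
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)


def gram(feat):
    """(B, C, H, W) feature map -> (B, C*C) flattened Gram matrix (channel correlations)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return (f @ f.transpose(1, 2) / (h * w)).reshape(b, -1)


class ClipToVggStyle(nn.Module):
    """Hypothetical mapper: 512-d CLIP image embedding -> VGG Gram statistics."""

    def __init__(self, clip_dim=512, gram_dim=128 * 128, hidden=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, gram_dim),
        )

    def forward(self, clip_feat):
        return self.mlp(clip_feat)


mapper = ClipToVggStyle()
optim = torch.optim.Adam(mapper.parameters(), lr=1e-4)

# Training step sketch: regress the VGG style statistics of a style image from its
# CLIP embedding. Both inputs are random placeholders here; a real setup would
# encode actual style images with a CLIP image encoder.
style_img = torch.rand(4, 3, 224, 224)
clip_feat = torch.rand(4, 512)
with torch.no_grad():
    target = gram(vgg(style_img))
loss = F.mse_loss(mapper(clip_feat), target)
loss.backward()
optim.step()
```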