StyleTRF: Stylizing Tensorial Radiance Fields
- URL: http://arxiv.org/abs/2212.09330v1
- Date: Mon, 19 Dec 2022 09:50:05 GMT
- Title: StyleTRF: Stylizing Tensorial Radiance Fields
- Authors: Rahul Goel, Sirikonda Dhawal, Saurabh Saini, P. J. Narayanan
- Abstract summary: Stylized view generation of scenes captured casually using a camera has received much attention recently.
We present StyleTRF, a compact, quick-to-optimize strategy for stylized view generation using TensoRF.
- Score: 7.9020917073764405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stylized view generation of scenes captured casually using a camera has
received much attention recently. The geometry and appearance of the scene are
typically captured as neural point sets or neural radiance fields in the
previous work. An image stylization method is used to stylize the captured
appearance by training its network jointly or iteratively with the structure
capture network. The state-of-the-art SNeRF method trains the NeRF and
stylization network in an alternating manner. These methods have high training
time and require joint optimization. In this work, we present StyleTRF, a
compact, quick-to-optimize strategy for stylized view generation using TensoRF.
The appearance part is fine-tuned for a few iterations using sparse stylized
priors: a few views rendered with the TensoRF representation and then stylized.
Our method thus effectively decouples style adaptation from view capture and is
much faster than
the previous methods. We show state-of-the-art results on several scenes used
for this purpose.
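To make the two-stage idea concrete, the following is a minimal PyTorch sketch of the fine-tuning stage the abstract describes. The interface is assumed for illustration (a trained TensoRF-like model exposing `render` and `appearance_params`, plus a frozen 2D stylization network); it is not the authors' released API.

```python
# Minimal sketch of decoupled appearance fine-tuning, as described in the
# abstract. `tensorf.render`, `tensorf.appearance_params`, and `stylizer`
# are hypothetical stand-ins, not the authors' released API.
import torch

def stylize_trf(tensorf, stylizer, poses, n_iters=200, lr=1e-2):
    # 1) Render a sparse set of views once and stylize them offline.
    #    These are the "sparse stylized priors"; style adaptation is
    #    fully decoupled from view capture.
    with torch.no_grad():
        targets = [stylizer(tensorf.render(p)) for p in poses]

    # 2) Geometry (density) stays frozen; only the appearance tensors
    #    are optimized, which keeps fine-tuning compact and quick.
    opt = torch.optim.Adam(tensorf.appearance_params(), lr=lr)
    for _ in range(n_iters):
        for pose, target in zip(poses, targets):
            pred = tensorf.render(pose)
            loss = torch.nn.functional.mse_loss(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return tensorf
```

Because the stylized targets are produced once up front, no stylization network is trained jointly or alternately with the scene representation, which is where the speedup over SNeRF-style alternation comes from.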
Related papers
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Locally Stylized Neural Radiance Fields [30.037649804991315]
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- NeRF-Art: Text-Driven Neural Radiance Fields Stylization [38.3724634394761]
We present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt.
We show that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
arXiv Detail & Related papers (2022-12-15T18:59:58Z)
- StyleTime: Style Transfer for Synthetic Time Series Generation [10.457423272041332]
We introduce the concept of stylized features for time series, which relates directly to the realism properties of a time series.
We propose a novel stylization algorithm, called StyleTime, that uses explicit feature extraction techniques to combine the underlying content (trend) of one time series with the style (distributional properties) of another.
arXiv Detail & Related papers (2022-09-22T20:42:19Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- Location-Free Camouflage Generation Network [82.74353843283407]
Camouflage is a common visual phenomenon that refers to hiding foreground objects in background images, making them briefly invisible to the human eye.
This paper proposes a novel Location-free Camouflage Generation Network (LCG-Net) that fuses high-level features of the foreground and background images and generates the result in a single inference pass.
Experiments show that our method matches the state of the art in single-appearance regions, though its results are less likely to be completely invisible, and far exceeds the state of the art in multi-appearance regions.
arXiv Detail & Related papers (2022-03-18T10:33:40Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
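A recurring ingredient across the stylization papers above is a 2D image-stylization loss used to produce or supervise stylized views. The abstracts do not pin down a specific loss, so the sketch below shows one common (assumed) choice, the Gram-matrix style loss of Gatys et al. on VGG-16 features; individual methods differ (ARF, for instance, uses nearest-neighbor feature matching instead).

```python
# A classic Gram-matrix style loss on VGG-16 features: one common choice
# of 2D stylization signal, assumed here for illustration. Inputs are
# expected to be ImageNet-normalized (B, 3, H, W) tensors.
import torch
import torchvision

def gram(feat):
    # (B, C, H, W) -> (B, C, C) channel co-occurrence statistics
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class StyleLoss(torch.nn.Module):
    def __init__(self, layers=(3, 8, 15, 22)):  # relu1_2 .. relu4_3
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layers = vgg, set(layers)

    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats

    def forward(self, rendered, style_img):
        # Match channel statistics of the rendered view to the style image.
        loss = 0.0
        for fr, fs in zip(self.features(rendered), self.features(style_img)):
            loss = loss + torch.nn.functional.mse_loss(gram(fr), gram(fs))
        return loss
```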
This list is automatically generated from the titles and abstracts of the papers on this site.