Feature Splatting for Better Novel View Synthesis with Low Overlap
- URL: http://arxiv.org/abs/2405.15518v2
- Date: Mon, 30 Sep 2024 09:32:09 GMT
- Title: Feature Splatting for Better Novel View Synthesis with Low Overlap
- Authors: T. Berriel Martins, Javier Civera
- Abstract summary: 3D Gaussian Splatting has emerged as a very promising scene representation, achieving state-of-the-art quality in novel view synthesis.
We propose to encode the color information of 3D Gaussians into per-Gaussian feature vectors, which we denote as Feature Splatting.
- Score: 9.192660643226372
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D Gaussian Splatting has emerged as a very promising scene representation, achieving state-of-the-art quality in novel view synthesis significantly faster than competing alternatives. However, its use of spherical harmonics to represent scene colors limits the expressivity of 3D Gaussians and, as a consequence, the capability of the representation to generalize as we move away from the training views. In this paper, we propose to encode the color information of 3D Gaussians into per-Gaussian feature vectors, which we denote as Feature Splatting (FeatSplat). To synthesize a novel view, Gaussians are first "splatted" into the image plane, then the corresponding feature vectors are alpha-blended, and finally the blended vector is decoded by a small MLP to render the RGB pixel values. To further inform the model, we concatenate a camera embedding to the blended feature vector, conditioning the decoding also on the viewpoint information. Our experiments show that this novel model for encoding the radiance considerably improves novel view synthesis for low-overlap views that are distant from the training views. Finally, we also show the capacity and convenience of our feature vector representation, demonstrating its capability not only to generate RGB values for novel views, but also their per-pixel semantic labels. Code available at https://github.com/tberriel/FeatSplat. Keywords: Gaussian Splatting, Novel View Synthesis, Feature Splatting
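The abstract's rendering pipeline (splat, alpha-blend feature vectors, concatenate a camera embedding, decode with a small MLP) can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the authors' code: all names, dimensions, and the toy MLP weights are assumptions.

```python
import numpy as np

def alpha_blend_features(features, alphas):
    """Front-to-back alpha compositing of per-Gaussian feature vectors
    for a single pixel, as described in the abstract.

    features: (N, D) array, one feature vector per Gaussian covering the
              pixel, sorted front to back; alphas: (N,) opacities in [0, 1].
    Returns the (D,) blended feature with standard compositing weights
    w_i = alpha_i * prod_{j < i} (1 - alpha_j).
    """
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * transmittance
    return weights @ features

def decode_rgb(blended_feat, cam_embed, w1, b1, w2, b2):
    """Toy stand-in for the paper's small MLP decoder: the blended feature
    is concatenated with a camera embedding and mapped to RGB.
    The weight matrices here are placeholders, not learned parameters."""
    x = np.concatenate([blended_feat, cam_embed])
    h = np.maximum(w1 @ x + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid -> RGB in [0, 1]
```

In an actual implementation the decoder would be a trained network applied per pixel (or per tile) after rasterization; the sketch only shows the data flow the abstract describes.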
Related papers
- Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding [86.55824709875598]
We propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes both semantic and rendering branches.
Unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor to capture fine-grained 3D shape details.
We employ a cross-scene knowledge transfer module to continuously update learned shape patterns, enabling faster convergence and robust representations.
arXiv Detail & Related papers (2026-01-05T18:33:50Z) - C3G: Learning Compact 3D Representations with 2K Gaussians [55.04010158339562]
Recent approaches use per-pixel 3D Gaussian Splatting for reconstruction, followed by a 2D-to-3D feature lifting stage for scene understanding.
We propose C3G, a novel feed-forward framework that estimates compact 3D Gaussians only at essential spatial locations.
arXiv Detail & Related papers (2025-12-03T17:59:05Z) - IBGS: Image-Based Gaussian Splatting [17.07390802338029]
3D Gaussian Splatting (3DGS) has recently emerged as a fast, high-quality method for novel view synthesis (NVS).
We propose Image-Based Gaussian Splatting, an efficient alternative that leverages high-resolution source images for fine details and view-specific color modeling.
Experiments on standard NVS benchmarks show that our method significantly outperforms prior Gaussian Splatting approaches in rendering quality, without increasing the storage footprint.
arXiv Detail & Related papers (2025-11-18T11:03:27Z) - Triangle Splatting+: Differentiable Rendering with Opaque Triangles [54.18495204764292]
We introduce Triangle Splatting+, which directly optimizes triangles within a differentiable splatting framework.
Our method surpasses prior splatting approaches in visual fidelity while remaining efficient and fast to train.
The resulting semi-connected meshes support downstream applications such as physics-based simulation or interactive walkthroughs.
arXiv Detail & Related papers (2025-09-29T17:43:46Z) - ${C}^{3}$-GS: Learning Context-aware, Cross-dimension, Cross-scale Feature for Generalizable Gaussian Splatting [16.868578618340262]
Generalizable Gaussian Splatting aims to synthesize novel views for unseen scenes without per-scene optimization.
We propose $\mathbf{C}^{3}$-GS, a framework that enhances feature learning by incorporating context-aware, cross-dimension, and cross-scale constraints.
Our architecture integrates three lightweight modules into a unified rendering pipeline, improving feature fusion and enabling synthesis without requiring additional supervision.
arXiv Detail & Related papers (2025-08-28T13:12:18Z) - AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views [57.13066710710485]
AnySplat is a feed-forward network for novel view synthesis from uncalibrated image collections.
A single forward pass yields a set of 3D Gaussian primitives encoding both scene geometry and appearance.
In extensive zero-shot evaluations, AnySplat matches the quality of pose-aware baselines in both sparse and dense view scenarios.
arXiv Detail & Related papers (2025-05-29T17:49:56Z) - Occam's LGS: An Efficient Approach for Language Gaussian Splatting [57.00354758206751]
We show that the complicated pipelines for language 3D Gaussian Splatting are simply unnecessary.
We apply Occam's razor to the task at hand, leading to a highly efficient weighted multi-view feature aggregation technique.
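The entry above names a weighted multi-view feature aggregation; the blurb gives no details, so the following is only a generic, hypothetical sketch of what such an aggregation could look like (a per-Gaussian feature computed as a normalized weighted sum of its per-view 2D features):

```python
import numpy as np

def aggregate_multiview(features, weights):
    """Illustrative weighted aggregation of per-view features for one
    Gaussian. Names and the weighting scheme are assumptions, not the
    paper's method.

    features: (V, D) 2D feature observed for this Gaussian in each of V views.
    weights:  (V,) nonnegative per-view weights, e.g. the Gaussian's
              blending contribution in each view.
    Returns the (D,) aggregated feature.
    """
    w = weights / weights.sum()  # normalize so the weights sum to 1
    return w @ features
```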
arXiv Detail & Related papers (2024-12-02T18:50:37Z) - Few-shot Novel View Synthesis using Depth Aware 3D Gaussian Splatting [0.0]
3D Gaussian splatting has surpassed neural radiance field methods in novel view synthesis.
It produces a high-quality rendering with a lot of input views, but its performance drops significantly when only a few views are available.
We propose a depth-aware Gaussian splatting method for few-shot novel view synthesis.
arXiv Detail & Related papers (2024-10-14T20:42:30Z) - HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction [46.269350101349715]
HiSplat is a novel framework for generalizable 3D Gaussian Splatting.
It generates hierarchical 3D Gaussians via a coarse-to-fine strategy.
It significantly enhances reconstruction quality and cross-dataset generalization.
arXiv Detail & Related papers (2024-10-08T17:59:32Z) - Splatt3R: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs [29.669534899109028]
We introduce Splatt3R, a pose-free, feed-forward method for in-the-wild 3D reconstruction and novel view synthesis from stereo pairs.
Given uncalibrated natural images, Splatt3R can predict 3D Gaussian Splats without requiring any camera parameters or depth information.
Splatt3R can reconstruct scenes at 4 FPS at 512 x 512 resolution, and the resultant splats can be rendered in real-time.
arXiv Detail & Related papers (2024-08-25T18:27:20Z) - FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes [50.534213038479926]
FreeSplat is capable of reconstructing geometrically consistent 3D scenes from long sequence input towards free-view synthesis.
We propose a simple but effective free-view training strategy that ensures robust view synthesis across broader view range regardless of the number of views.
arXiv Detail & Related papers (2024-05-28T08:40:14Z) - DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z) - StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting [141.05924680451804]
StyleGaussian is a novel 3D style transfer technique.
It allows instant transfer of any image's style to a 3D scene at 10 frames per second (fps).
arXiv Detail & Related papers (2024-03-12T16:44:52Z) - SplatMesh: Interactive 3D Segmentation and Editing Using Mesh-Based Gaussian Splatting [86.50200613220674]
A key challenge in 3D-based interactive editing is the absence of an efficient representation that balances diverse modifications with high-quality view synthesis under a given memory constraint.
We introduce SplatMesh, a novel fine-grained interactive 3D segmentation and editing algorithm that integrates 3D Gaussian Splatting with a precomputed mesh.
By segmenting and editing the simplified mesh, we can effectively edit the Gaussian splats as well, as we demonstrate through extensive experiments on real and synthetic datasets.
arXiv Detail & Related papers (2023-12-26T02:50:42Z) - Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis [0.552480439325792]
High-fidelity scene reconstruction with an optimized 3D Gaussian splat representation has been introduced for novel view synthesis from sparse image sets.
We propose a compressed 3D Gaussian splat representation that utilizes sensitivity-aware vector clustering with quantization-aware training to compress directional colors and Gaussian parameters.
arXiv Detail & Related papers (2023-11-17T14:40:43Z) - Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
arXiv Detail & Related papers (2023-05-03T17:56:01Z) - One-Shot Neural Fields for 3D Object Understanding [112.32255680399399]
We present a unified and compact scene representation for robotics.
Each object in the scene is depicted by a latent code capturing geometry and appearance.
This representation can be decoded for various tasks such as novel view rendering, 3D reconstruction, and stable grasp prediction.
arXiv Detail & Related papers (2022-10-21T17:33:14Z) - Unified Implicit Neural Stylization [80.59831861186227]
This work explores a new intriguing direction: training a stylized implicit representation.
We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representation, neural radiance field, and signed distance function.
Our solution is a Unified Implicit Neural Stylization framework, dubbed INS.
arXiv Detail & Related papers (2022-04-05T02:37:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.