CoSSegGaussians: Compact and Swift Scene Segmenting 3D Gaussians with
Dual Feature Fusion
- URL: http://arxiv.org/abs/2401.05925v3
- Date: Tue, 30 Jan 2024 12:46:04 GMT
- Title: CoSSegGaussians: Compact and Swift Scene Segmenting 3D Gaussians with
Dual Feature Fusion
- Authors: Bin Dou, Tianyu Zhang, Yongjia Ma, Zhaohui Wang, Zejian Yuan
- Abstract summary: We propose a method for compact 3D-consistent scene segmentation at fast rendering speeds with only RGB images as input.
Our model outperforms baselines on both semantic and panoptic zero-shot segmentation tasks.
- Score: 17.778755539808547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Compact and Swift Segmenting 3D Gaussians (CoSSegGaussians), a
method for compact 3D-consistent scene segmentation at fast rendering speeds
with only RGB images as input. Previous NeRF-based segmentation methods have
relied on time-consuming neural scene optimization. While recent 3D Gaussian
Splatting has notably improved speed, existing Gaussian-based segmentation
methods struggle to produce compact masks, especially in zero-shot
segmentation. This issue probably stems from their straightforward assignment
of learnable parameters to each Gaussian, resulting in a lack of robustness
against cross-view inconsistent 2D machine-generated labels. Our method aims to
address this problem by employing Dual Feature Fusion Network as Gaussians'
segmentation field. Specifically, we first optimize 3D Gaussians under RGB
supervision. After the Gaussian Locating stage, DINO features extracted from the
images are explicitly unprojected onto the Gaussians and then combined with
spatial features from an efficient point cloud processing network. A feature
aggregation module fuses them in a global-to-local strategy to produce compact
segmentation features. Experimental results show that our model outperforms
baselines on both semantic and panoptic zero-shot segmentation tasks, while
consuming less than 10% of the inference time of NeRF-based methods. Code and
more results will be available at https://David-Dou.github.io/CoSSegGaussians
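As a rough illustration of the pipeline the abstract describes (locate Gaussians under RGB supervision, unproject 2D DINO features onto them, then fuse with spatial point features in a global-to-local manner), here is a minimal PyTorch sketch. It assumes a single camera view, an invented fusion MLP, and a mean-pooled stand-in for the global pathway; none of the names, shapes, or module sizes come from the paper's released code.

```python
# Minimal sketch, assuming PyTorch and a single camera view; the real method
# aggregates DINO features across many views and uses a dedicated point
# network (shapes and module sizes here are invented for illustration).
import torch
import torch.nn as nn
import torch.nn.functional as F

def unproject_dino_features(centers, dino_map, K, w2c):
    """Sample a per-Gaussian DINO feature by projecting each 3D center
    into one view's dense 2D feature map.

    centers:  (N, 3) Gaussian centers in world coordinates
    dino_map: (C, H, W) DINO feature map for the view
    K:        (3, 3) intrinsics; w2c: (4, 4) world-to-camera matrix
    """
    C, H, W = dino_map.shape
    homo = torch.cat([centers, torch.ones_like(centers[:, :1])], dim=1)
    cam = (w2c @ homo.T).T[:, :3]                     # camera-space points
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)       # perspective divide
    # grid_sample expects normalized coordinates in [-1, 1]
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    sampled = F.grid_sample(dino_map[None], grid[None, :, None, :],
                            align_corners=True)       # (1, C, N, 1)
    return sampled[0, :, :, 0].T                      # (N, C)

class DualFeatureFusion(nn.Module):
    """Fuse unprojected DINO features with spatial features, injecting a
    scene-pooled 'global' signal into each Gaussian's 'local' feature."""
    def __init__(self, dino_dim=384, spatial_dim=64, out_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dino_dim + 2 * spatial_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim))

    def forward(self, dino_feats, spatial_feats):     # (N, 384), (N, 64)
        global_feat = spatial_feats.mean(dim=0, keepdim=True)
        global_feat = global_feat.expand_as(spatial_feats)
        fused = torch.cat([dino_feats, spatial_feats, global_feat], dim=-1)
        return self.mlp(fused)                        # per-Gaussian seg feature
```

In the actual method, the fused per-Gaussian features would then be rasterized with the usual splatting weights and supervised against machine-generated 2D masks, which is what lends robustness to cross-view label inconsistency.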
Related papers
- GaussianCross: Cross-modal Self-supervised 3D Representation Learning via Gaussian Splatting [16.179607149692398]
We present GaussianCross, a novel cross-modal self-supervised 3D representation learning architecture.
GaussianCross seamlessly converts scale-inconsistent 3D point clouds into a unified cuboid-normalized Gaussian representation.
It achieves superior performance through linear probing (0.1% parameters) and limited data training (1% of scenes) compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-08-04T08:12:44Z)
- FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting [57.97160965244424]
3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis.
Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS.
We present an elastic inference method for 3DGS, achieving substantial rendering performance without additional fine-tuning.
arXiv Detail & Related papers (2025-06-04T17:17:57Z)
- ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required.
Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency.
We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian amount without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z)
- Rethinking End-to-End 2D to 3D Scene Segmentation in Gaussian Splatting [86.15347226865826]
We design a new end-to-end object-aware lifting approach, named Unified-Lift.
We augment each Gaussian point with an additional Gaussian-level feature learned using a contrastive loss to encode instance information.
We conduct experiments on three benchmarks: LERF-Masked, Replica, and Messy Rooms.
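The Gaussian-level contrastive feature mentioned in the Unified-Lift entry above can be pictured with a generic InfoNCE-style objective: features rendered at sampled pixels are pulled together when the pixels share a 2D instance ID and pushed apart otherwise. The sketch below is an assumption about the general recipe, not the paper's actual loss; the function name and shapes are hypothetical.

```python
# Hedged sketch: a generic InfoNCE-style loss over rendered per-pixel
# features and machine-generated instance IDs (not Unified-Lift's exact code).
import torch
import torch.nn.functional as F

def instance_contrastive_loss(pix_feats, mask_ids, temperature=0.1):
    """pix_feats: (P, D) features rendered at P sampled pixels;
    mask_ids: (P,) 2D instance labels (e.g. from SAM) at those pixels.
    Assumes the sampled batch contains at least one repeated ID."""
    f = F.normalize(pix_feats, dim=-1)
    sim = (f @ f.T) / temperature                    # cosine similarities
    eye = torch.eye(len(f), dtype=torch.bool, device=f.device)
    pos = (mask_ids[:, None] == mask_ids[None, :]) & ~eye
    log_prob = sim.masked_fill(eye, float('-inf')).log_softmax(dim=-1)
    return -log_prob[pos].mean()                     # average over positive pairs

# e.g.: loss = instance_contrastive_loss(torch.randn(256, 16),
#                                        torch.randint(0, 8, (256,)))
```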
arXiv Detail & Related papers (2025-03-18T08:42:23Z)
- TSGaussian: Semantic and Depth-Guided Target-Specific Gaussian Splatting from Sparse Views [18.050257821756148]
TSGaussian is a novel framework that combines semantic constraints with depth priors to avoid geometry degradation in novel view synthesis tasks.
Our approach prioritizes computational resources on designated targets while minimizing background allocation.
Extensive experiments demonstrate that TSGaussian outperforms state-of-the-art methods on three standard datasets.
arXiv Detail & Related papers (2024-12-13T11:26:38Z)
- ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
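Gaussian-MAE, by its name, suggests masked autoencoding directly over Gaussian parameters. The sketch below is a guess at that general idea under stated assumptions (a 14-dim parameter token of position, scale, rotation quaternion, opacity, and color; masking tokens in place instead of dropping them); it is not the ShapeSplat authors' architecture.

```python
# Loose sketch of masked autoencoding over per-Gaussian parameters; the
# 14-dim layout (xyz + scale + quaternion + opacity + rgb) and all model
# sizes are assumptions, not ShapeSplat/Gaussian-MAE's actual design.
import torch
import torch.nn as nn

class GaussianMAE(nn.Module):
    def __init__(self, param_dim=14, dim=128, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(param_dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Linear(dim, param_dim)

    def forward(self, params):                        # params: (B, N, 14)
        B, N, _ = params.shape
        tokens = self.embed(params)
        mask = torch.rand(B, N, device=params.device) < self.mask_ratio
        tokens = torch.where(mask[..., None],
                             self.mask_token.expand(B, N, -1), tokens)
        recon = self.decoder(self.encoder(tokens))
        # reconstruction loss only on the masked parameter tokens
        return ((recon - params) ** 2)[mask].mean()
```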
arXiv Detail & Related papers (2024-08-20T14:49:14Z)
- GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats [20.833116566243408]
In this paper, we exploit Gaussians as a 3D representation for 3D GANs by leveraging their efficient and explicit characteristics.
We introduce a generator architecture with a hierarchical multi-scale Gaussian representation that effectively regularizes the position and scale of generated Gaussians.
Experimental results demonstrate that our method achieves a significantly faster rendering speed (about 100x) compared to state-of-the-art 3D-consistent GANs.
arXiv Detail & Related papers (2024-06-05T05:52:20Z)
- GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction [70.65250036489128]
3D semantic occupancy prediction aims to obtain 3D fine-grained geometry and semantics of the surrounding scene.
We propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians.
GaussianFormer achieves comparable performance with state-of-the-art methods with only 17.8% - 24.8% of their memory consumption.
arXiv Detail & Related papers (2024-05-27T17:59:51Z)
- GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting [82.29476781526752]
Reconstructing and rendering 3D objects from highly sparse views is of critical importance for promoting applications of 3D vision techniques.
GaussianObject is a framework to represent and render the 3D object with Gaussian splatting that achieves high rendering quality with only 4 input images.
GaussianObject is evaluated on several challenging datasets, including MipNeRF360, OmniObject3D, OpenIllumination, and our own collection of unposed images.
arXiv Detail & Related papers (2024-02-15T18:42:33Z)
- SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition [66.80822249039235]
3D Gaussian Splatting has emerged as an alternative 3D representation for novel view synthesis.
We propose SAGD, a conceptually simple yet effective boundary-enhanced segmentation pipeline for 3D-GS.
Our approach achieves high-quality 3D segmentation without rough boundary issues, which can be easily applied to other scene editing tasks.
arXiv Detail & Related papers (2024-01-31T14:19:03Z)
- Sparse-view CT Reconstruction with 3D Gaussian Volumetric Representation [13.667470059238607]
Sparse-view CT is a promising strategy for reducing the radiation dose of traditional CT scans.
Recently, 3D Gaussian has been applied to model complex natural scenes.
We investigate their potential for sparse-view CT reconstruction.
arXiv Detail & Related papers (2023-12-25T09:47:33Z)
- Segment Any 3D Gaussians [85.93694310363325]
This paper presents SAGA, a highly efficient 3D promptable segmentation method based on 3D Gaussian Splatting (3D-GS).
Given 2D visual prompts as input, SAGA can segment the corresponding 3D target represented by 3D Gaussians within 4 ms.
We show that SAGA achieves real-time multi-granularity segmentation with quality comparable to state-of-the-art methods.
arXiv Detail & Related papers (2023-12-01T17:15:24Z)
- Gaussian Grouping: Segment and Edit Anything in 3D Scenes [65.49196142146292]
We propose Gaussian Grouping, which extends Gaussian Splatting to jointly reconstruct and segment anything in open-world 3D scenes.
Compared to the implicit NeRF representation, we show that the grouped 3D Gaussians can reconstruct, segment and edit anything in 3D with high visual quality, fine granularity and efficiency.
arXiv Detail & Related papers (2023-12-01T17:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.