G2P: Gaussian-to-Point Attribute Alignment for Boundary-Aware 3D Semantic Segmentation
- URL: http://arxiv.org/abs/2601.03510v1
- Date: Wed, 07 Jan 2026 01:44:29 GMT
- Title: G2P: Gaussian-to-Point Attribute Alignment for Boundary-Aware 3D Semantic Segmentation
- Authors: Hojun Song, Chae-yeong Song, Jeong-hun Hong, Chaewon Moon, Dong-hwi Kim, Gahyeon Kim, Soo Ye Kim, Yiyi Liao, Jaehyup Lee, Sang-hyo Park
- Abstract summary: We propose Gaussian-to-Point (G2P), which transfers appearance-aware attributes from 3D Gaussian Splatting to point clouds for more discriminative and appearance-consistent segmentation. Our approach achieves superior performance on standard benchmarks and shows significant improvements on geometrically challenging classes, all without any 2D or language supervision.
- Score: 22.750868489199686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation on point clouds is critical for 3D scene understanding. However, sparse and irregular point distributions provide limited appearance evidence, making geometry-only features insufficient to distinguish objects with similar shapes but distinct appearances (e.g., color, texture, material). We propose Gaussian-to-Point (G2P), which transfers appearance-aware attributes from 3D Gaussian Splatting to point clouds for more discriminative and appearance-consistent segmentation. G2P addresses the misalignment between optimized Gaussians and the original point geometry by establishing point-wise correspondences. By leveraging Gaussian opacity attributes, we resolve the geometric ambiguity that limits existing models. Additionally, Gaussian scale attributes enable precise boundary localization in complex 3D scenes. Extensive experiments demonstrate that our approach achieves superior performance on standard benchmarks and shows significant improvements on geometrically challenging classes, all without any 2D or language supervision.
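The abstract's core idea, attaching appearance-aware Gaussian attributes (opacity, scale) to each point via point-wise correspondences, can be sketched as a nearest-neighbor transfer. This is a simplified illustration, not the paper's actual alignment procedure: the function name and the choice of a single nearest Gaussian per point are assumptions for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_gaussian_attributes(points, gaussian_means, gaussian_opacity, gaussian_scales):
    """Attach the opacity and scale of the nearest optimized Gaussian to each
    input point, then concatenate them onto the raw xyz coordinates.
    A hypothetical stand-in for G2P's point-wise correspondence step."""
    tree = cKDTree(gaussian_means)
    _, idx = tree.query(points, k=1)      # index of nearest Gaussian per point
    opacity = gaussian_opacity[idx]       # (N, 1) appearance-aware opacity
    scales = gaussian_scales[idx]         # (N, 3) anisotropic extents
    # Resulting (N, 7) features could feed a standard segmentation backbone.
    return np.concatenate([points, opacity, scales], axis=1)
```

With N points and M Gaussians, each query is O(log M), so the transfer scales to scene-sized point clouds; a real implementation would also have to handle points far from any Gaussian.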
Related papers
- Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding [86.55824709875598]
We propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes both semantic and rendering branches. Unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor to capture fine-grained 3D shape details. We employ a cross-scene knowledge transfer module to continuously update learned shape patterns, enabling faster convergence and robust representations.
arXiv Detail & Related papers (2026-01-05T18:33:50Z) - GVSynergy-Det: Synergistic Gaussian-Voxel Representations for Multi-View 3D Object Detection [18.809986709717446]
Image-based 3D object detection aims to identify and localize objects in 3D space using only RGB images. Existing image-based approaches face two critical challenges: methods achieving high accuracy typically require dense 3D supervision. We present GVSynergy-Det, a novel framework that enhances 3D detection through synergistic Gaussian-Voxel representation learning.
arXiv Detail & Related papers (2025-12-29T03:34:39Z) - GeoPurify: A Data-Efficient Geometric Distillation Framework for Open-Vocabulary 3D Segmentation [57.8059956428009]
Recent attempts to transfer features from 2D Vision-Language Models to 3D semantic segmentation expose a persistent trade-off. We propose GeoPurify, which applies a small Student Affinity Network to 2D VLM-generated 3D point features using geometric priors distilled from a 3D self-supervised teacher model. Benefiting from latent geometric information and the learned affinity network, GeoPurify effectively mitigates the trade-off and achieves superior data efficiency.
arXiv Detail & Related papers (2025-10-02T16:37:56Z) - GaussianMorphing: Mesh-Guided 3D Gaussians for Semantic-Aware Object Morphing [31.47036497656664]
We introduce a novel framework for semantic-aware 3D shape and texture morphing from multi-view images. The core of our framework is a unified deformation strategy that anchors 3D Gaussians to reconstructed mesh patches. On our proposed TexMorph benchmark, GaussianMorphing substantially outperforms prior 2D/3D methods.
arXiv Detail & Related papers (2025-10-02T14:07:55Z) - Trace3D: Consistent Segmentation Lifting via Gaussian Instance Tracing [27.24794829116753]
We address the challenge of lifting 2D visual segmentation to 3D in Gaussian Splatting. Existing methods often suffer from inconsistent 2D masks across viewpoints and produce noisy segmentation boundaries. We introduce Gaussian Instance Tracing (GIT), which augments the standard Gaussian representation with an instance weight matrix across input views.
arXiv Detail & Related papers (2025-08-05T08:54:17Z) - C-DOG: Multi-View Multi-instance Feature Association Using Connected δ-Overlap Graphs [4.576442835703357]
We introduce C-DOG (Connected delta-Overlap Graph), an algorithm for robust geometric feature association. In a C-DOG graph, two nodes representing 2D feature points from distinct views are connected by an edge if they correspond to the same 3D point. Experiments on synthetic benchmarks demonstrate that C-DOG not only outperforms geometry-based baseline algorithms but also remains remarkably robust under demanding conditions.
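The graph structure described above implies that multi-view instances fall out as connected components: grouping all 2D detections linked by edges yields one cluster per 3D point. The sketch below illustrates only that grouping step with a union-find; the edge-construction criterion (the delta-overlap test itself) is not specified here, and the function and node naming are assumptions.

```python
from collections import defaultdict

def associate_features(nodes, edges):
    """Group 2D feature detections (view_id, feature_id) into per-3D-point
    clusters via connected components over the association graph.
    A simplified reading of C-DOG's grouping; edge criteria are assumed given."""
    parent = {n: n for n in nodes}

    def find(x):  # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:  # union endpoints of every association edge
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    groups = defaultdict(list)
    for n in nodes:
        groups[find(n)].append(n)
    return list(groups.values())
```

Each resulting group collects the per-view observations of one candidate 3D point, ready for triangulation or outlier pruning.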
arXiv Detail & Related papers (2025-07-18T17:23:45Z) - GraphGSOcc: Semantic-Geometric Graph Transformer with Dynamic-Static Decoupling for 3D Gaussian Splatting-based Occupancy Prediction [2.3239379129613535]
GraphGSOcc is a novel framework that combines a semantic-geometric graph Transformer with dynamic-static object decoupling. It achieves state-of-the-art performance on the SurroundOcc-nuScenes, Occ3D-nuScenes, OpenOcc and KITTI occupancy benchmarks.
arXiv Detail & Related papers (2025-06-13T06:09:57Z) - COB-GS: Clear Object Boundaries in 3DGS Segmentation Based on Boundary-Adaptive Gaussian Splitting [67.03992455145325]
3D segmentation based on 3D Gaussian Splatting (3DGS) struggles with accurately delineating object boundaries. We introduce Clear Object Boundaries for 3DGS (COB-GS), which aims to improve segmentation accuracy. For semantic guidance, we introduce a boundary-adaptive Gaussian splitting technique. For visual optimization, we rectify the degraded texture of the 3DGS scene.
arXiv Detail & Related papers (2025-03-25T08:31:43Z) - SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition [66.56357905500512]
3D Gaussian Splatting has emerged as an alternative 3D representation for novel view synthesis. We propose SAGD, a conceptually simple yet effective boundary-enhanced segmentation pipeline for 3D-GS. Our approach achieves high-quality 3D segmentation without rough boundary issues, and can be easily applied to other scene editing tasks.
arXiv Detail & Related papers (2024-01-31T14:19:03Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - Boundary-Aware Geometric Encoding for Semantic Segmentation of Point Clouds [45.270215729464056]
Boundary information plays a significant role in 2D image segmentation, while usually being ignored in 3D point cloud segmentation.
We propose a Boundary Prediction Module (BPM) to predict boundary points.
Based on the predicted boundary, a boundary-aware Geometric Encoding Module (GEM) is designed to encode geometric information and aggregate features with discrimination in a neighborhood.
arXiv Detail & Related papers (2021-01-07T05:38:19Z) - Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose the Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing. GDANet disentangles point clouds into the contour and flat parts of 3D objects, denoted by sharp and gentle variation components, respectively. Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.