HaloGS: Loose Coupling of Compact Geometry and Gaussian Splats for 3D Scenes
- URL: http://arxiv.org/abs/2505.20267v1
- Date: Mon, 26 May 2025 17:44:04 GMT
- Title: HaloGS: Loose Coupling of Compact Geometry and Gaussian Splats for 3D Scenes
- Authors: Changjian Jiang, Kerui Ren, Linning Xu, Jiong Chen, Jiangmiao Pang, Yu Zhang, Bo Dai, Mulin Yu
- Abstract summary: HaloGS is a dual representation that loosely couples coarse triangles for geometry with Gaussian primitives for appearance. Our design yields a compact yet expressive model capable of photorealistic rendering across both indoor and outdoor environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-fidelity 3D reconstruction and rendering hinge on capturing precise geometry while preserving photorealistic detail. Most existing methods either fuse these goals into a single cumbersome model or adopt hybrid schemes whose uniform primitives force a trade-off between efficiency and fidelity. In this paper, we introduce HaloGS, a dual representation that loosely couples coarse triangles for geometry with Gaussian primitives for appearance, motivated by lightweight classic geometry representations and their proven efficiency in real-world applications. Our design yields a compact yet expressive model capable of photorealistic rendering across both indoor and outdoor environments, seamlessly adapting to varying levels of scene complexity. Experiments on multiple benchmark datasets demonstrate that our method yields both compact, accurate geometry and high-fidelity renderings, especially in challenging scenarios where a robust geometric structure makes a clear difference.
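As a rough illustration of the dual representation described above, here is a minimal Python sketch that couples coarse triangles (geometry) with Gaussian primitives (appearance) by seeding Gaussians near each face. The class name, the per-face attachment, and the barycentric seeding are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a "spawn Gaussians near faces" coupling scheme.
import numpy as np

class DualScene:
    """Coarse triangles for geometry plus Gaussian primitives for appearance."""

    def __init__(self, vertices, faces):
        self.vertices = np.asarray(vertices, dtype=np.float32)  # (V, 3)
        self.faces = np.asarray(faces, dtype=np.int64)          # (F, 3)
        self.gauss_means = np.zeros((0, 3), dtype=np.float32)   # appearance
        self.gauss_face = np.zeros((0,), dtype=np.int64)        # loose parent link

    def spawn_gaussians(self, per_face=4, seed=0):
        """Seed Gaussians near each triangle via random barycentric samples."""
        rng = np.random.default_rng(seed)
        tris = self.vertices[self.faces]                        # (F, 3, 3)
        bary = rng.dirichlet(np.ones(3), size=(len(tris), per_face))
        self.gauss_means = np.einsum("fkj,fjc->fkc", bary, tris).reshape(-1, 3)
        self.gauss_face = np.repeat(np.arange(len(tris)), per_face)

scene = DualScene(vertices=[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                  faces=[[0, 1, 2]])
scene.spawn_gaussians(per_face=8)
print(scene.gauss_means.shape)  # (8, 3): appearance primitives near geometry
```

One way to read the "loose" coupling: because each Gaussian records only a parent face id rather than being rigidly parameterized by the triangle, geometry and appearance can be refined somewhat independently.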
Related papers
- AnchoredDream: Zero-Shot 360° Indoor Scene Generation from a Single View via Geometric Grounding [58.90269958632018]
Single-view indoor scene generation plays a crucial role in a range of real-world applications. Recent approaches have made progress by leveraging diffusion models and depth estimation networks. We propose AnchoredDream, a novel zero-shot pipeline that anchors 360° scene generation on high-fidelity geometry.
arXiv Detail & Related papers (2026-01-23T08:08:12Z)
- Spherical Geometry Diffusion: Generating High-quality 3D Face Geometry via Sphere-anchored Representations [18.442834011472005]
A fundamental challenge in text-to-3D face generation is achieving high-quality geometry. We introduce the Spherical Geometry Representation, a novel face representation that anchors geometric signals to uniform spherical coordinates. We then introduce Spherical Geometry Diffusion, a conditional diffusion framework built upon this 2D map.
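As a hedged sketch of what anchoring per-point geometric signals to spherical coordinates can look like, the snippet below scatters radial distances onto an equirectangular 2D map; the resolution, the choice of signal, and the mapping itself are our assumptions, not the paper's representation.

```python
# Illustrative assumption: signals are radii, rasterized on a (theta, phi) grid.
import numpy as np

def to_spherical_map(points, height=64, width=128):
    """Scatter radial distances onto an equirectangular (theta, phi) grid."""
    pts = np.asarray(points, dtype=np.float32)
    r = np.linalg.norm(pts, axis=1)
    d = pts / r[:, None]                          # unit directions
    theta = np.arccos(np.clip(d[:, 2], -1, 1))    # polar angle in [0, pi]
    phi = np.arctan2(d[:, 1], d[:, 0])            # azimuth in (-pi, pi]
    row = np.minimum((theta / np.pi * height).astype(int), height - 1)
    col = np.minimum(((phi + np.pi) / (2 * np.pi) * width).astype(int), width - 1)
    grid = np.zeros((height, width), dtype=np.float32)
    grid[row, col] = r                            # last write wins; a mean also works
    return grid

face_like = np.random.default_rng(1).normal(size=(2048, 3))
print(to_spherical_map(face_like).shape)  # (64, 128) 2D map of geometric signals
```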
arXiv Detail & Related papers (2026-01-19T20:15:45Z)
- GeoWorld: Unlocking the Potential of Geometry Models to Facilitate High-Fidelity 3D Scene Generation [68.02988074681427]
Previous works leveraging video models for image-to-3D scene generation tend to suffer from geometric distortions and blurry content. In this paper, we renovate the pipeline of image-to-3D scene generation by unlocking the potential of geometry models. Our GeoWorld can generate high-fidelity 3D scenes from a single image and a given camera trajectory, outperforming prior methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2025-11-28T13:55:45Z)
- Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation [62.87088388345378]
We introduce a diffusion-based framework that performs aligned novel-view image and geometry generation via a warping-and-inpainting methodology. The method leverages off-the-shelf geometry predictors to predict partial geometries viewed from reference images. Cross-modal attention distillation is proposed to ensure accurate alignment between generated images and geometry.
arXiv Detail & Related papers (2025-06-13T16:19:00Z)
- GTR: Gaussian Splatting Tracking and Reconstruction of Unknown Objects Based on Appearance and Geometric Complexity [49.31257173003408]
We present a novel method for 6-DoF object tracking and high-quality 3D reconstruction from monocular RGBD video. Our approach demonstrates strong capabilities in recovering high-fidelity object meshes, setting a new standard for single-sensor 3D reconstruction in open-world environments.
arXiv Detail & Related papers (2025-05-17T08:46:29Z)
- Hi3DGen: High-fidelity 3D Geometry Generation from Images via Normal Bridging [15.36983068580743]
Hi3DGen is a novel framework for generating high-fidelity 3D geometry from images via normal bridging. Our work provides a new direction for high-fidelity 3D geometry generation from images by leveraging normal maps as an intermediate representation.
arXiv Detail & Related papers (2025-03-28T08:39:20Z)
- LineGS: 3D Line Segment Representation on 3D Gaussian Splatting [0.0]
LineGS is a novel method that combines geometry-guided 3D line reconstruction with a 3D Gaussian splatting model. The results show significant improvements in both geometric accuracy and model compactness compared to baseline methods.
arXiv Detail & Related papers (2024-11-30T13:29:36Z)
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
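To make the "geometry as a distribution" viewpoint concrete, the sketch below draws i.i.d. samples from the distribution a surface induces, via area-weighted triangle picking; the diffusion model the paper actually learns to represent such distributions is not shown here.

```python
# Illustrative sketch: a surface as a probability distribution over 3D points.
import numpy as np

def sample_surface(vertices, faces, n=1000, seed=0):
    """Draw n i.i.d. points from the uniform distribution over a triangle mesh."""
    rng = np.random.default_rng(seed)
    v = np.asarray(vertices, dtype=np.float32)[np.asarray(faces)]   # (F, 3, 3)
    areas = 0.5 * np.linalg.norm(
        np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]), axis=1)
    pick = rng.choice(len(v), size=n, p=areas / areas.sum())        # area-weighted
    bary = rng.dirichlet(np.ones(3), size=n)                        # uniform in triangle
    return np.einsum("nj,njc->nc", bary, v[pick])

quad = sample_surface([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                      [[0, 1, 2], [0, 2, 3]], n=5000)
print(quad.shape)  # (5000, 3): samples from the surface's point distribution
```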
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics [16.446659867133977]
PartGS is a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition. Our approach demonstrates superior performance compared to state-of-the-art methods across extensive experiments on the DTU, ShapeNet, and real-world datasets.
arXiv Detail & Related papers (2024-08-20T12:30:37Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality assets with 512K Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D structure and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
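A hedged PyTorch sketch of the general idea behind deformable cross-attention from 3D tokens to 2D feature maps follows: each token projects to a reference image location, predicts a few sampling offsets, and aggregates bilinearly sampled features. Shapes, layer names, and the single-view simplification are our assumptions, not GeoLRM's architecture.

```python
# Toy deformable cross-attention: 3D point tokens query a 2D feature map.
import torch
import torch.nn.functional as F

class Deformable3Dto2D(torch.nn.Module):
    def __init__(self, dim, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offsets = torch.nn.Linear(dim, n_points * 2)  # per-token 2D offsets
        self.weights = torch.nn.Linear(dim, n_points)      # attention over samples

    def forward(self, tokens, ref_uv, feat_map):
        # tokens: (B, N, C) features of 3D points; ref_uv: (B, N, 2) in [-1, 1],
        # e.g. projected point locations; feat_map: (B, C, H, W) image features.
        B, N, _ = tokens.shape
        off = self.offsets(tokens).view(B, N, self.n_points, 2) * 0.1
        grid = (ref_uv[:, :, None, :] + off).clamp(-1, 1)             # (B, N, P, 2)
        sampled = F.grid_sample(feat_map, grid, align_corners=False)  # (B, C, N, P)
        attn = self.weights(tokens).softmax(dim=-1)                   # (B, N, P)
        return torch.einsum("bcnp,bnp->bnc", sampled, attn)

layer = Deformable3Dto2D(dim=32)
out = layer(torch.randn(2, 100, 32), torch.rand(2, 100, 2) * 2 - 1,
            torch.randn(2, 32, 16, 16))
print(out.shape)  # torch.Size([2, 100, 32])
```

Sampling only a handful of offset locations per token is what keeps such attention cheap on sparse 3D structure, compared with dense attention over every pixel.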
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- Direct Learning of Mesh and Appearance via 3D Gaussian Splatting [2.8424636089338216]
We propose a learnable scene model that incorporates 3DGS with an explicit geometry representation, namely a mesh. Our model learns the mesh and appearance in an end-to-end manner, where we bind 3D Gaussians to the mesh faces and perform differentiable rendering of 3DGS to obtain photometric supervision.
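The binding is what lets photometric gradients reach the mesh. The toy PyTorch snippet below expresses Gaussian centers in barycentric coordinates of a face so that a dummy loss on the centers backpropagates into the vertices; the actual differentiable 3DGS renderer is not shown, and the loss is a stand-in.

```python
# Toy sketch: Gaussians bound to a face via barycentric coordinates, so a
# photometric-style loss on Gaussian centers updates the mesh vertices too.
import torch

verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]],
                     requires_grad=True)
faces = torch.tensor([[0, 1, 2]])
bary = torch.softmax(torch.randn(8, 3, requires_grad=True), dim=-1)  # learnable

means = torch.einsum("gj,gjc->gc", bary, verts[faces[0]].expand(8, 3, 3))
loss = (means - torch.tensor([0.2, 0.3, 0.0])).pow(2).sum()  # dummy photometric term
loss.backward()
print(verts.grad.shape)  # torch.Size([3, 3]): geometry receives appearance gradients
```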
arXiv Detail & Related papers (2024-05-11T07:56:19Z)
- SAGS: Structure-Aware 3D Gaussian Splatting [53.6730827668389]
We propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene.
SAGS achieves state-of-the-art rendering performance with reduced storage requirements on benchmark novel-view synthesis datasets.
arXiv Detail & Related papers (2024-04-29T23:26:30Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)