Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2506.05327v1
- Date: Thu, 05 Jun 2025 17:58:23 GMT
- Title: Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting
- Authors: Duochao Shi, Weijie Wang, Donny Y. Chen, Zeyu Zhang, Jia-Wang Bian, Bohan Zhuang, Chunhua Shen
- Abstract summary: We introduce PM-Loss, a novel regularization loss based on a pointmap predicted by a pre-trained transformer. With the improved depth map, our method significantly improves the feed-forward 3DGS across various architectures and scenes.
- Score: 57.43483622778394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth maps are widely used in feed-forward 3D Gaussian Splatting (3DGS) pipelines by unprojecting them into 3D point clouds for novel view synthesis. This approach offers advantages such as efficient training, the use of known camera poses, and accurate geometry estimation. However, depth discontinuities at object boundaries often lead to fragmented or sparse point clouds, degrading rendering quality -- a well-known limitation of depth-based representations. To tackle this issue, we introduce PM-Loss, a novel regularization loss based on a pointmap predicted by a pre-trained transformer. Although the pointmap itself may be less accurate than the depth map, it effectively enforces geometric smoothness, especially around object boundaries. With the improved depth map, our method significantly improves the feed-forward 3DGS across various architectures and scenes, delivering consistently better rendering results. Our project page: https://aim-uofa.github.io/PMLoss
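The core operation the abstract relies on is unprojecting a per-view depth map into a 3D point cloud and regularizing it against a pointmap predicted by a pre-trained transformer. The sketch below illustrates one way this could look, assuming pinhole intrinsics and a pointmap aligned to the same world frame; the function names and the simple per-pixel distance are placeholders rather than the authors' released PM-Loss implementation.

```python
import torch

def unproject_depth(depth, K, c2w):
    """Lift a depth map (H, W) to world-space points (H, W, 3).

    Standard pinhole unprojection: `K` is the 3x3 intrinsic matrix and
    `c2w` the 4x4 camera-to-world pose.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()  # homogeneous pixel coords
    rays = pix @ torch.linalg.inv(K).T                             # camera-space rays
    pts_cam = rays * depth.unsqueeze(-1)                           # scale rays by depth
    return pts_cam @ c2w[:3, :3].T + c2w[:3, 3]                    # transform to world space

def pointmap_regularizer(depth, K, c2w, pointmap):
    """Illustrative pointmap-based regularizer (a stand-in for PM-Loss):
    penalize the distance between depth-unprojected points and a pointmap
    predicted by a pre-trained transformer, assumed to be in the same frame.
    """
    pts = unproject_depth(depth, K, c2w)
    return (pts - pointmap).norm(dim=-1).mean()
```

In a feed-forward 3DGS pipeline such a term would presumably be added to the rendering loss so that gradients flow back into the depth branch and smooth the geometry around object boundaries.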
Related papers
- GaussRender: Learning 3D Occupancy with Gaussian Rendering [86.89653628311565]
GaussRender is a module that improves 3D occupancy learning by enforcing projective consistency. Our method penalizes 3D configurations that produce inconsistent 2D projections, thereby enforcing a more coherent 3D structure.
arXiv Detail & Related papers (2025-02-07T16:07:51Z)
- RDG-GS: Relative Depth Guidance with Gaussian Splatting for Real-time Sparse-View 3D Rendering [13.684624443214599]
We present RDG-GS, a novel sparse-view 3D rendering framework with Relative Depth Guidance based on 3D Gaussian Splatting. The core innovation lies in utilizing relative depth guidance to refine the Gaussian field, steering it towards view-consistent spatial geometric representations. Across extensive experiments on Mip-NeRF360, LLFF, DTU, and Blender, RDG-GS demonstrates state-of-the-art rendering quality and efficiency.
arXiv Detail & Related papers (2025-01-19T16:22:28Z)
- SparseGS: Real-Time 360° Sparse View Synthesis using Gaussian Splatting [6.506706621221143]
3D Gaussian Splatting (3DGS) has recently enabled real-time rendering of 3D scenes for novel view synthesis. This technique requires dense training views to accurately reconstruct 3D geometry. We introduce SparseGS, an efficient training pipeline designed to address the limitations of 3DGS in scenarios with sparse training views.
arXiv Detail & Related papers (2023-11-30T21:38:22Z)
- Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images [47.14713579719103]
We introduce a dense depth map as a geometry guide to mitigate overfitting.
The adjusted depth aids in the color-based optimization of 3D Gaussian splatting.
We verify the proposed method on the NeRF-LLFF dataset with varying numbers of few images.
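One plausible reading of this depth-guided setup (a hypothetical sketch, not the paper's code) is a photometric objective augmented with an L1 term against the dense guide depth:

```python
import torch

def depth_guided_loss(pred_rgb, gt_rgb, rendered_depth, guide_depth, lam=0.1):
    """Photometric term plus an L1 penalty against a dense guide depth
    (e.g. from a monocular estimator); `lam` balances the two terms."""
    photometric = torch.abs(pred_rgb - gt_rgb).mean()
    depth_term = torch.abs(rendered_depth - guide_depth).mean()
    return photometric + lam * depth_term
```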
arXiv Detail & Related papers (2023-11-22T13:53:04Z)
- AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation [51.143540967290114]
We propose a method that unlocks a wide range of previously-infeasible geometric augmentations for unsupervised depth computation and estimation.
This is achieved by reversing, or "undo"-ing, geometric transformations to the coordinates of the output depth, warping the depth map back to the original reference frame.
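As a rough illustration of the undo idea, restricted to a 2D affine augmentation (the paper covers a broader family of transformations), the augmented depth prediction can be resampled back into the original reference frame before computing the loss; the helper below is hypothetical, not the authors' code.

```python
import torch
import torch.nn.functional as F

def undo_affine_augmentation(depth_aug, theta):
    """Resample an augmented depth prediction back to the original frame.

    depth_aug: (B, 1, H, W) depth predicted on geometrically augmented input.
    theta:     (B, 2, 3) affine matrix mapping original-frame normalized
               coordinates to the augmented-frame location of the same point.
    Depth value rescaling (needed for zoom-like augmentations) is omitted.
    """
    B, _, H, W = depth_aug.shape
    grid = F.affine_grid(theta, size=(B, 1, H, W), align_corners=False)
    return F.grid_sample(depth_aug, grid, mode="nearest", align_corners=False)
```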
arXiv Detail & Related papers (2023-10-15T05:15:45Z)
- SparseFormer: Attention-based Depth Completion Network [2.9434930072968584]
We introduce a transformer block, SparseFormer, that fuses 3D landmarks with deep visual features to produce dense depth.
The SparseFormer has a global receptive field, making the module especially effective for depth completion with low-density and non-uniform landmarks.
To address the issue of depth outliers among the 3D landmarks, we introduce a trainable refinement module that filters outliers through attention between the sparse landmarks.
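A toy sketch of this kind of fusion, assuming dense per-pixel features as queries and sparse landmark embeddings as keys and values (the module name and dimensions are invented for illustration, not taken from the paper):

```python
import torch
import torch.nn as nn

class LandmarkCrossAttention(nn.Module):
    """Toy cross-attention fusion of sparse 3D landmarks with dense image
    features, in the spirit of the description above (not the authors' code)."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_depth = nn.Linear(dim, 1)  # regress a dense depth value per pixel

    def forward(self, pixel_feats, landmark_feats):
        # pixel_feats: (B, H*W, dim); landmark_feats: (B, N_landmarks, dim)
        fused, _ = self.attn(pixel_feats, landmark_feats, landmark_feats)
        return self.to_depth(fused)  # (B, H*W, 1) dense depth prediction
```

Because every pixel attends to all landmarks, the prediction does not depend on the landmarks forming a dense or uniform pattern, which matches the global-receptive-field claim above.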
arXiv Detail & Related papers (2022-06-09T15:08:24Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- Smooth Mesh Estimation from Depth Data using Non-Smooth Convex Optimization [28.786685021545622]
We build a 3D mesh directly from a depth map and the sparse landmarks triangulated with visual odometry.
Our approach generates a smooth and accurate 3D mesh that substantially improves the state-of-the-art on direct mesh reconstruction while running in real-time.
arXiv Detail & Related papers (2021-08-06T06:29:34Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.