Revising Densification in Gaussian Splatting
- URL: http://arxiv.org/abs/2404.06109v1
- Date: Tue, 9 Apr 2024 08:20:37 GMT
- Title: Revising Densification in Gaussian Splatting
- Authors: Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
- Abstract summary: We introduce a pixel-error driven formulation for density control in 3DGS, leveraging an auxiliary, per-pixel error function as the criterion for densification.
Our approach leads to consistent quality improvements across a variety of benchmark scenes, without sacrificing the method's efficiency.
- Score: 23.037676471903215
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we address the limitations of Adaptive Density Control (ADC) in 3D Gaussian Splatting (3DGS), a scene representation method achieving high-quality, photorealistic results for novel view synthesis. ADC has been introduced for automatic 3D point primitive management, controlling densification and pruning, albeit with certain limitations in the densification logic. Our main contribution is a more principled, pixel-error driven formulation for density control in 3DGS, leveraging an auxiliary, per-pixel error function as the criterion for densification. We further introduce a mechanism to control the total number of primitives generated per scene and correct a bias in the current opacity handling strategy of ADC during cloning operations. Our approach leads to consistent quality improvements across a variety of benchmark scenes, without sacrificing the method's efficiency.
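As a rough illustration of the two ideas in the abstract (error-driven selection under a global primitive budget, and an opacity correction on clone), here is a minimal NumPy sketch. The function name, the threshold/budget parameters, and the error-attribution step are assumptions for illustration, not the paper's actual code; the `1 - sqrt(1 - alpha)` correction is one way to make a cloned pair's combined alpha match the original Gaussian.

```python
import numpy as np

def select_for_densification(per_gaussian_error, opacities, max_new, error_threshold):
    """Hypothetical selection step: rank Gaussians by accumulated per-pixel
    error attributed to them and densify the worst offenders, capped by a
    global budget on how many new primitives may be created."""
    # Only Gaussians whose attributed error exceeds the threshold are candidates.
    candidates = np.flatnonzero(per_gaussian_error > error_threshold)
    # Respect the global budget: keep only the highest-error candidates.
    order = np.argsort(per_gaussian_error[candidates])[::-1]
    chosen = candidates[order[:max_new]]
    # Opacity correction on clone (illustrative): choose alpha' so that two
    # overlapping copies reproduce the original coverage, 1-(1-a')^2 = a.
    corrected_opacity = 1.0 - np.sqrt(1.0 - opacities[chosen])
    return chosen, corrected_opacity
```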
Related papers
- MVG-Splatting: Multi-View Guided Gaussian Splatting with Adaptive Quantile-Based Geometric Consistency Densification [8.099621725105857]
We introduce MVG-Splatting, a solution guided by Multi-View considerations.
We propose an adaptive quantile-based method that dynamically determines the level of additional densification.
This approach significantly enhances the overall fidelity and accuracy of the 3D reconstruction process.
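The adaptive quantile idea above can be sketched in a few lines: points whose error exceeds a chosen quantile of the per-point error distribution are flagged for extra densification. The function name and the quantile parameter `q` are assumptions, not MVG-Splatting's actual interface.

```python
import numpy as np

def quantile_densify_mask(point_errors, q=0.9):
    """Illustrative quantile-based criterion: flag points whose geometric
    consistency error exceeds the q-th quantile for additional densification.
    The quantile adapts the threshold to the scene's own error distribution."""
    threshold = np.quantile(point_errors, q)
    return point_errors > threshold
```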
arXiv Detail & Related papers (2024-07-16T15:24:01Z) - Gaussian Splatting with Localized Points Management [52.009874685460694]
Localized Point Management (LPM) is capable of identifying those error-contributing zones in the highest demand for both point addition and geometry calibration.
LPM applies point densification in the identified zone, whilst resetting the opacity of those points residing in front of these regions so that a new opportunity is created to correct ill-conditioned points.
Notably, LPM improves both vanilla 3DGS and SpaceTimeGS to achieve state-of-the-art rendering quality while retaining real-time speeds.
arXiv Detail & Related papers (2024-06-06T16:55:07Z) - LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS, where a trainable binary mask applied to the importance scores automatically finds the optimal pruning ratio.
Experiments have shown that LP-3DGS consistently produces a good balance that is both efficient and high quality.
arXiv Detail & Related papers (2024-05-29T05:58:34Z) - End-to-End Rate-Distortion Optimized 3D Gaussian Representation [33.20840558425759]
We formulate the compact 3D Gaussian learning as an end-to-end Rate-Distortion Optimization problem.
We introduce dynamic pruning and entropy-constrained vector quantization (ECVQ) that optimize the rate and distortion at the same time.
We verify our method on both real and synthetic scenes, showing that RDO-Gaussian reduces the size of the 3D Gaussian representation by over 40x.
arXiv Detail & Related papers (2024-04-09T14:37:54Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the evaluated dataset, with an improvement of 1.15 dB in PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - Experimental 3D super-localization with Laguerre-Gaussian modes [22.67311839285875]
In this work, we rigorously derive the ultimate 3D localization limits of Laguerre-Gaussian (LG) modes and their superposition.
Our findings reveal that a significant portion of the information required for achieving 3D super-localization of LG modes can be obtained through feasible intensity detection.
In the presence of realistic aberration, the algorithm robustly achieves the Cramér-Rao lower bound.
arXiv Detail & Related papers (2023-12-18T09:19:20Z) - CoGS: Controllable Gaussian Splatting [5.909271640907126]
Controllable Gaussian Splatting (CoGS) is a new method for capturing and re-animating 3D structures.
CoGS offers real-time control of dynamic scenes without the prerequisite of pre-computing control signals.
In our evaluations, CoGS consistently outperformed existing dynamic and controllable neural representations in terms of visual fidelity.
arXiv Detail & Related papers (2023-12-09T20:06:29Z) - Towards Model Generalization for Monocular 3D Object Detection [57.25828870799331]
We present an effective unified camera-generalized paradigm (CGP) for Mono3D object detection.
We also propose the 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via an instance-level augment.
Our method, called DGMono3D, achieves remarkable performance on all evaluated datasets and surpasses the SoTA unsupervised domain adaptation scheme.
arXiv Detail & Related papers (2022-05-23T23:05:07Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that an effective alternative is to devise an approximate loss that achieves trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss called KFIoU is easier to implement and works better compared with exact SkewIoU.
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
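The step-wise refinement described in the last entry can be sketched as a toy loop that adjusts a single parameter per step. The paper learns the step-selection policy with reinforcement learning; in this hypothetical sketch a greedy oracle that can see the target stands in for that learned policy, purely to show the one-parameter-per-step structure.

```python
def refine_box(pred, target, step=0.1, max_steps=100):
    """Toy sketch of iterative refinement: at each step, adjust only the
    single parameter with the largest residual by a fixed increment."""
    pred = list(pred)
    for _ in range(max_steps):
        residuals = [t - p for p, t in zip(pred, target)]
        # Pick the one parameter to change this step.
        i = max(range(len(pred)), key=lambda j: abs(residuals[j]))
        if abs(residuals[i]) < step / 2:  # converged to within half a step
            break
        pred[i] += step if residuals[i] > 0 else -step
    return pred
```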
This list is automatically generated from the titles and abstracts of the papers on this site.