TRIM: Scalable 3D Gaussian Diffusion Inference with Temporal and Spatial Trimming
- URL: http://arxiv.org/abs/2511.16642v1
- Date: Thu, 20 Nov 2025 18:49:09 GMT
- Title: TRIM: Scalable 3D Gaussian Diffusion Inference with Temporal and Spatial Trimming
- Authors: Zeyuan Yin, Xiaoming Liu
- Abstract summary: Recent advances in 3D Gaussian diffusion models suffer from time-intensive denoising and post-denoising processing. We propose $\textbf{TRIM}$ ($\textbf{T}$rajectory $\textbf{R}$eduction and $\textbf{I}$nstance $\textbf{M}$ask denoising).
- Score: 10.73970270886881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in 3D Gaussian diffusion models suffer from time-intensive denoising and post-denoising processing due to the massive number of Gaussian primitives, resulting in slow generation and limited scalability along sampling trajectories. To improve the efficiency of 3D diffusion models, we propose $\textbf{TRIM}$ ($\textbf{T}$rajectory $\textbf{R}$eduction and $\textbf{I}$nstance $\textbf{M}$ask denoising), a post-training approach that incorporates both temporal and spatial trimming strategies, to accelerate inference without compromising output quality while supporting the inference-time scaling for Gaussian diffusion models. Instead of scaling denoising trajectories in a costly end-to-end manner, we develop a lightweight selector model to evaluate latent Gaussian primitives derived from multiple sampled noises, enabling early trajectory reduction by selecting candidates with high-quality potential. Furthermore, we introduce instance mask denoising to prune learnable Gaussian primitives by filtering out redundant background regions, reducing inference computation at each denoising step. Extensive experiments and analysis demonstrate that TRIM significantly improves both the efficiency and quality of 3D generation. Source code is available at $\href{https://github.com/zeyuanyin/TRIM}{link}$.
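The abstract describes two trimming strategies: a lightweight selector that keeps only the most promising of several partially denoised noise samples (temporal trimming), and an instance mask that prunes background primitives so later denoising steps run on fewer Gaussians (spatial trimming). A minimal toy sketch of that control flow, with `denoise_step`, `selector_score`, and `instance_mask` as hypothetical stand-ins for the paper's actual models (not the released TRIM code):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(latents, t):
    # Toy stand-in for one diffusion denoising step over Gaussian primitives.
    return latents * 0.9 + 0.1 * rng.standard_normal(latents.shape) / (t + 1)

def selector_score(latents):
    # Toy stand-in for the lightweight selector model; higher means the
    # latent Gaussians look more promising.
    return -np.abs(latents).mean()

def instance_mask(latents, keep_ratio=0.5):
    # Keep the primitives with the largest mean magnitude as a crude
    # foreground proxy; the real method masks redundant background regions.
    k = max(1, int(len(latents) * keep_ratio))
    idx = np.argsort(-np.abs(latents).mean(axis=1))[:k]
    return latents[idx]

def trim_inference(num_candidates=4, num_primitives=1024, dim=14,
                   warmup_steps=3, total_steps=10):
    # 1) Temporal trimming: start several trajectories from different noises,
    #    then keep only the best-scoring one after a few warmup steps,
    #    instead of denoising every trajectory end-to-end.
    candidates = [rng.standard_normal((num_primitives, dim))
                  for _ in range(num_candidates)]
    for t in range(warmup_steps):
        candidates = [denoise_step(c, t) for c in candidates]
    best = max(candidates, key=selector_score)

    # 2) Spatial trimming: prune background primitives once, so every
    #    remaining denoising step operates on fewer Gaussians.
    best = instance_mask(best, keep_ratio=0.5)
    for t in range(warmup_steps, total_steps):
        best = denoise_step(best, t)
    return best

out = trim_inference()
print(out.shape)  # (512, 14)
```

The cost saving in this sketch comes from running only `warmup_steps` on all candidates and the remaining steps on a single pruned set; the actual selector architecture and masking criterion are specified in the paper and repository.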
Related papers
- Diffusion Model-Based Posterior Sampling in Full Waveform Inversion [3.2800968305157205]
Posterior sampling directly on observed seismic shot records is rarely practical at the field scale. Our approach couples diffusion-based posterior sampling with simultaneous-source waveform inversion data. Our method achieves lower model error and better data fit at a substantially reduced computational cost.
arXiv Detail & Related papers (2025-12-14T18:34:12Z)
- ITS3D: Inference-Time Scaling for Text-Guided 3D Diffusion Models [88.04431808574581]
ITS3D is a framework that formulates the task as an optimization problem to identify the most effective Gaussian noise input. We introduce three techniques for improved stability, efficiency, and exploration capability. Experiments demonstrate that ITS3D enhances text-to-3D generation quality.
arXiv Detail & Related papers (2025-11-27T13:46:16Z)
- ReSplat: Learning Recurrent Gaussian Splats [98.14472247275512]
ReSplat is a feed-forward recurrent Gaussian splatting model that iteratively refines 3D Gaussians without explicitly computing gradients. We introduce a compact reconstruction model that operates in a $16\times$ subsampled space, producing $16\times$ fewer Gaussians than previous per-pixel Gaussian models. Our method achieves state-of-the-art performance while significantly reducing the number of Gaussians and improving the rendering speed.
arXiv Detail & Related papers (2025-10-09T17:59:59Z)
- Parallel Sampling of Diffusion Models on $SO(3)$ [6.950206740436355]
In this paper, we design an algorithm to accelerate the diffusion process on the $SO(3)$ manifold. Experiments reveal that our algorithm achieves a speed-up of up to $4.9\times$, significantly reducing the latency for generating a single sample.
arXiv Detail & Related papers (2025-07-14T14:51:02Z)
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Metropolis-Hastings Sampling for 3D Gaussian Reconstruction [31.840492077537018]
We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS). Our framework overcomes limitations by reformulating densification and pruning as a probabilistic sampling process. Our approach achieves faster convergence while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.
arXiv Detail & Related papers (2025-06-15T19:12:37Z)
- Noise Conditional Variational Score Distillation [60.38982038894823]
Noise Conditional Variational Score Distillation (NCVSD) is a novel method for distilling pretrained diffusion models into generative denoisers. By integrating this insight into the Variational Score Distillation framework, we enable scalable learning of generative denoisers.
arXiv Detail & Related papers (2025-06-11T06:01:39Z)
- Second-order Optimization of Gaussian Splats with Importance Sampling [51.95046424364725]
3D Gaussian Splatting (3DGS) is widely used for novel view rendering due to its high quality and fast inference time. We propose a novel second-order optimization strategy based on Levenberg-Marquardt (LM) and Conjugate Gradient (CG). Our method achieves a $3\times$ speedup over standard LM and outperforms Adam by $6\times$ when the Gaussian count is low.
arXiv Detail & Related papers (2025-04-17T12:52:08Z)
- Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis [53.702118455883095]
We propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting.
Our key idea lies in exploring the self-supervisions inherent in the binocular stereo consistency between each pair of binocular images.
Our method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-10-24T15:10:27Z)
- GaussianSR: 3D Gaussian Super-Resolution with 2D Diffusion Priors [14.743494200205754]
High-resolution novel view synthesis (HRNVS) from low-resolution input views is a challenging task due to the lack of high-resolution data.
Previous methods optimize high-resolution Neural Radiance Field (NeRF) from low-resolution input views but suffer from slow rendering speed.
In this work, we base our method on 3D Gaussian Splatting (3DGS) due to its capability of producing high-quality images at a faster rendering speed.
arXiv Detail & Related papers (2024-06-14T15:19:21Z)
- Learning to Discretize Denoising Diffusion ODEs [41.50816120270017]
Diffusion Probabilistic Models (DPMs) are generative models showing competitive performance in various domains. We propose LD3, a lightweight framework designed to learn the optimal time discretization for sampling. We demonstrate analytically and empirically that LD3 improves sampling efficiency with much less computational overhead.
arXiv Detail & Related papers (2024-05-24T12:51:23Z)
- DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation [55.661467968178066]
We propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously.
Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space.
In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks.
arXiv Detail & Related papers (2023-09-28T17:55:05Z)
- Accelerating Diffusion Models via Early Stop of the Diffusion Process [114.48426684994179]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved impressive performance on various generation tasks.
In practice, DDPMs often need hundreds or even thousands of denoising steps to obtain a high-quality sample.
We propose a principled acceleration strategy, referred to as Early-Stopped DDPM (ES-DDPM), for DDPMs.
arXiv Detail & Related papers (2022-05-25T06:40:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.