Revisiting 3D Reconstruction Kernels as Low-Pass Filters
- URL: http://arxiv.org/abs/2601.17900v1
- Date: Sun, 25 Jan 2026 16:37:16 GMT
- Title: Revisiting 3D Reconstruction Kernels as Low-Pass Filters
- Authors: Shengjun Zhang, Min Chen, Yibo Wei, Mingyu Dong, Yueqi Duan
- Abstract summary: 3D reconstruction recovers 3D signals from sampled discrete 2D pixels. In this paper, we revisit 3D reconstruction from the perspective of signal processing. We introduce a Jinc kernel with an instantaneous drop to zero magnitude exactly at the cutoff frequency.
- Score: 29.366077791499738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction aims to recover 3D signals from sampled discrete 2D pixels, with the goal of converging to continuous 3D spaces. In this paper, we revisit 3D reconstruction from the perspective of signal processing, identifying the periodic spectral extension induced by discrete sampling as the fundamental challenge. Previous 3D reconstruction kernels, such as Gaussians, exponential functions, and Student's t distributions, serve as low-pass filters to isolate the baseband spectrum. However, their non-ideal low-pass property causes high-frequency components to overlap with low-frequency components in the discrete-time signal's spectrum. To this end, we introduce the Jinc kernel, whose magnitude drops instantaneously to zero exactly at the cutoff frequency, corresponding to the ideal low-pass filter. As the Jinc kernel suffers from slow decay in the spatial domain, we further propose modulated kernels to strike an effective balance, achieving superior rendering performance by reconciling spatial efficiency and frequency-domain fidelity. Experimental results demonstrate the effectiveness of our Jinc and modulated kernels.
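The trade-off the abstract describes can be illustrated numerically. The sketch below is an illustration under stated assumptions, not the authors' implementation: it evaluates the normalized jinc kernel, jinc(x) = 2 J1(x)/x, via the Bessel integral representation, and a hypothetical Gaussian-windowed variant (`modulated_jinc`, with an assumed window width `sigma`) standing in for the paper's modulated kernels, which trade a slightly softened frequency cutoff for much faster spatial decay.

```python
import math

def bessel_j1(x: float, n: int = 2000) -> float:
    """J1(x) via the integral representation
    J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt,
    evaluated with the midpoint rule on n subintervals."""
    h = math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.cos(t - x * math.sin(t))
    return total * h / math.pi

def jinc(x: float) -> float:
    """Normalized jinc kernel with jinc(0) = 1; its 2D Fourier transform
    is an ideal (disk-shaped) low-pass filter, but it decays only
    slowly (~x^{-3/2}) in the spatial domain."""
    if abs(x) < 1e-12:
        return 1.0
    return 2.0 * bessel_j1(x) / x

def modulated_jinc(x: float, sigma: float) -> float:
    """Hypothetical Gaussian-windowed jinc: the window suppresses the
    slowly decaying spatial tail at the cost of blurring the cutoff."""
    return jinc(x) * math.exp(-x * x / (2.0 * sigma * sigma))

# First zero of jinc sits at the first root of J1 (~3.8317); beyond it
# the unwindowed tail keeps ringing while the windowed kernel dies off.
print(jinc(0.0))                                          # 1.0
print(abs(jinc(3.8317)) < 1e-3)                           # True
print(abs(modulated_jinc(10.0, 3.0)) < abs(jinc(10.0)))   # True
```

The last comparison is the point of the modulated kernels: at x = 10 the raw jinc tail is still non-negligible, while the Gaussian window has suppressed it by several orders of magnitude, which is what makes a spatially truncated kernel practical for rendering.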
Related papers
- WaveletGaussian: Wavelet-domain Diffusion for Sparse-view 3D Gaussian Object Reconstruction [5.93524554224854]
We present WaveletGaussian, a framework for more efficient sparse-view 3D Gaussian object reconstruction. Our key idea is to shift diffusion into the wavelet domain, while high-frequency subbands are refined with a lightweight network. Experiments across two benchmark datasets, Mip-NeRF 360 and Omni3D, show WaveletGaussian achieves competitive rendering quality.
arXiv Detail & Related papers (2025-09-23T14:34:10Z) - 3DGabSplat: 3D Gabor Splatting for Frequency-adaptive Radiance Field Rendering [50.04967868036964]
3D Gaussian Splatting (3DGS) has enabled real-time rendering while maintaining high-fidelity novel view synthesis. We propose 3D Gabor Splatting (3DGabSplat), which incorporates a novel 3D Gabor-based primitive with multiple directional 3D frequency responses. We achieve a 1.35 dB PSNR gain over 3DGS with simultaneously reduced primitive count and memory consumption.
arXiv Detail & Related papers (2025-08-07T12:49:44Z) - From Coarse to Fine: Learnable Discrete Wavelet Transforms for Efficient 3D Gaussian Splatting [5.026688852582894]
AutoOpti3DGS is a training-time framework that automatically restrains Gaussian proliferation without sacrificing visual fidelity. Its wavelet-driven, coarse-to-fine process delays the formation of redundant fine Gaussians.
arXiv Detail & Related papers (2025-06-29T00:27:17Z) - PSRGS:Progressive Spectral Residual of 3D Gaussian for High-Frequency Recovery [3.310033172069517]
3D Gaussian Splatting (3D GS) achieves impressive results in novel view synthesis for small, single-object scenes. However, when applied to large-scale remote sensing scenes, 3D GS faces challenges. We propose PSRGS, a progressive optimization scheme based on spectral residual maps.
arXiv Detail & Related papers (2025-03-02T10:52:46Z) - Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results. 3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z) - Spectral-GS: Taming 3D Gaussian Splatting with Spectral Entropy [14.320240635262756]
3D-GS lacks shape awareness, relying instead on spectral radius and view positional gradients to determine splitting.
Our Spectral-GS, based on spectral analysis, introduces 3D shape-aware splitting and 2D view-consistent filtering strategies.
arXiv Detail & Related papers (2024-09-19T13:38:04Z) - CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization [67.47895278233717]
We develop a progressive frequency regularization technique to tackle the over-reconstruction issue within the frequency space.
FreGS achieves superior novel view synthesis and outperforms the state-of-the-art consistently.
arXiv Detail & Related papers (2024-03-11T17:00:27Z) - GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.