LiNeXt: Revisiting LiDAR Completion with Efficient Non-Diffusion Architectures
- URL: http://arxiv.org/abs/2511.10209v1
- Date: Fri, 14 Nov 2025 01:39:07 GMT
- Authors: Wenzhe He, Xiaojun Chen, Ruiqi Wang, Ruihui Li, Huilong Pi, Jiapeng Zhang, Zhuo Tang, Kenli Li
- Abstract summary: LiNeXt is a lightweight, non-diffusion network optimized for rapid and accurate point cloud completion. LiNeXt achieves a 199.8x speedup in inference, reduces Chamfer Distance by 50.7%, and uses only 6.1% of the parameters compared with LiDiff.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D LiDAR scene completion from point clouds is a fundamental component of perception systems in autonomous vehicles. Previous methods have predominantly employed diffusion models for high-fidelity reconstruction. However, their multi-step iterative sampling incurs significant computational overhead, limiting their real-time applicability. To address this, we propose LiNeXt, a lightweight, non-diffusion network optimized for rapid and accurate point cloud completion. Specifically, LiNeXt first applies the Noise-to-Coarse (N2C) Module to denoise the input noisy point cloud in a single pass, thereby obviating the multi-step iterative sampling of diffusion-based methods. The Refine Module then takes the coarse point cloud and its intermediate features from the N2C Module to perform more precise refinement, further enhancing structural completeness. Furthermore, we observe that LiDAR point clouds exhibit a distance-dependent spatial distribution, being densely sampled at proximal ranges and sparsely sampled at distal ranges. Accordingly, we propose the Distance-aware Selected Repeat strategy to generate a more uniformly distributed noisy point cloud. On the SemanticKITTI dataset, LiNeXt achieves a 199.8x speedup in inference, reduces Chamfer Distance by 50.7%, and uses only 6.1% of the parameters compared with LiDiff. These results demonstrate the superior efficiency and effectiveness of LiNeXt for real-time scene completion.
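The abstract reports Chamfer Distance (CD) reductions as its main quality metric. For reference, the following is a minimal NumPy sketch of the standard symmetric Chamfer Distance between two point sets; it is not the authors' implementation, which may use a different normalization or distance variant.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    For each point in p, take the squared distance to its nearest neighbor
    in q; average, then add the same term in the other direction.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbor average in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds have CD = 0; a shifted copy has a small positive CD.
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = p + np.array([0.5, 0.0, 0.0])
print(chamfer_distance(p, p))  # 0.0
print(chamfer_distance(p, q))  # 0.5
```

The brute-force (N, M) distance matrix is fine for small clouds; for full LiDAR scans a KD-tree nearest-neighbor query would be used instead.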
Related papers
- Adaptive Spectral Feature Forecasting for Diffusion Sampling Acceleration [58.19554276924402]
We propose spectral diffusion feature forecaster (Spectrum) to enable global, long-range feature reuse with tightly controlled error.
We achieve up to 4.79x speedup on FLUX.1 and 4.67x speedup on Wan2.1-14B, while maintaining much higher sample quality compared with the baselines.
arXiv Detail & Related papers (2026-03-02T08:59:11Z) - PUFM++: Point Cloud Upsampling via Enhanced Flow Matching [15.738247394527024]
PUFM++ is an enhanced flow-matching framework for reconstructing point clouds from sparse, noisy, and partial observations.
We introduce a two-stage flow-matching strategy that first learns a direct, straight-path flow from sparse inputs to dense targets, and then refines it using noise-perturbed samples to better approximate the terminal marginal distribution.
Experiments on synthetic benchmarks and real-world scans show that PUFM++ sets a new state of the art in point cloud upsampling.
arXiv Detail & Related papers (2025-12-24T06:30:42Z) - Adaptive Dual-Weighted Gravitational Point Cloud Denoising Method [10.397999108705962]
This paper proposes an adaptive dual-weight gravitational-based point cloud denoising method.
It achieves consistent improvements in F1, PSNR, and Chamfer Distance across various noise conditions.
It also reduces the single-frame processing time, thereby validating its high accuracy, robustness, and real-time performance in multi-noise scenarios.
arXiv Detail & Related papers (2025-12-11T07:49:28Z) - PVNet: Point-Voxel Interaction LiDAR Scene Upsampling Via Diffusion Models [57.02789948234898]
We propose PVNet, a diffusion model-based point-voxel interaction framework to perform LiDAR point cloud upsampling without dense supervision.
Specifically, we employ a sparse point cloud as the guiding condition and the synthesized point clouds derived from its nearby frames as the input.
In addition, we propose a point-voxel interaction module to integrate features from both points and voxels, which efficiently improves the environmental perception capability of each upsampled point.
arXiv Detail & Related papers (2025-08-23T14:55:03Z) - Efficient Point Clouds Upsampling via Flow Matching [16.948354780275388]
Existing diffusion models struggle with inefficiencies as they map Gaussian noise to real point clouds.
We propose PUFM, a flow matching approach to directly map sparse point clouds to their high-fidelity dense counterparts.
Our method delivers superior upsampling quality with fewer sampling steps.
arXiv Detail & Related papers (2025-01-25T17:50:53Z) - Diffusion-Occ: 3D Point Cloud Completion via Occupancy Diffusion [5.189790379672664]
We introduce Diffusion-Occ, a novel framework for Diffusion Point Cloud Completion.
By thresholding the occupancy field, we convert it into a complete point cloud.
Experimental results demonstrate that Diffusion-Occ outperforms existing discriminative and generative methods.
arXiv Detail & Related papers (2024-08-27T07:57:58Z) - RangeLDM: Fast Realistic LiDAR Point Cloud Generation [12.868053836790194]
We introduce RangeLDM, a novel approach for rapidly generating high-quality range-view LiDAR point clouds.
We achieve this by correcting range-view data distribution for accurate projection from point clouds to range images via Hough voting.
We instruct the model to preserve 3D structural fidelity by devising a range-guided discriminator.
arXiv Detail & Related papers (2024-03-15T08:19:57Z) - A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z) - Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z) - Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.