Point upsampling networks for single-photon sensing
- URL: http://arxiv.org/abs/2508.12986v1
- Date: Mon, 18 Aug 2025 15:05:27 GMT
- Title: Point upsampling networks for single-photon sensing
- Authors: Jinyi Liu, Guoyang Zhao, Lijun Liu, Yiguang Hong, Weiping Zhang, Shuming Cheng
- Abstract summary: We propose using point upsampling networks to increase point density and reduce spatial distortion in single-photon point clouds. Our network is built on a state space model that integrates a multi-path scanning mechanism to enrich spatial context. Experiments on commonly-used datasets confirm its high reconstruction accuracy and strong robustness to distortion noise.
- Score: 4.966091556725777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-photon sensing has generated great interest as a prominent technique for long-distance and ultra-sensitive imaging; however, it tends to yield sparse and spatially biased point clouds, which limits its practical utility. In this work, we propose using point upsampling networks to increase point density and reduce spatial distortion in single-photon point clouds. In particular, our network is built on a state space model that integrates a multi-path scanning mechanism to enrich spatial context, a bidirectional Mamba backbone to capture global geometry and local details, and an adaptive upsample shift module to correct offset-induced distortions. Extensive experiments on commonly-used datasets confirm its high reconstruction accuracy and strong robustness to distortion noise, and experiments on real-world data demonstrate that our model generates visually consistent, detail-preserving, and noise-suppressed point clouds. Our work is the first to establish an upsampling framework for single-photon sensing, and hence opens a new avenue for single-photon sensing and its practical application in downstream tasks.
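As a rough illustration of the pipeline the abstract describes (densify a sparse cloud, then shift the newly inserted points to counteract spatial distortion), here is a minimal pure-Python sketch. The midpoint interpolation and centroid-based shift below are illustrative stand-ins of our own, not the paper's learned Mamba-based modules:

```python
def nearest_neighbor(points, i):
    """Index of the nearest neighbor of points[i] (brute force)."""
    best, best_d = None, float("inf")
    for j, q in enumerate(points):
        if j == i:
            continue
        d = sum((a - b) ** 2 for a, b in zip(points[i], q))
        if d < best_d:
            best, best_d = j, d
    return best

def upsample_with_shift(points, shift_weight=0.25):
    """Double point density: insert a midpoint between each point and its
    nearest neighbor, then nudge every new point toward the global centroid
    as a crude stand-in for a learned adaptive shift module."""
    centroid = [sum(c) / len(points) for c in zip(*points)]
    out = list(points)
    for i, p in enumerate(points):
        q = points[nearest_neighbor(points, i)]
        mid = [(a + b) / 2 for a, b in zip(p, q)]
        shifted = [m + shift_weight * (c - m) for m, c in zip(mid, centroid)]
        out.append(tuple(shifted))
    return out

sparse = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
dense = upsample_with_shift(sparse)
print(len(dense))  # 6
```

In the actual paper this geometric heuristic is replaced by a network that predicts both the new points and their corrective offsets from learned spatial context.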
Related papers
- DiffPCN: Latent Diffusion Model Based on Multi-view Depth Images for Point Cloud Completion [63.89701893364156]
We propose DiffPCN, a novel diffusion-based coarse-to-fine framework for point cloud completion. Our approach comprises two stages: an initial stage for generating coarse point clouds, and a refinement stage that improves their quality. Experimental results demonstrate that our DiffPCN achieves state-of-the-art performance in geometric accuracy and shape completeness.
arXiv Detail & Related papers (2025-09-28T08:05:43Z) - Boosting Zero-shot Stereo Matching using Large-scale Mixed Images Sources in the Real World [8.56549004133167]
Stereo matching methods rely on dense pixel-wise ground truth labels. The scarcity of labeled data and domain gaps between synthetic and real-world images pose notable challenges. We propose a novel framework, BooSTer, that leverages both vision foundation models and large-scale mixed image sources.
arXiv Detail & Related papers (2025-05-13T14:24:38Z) - LPRnet: A self-supervised registration network for LiDAR and photogrammetric point clouds [38.42527849407057]
LiDAR and photogrammetry are active and passive remote sensing techniques for point cloud acquisition, respectively. Due to the fundamental differences in sensing mechanisms, spatial distributions and coordinate systems, their point clouds exhibit significant discrepancies in density, precision, noise, and overlap. This paper proposes a self-supervised registration network based on a masked autoencoder, focusing on heterogeneous LiDAR and photogrammetric point clouds.
arXiv Detail & Related papers (2025-01-10T02:36:37Z) - PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation [24.751481680565803]
We propose a Point implicit Function, PDF, for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
The region sampling based on Mip-NeRF 360 is employed to model the background representation.
arXiv Detail & Related papers (2023-11-03T08:19:47Z) - A ground-based dataset and a diffusion model for on-orbit low-light image enhancement [7.815138548685792]
We propose a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE).
To evenly sample poses of different orientation and distance without collision, a collision-free working space and pose-stratified sampling are proposed.
To enhance the image contrast without over-exposure and blurred details, we design a fused attention to highlight the structure and dark regions.
arXiv Detail & Related papers (2023-06-25T12:15:44Z) - DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement [1.2645663389012574]
Low-light image enhancement is a classical computer vision problem aiming to recover normal-exposure images from low-light images.
Convolutional neural networks commonly used in this field are good at sampling low-frequency local structural features in the spatial domain.
We propose a novel module using the Fourier coefficients, which can recover high-quality texture details under the constraint of semantics in the frequency phase.
arXiv Detail & Related papers (2022-09-16T13:56:09Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Point Cloud Upsampling via Disentangled Refinement [86.3641957163818]
Point clouds produced by 3D scanning are often sparse, non-uniform, and noisy.
Recent upsampling approaches aim to generate a dense point set, while achieving both distribution uniformity and proximity-to-surface.
We formulate two cascaded sub-networks, a dense generator and a spatial refiner.
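The cascaded design can be pictured with a toy stand-in for each stage. The jittered replication and neighbor-mean smoothing below are illustrative assumptions of our own, not the paper's learned sub-networks:

```python
import random

def dense_generate(points, ratio=2, jitter=0.05, seed=0):
    """Stage 1 (stand-in for the dense generator): replicate each point
    `ratio` times with small random perturbations, producing a coarse
    dense set."""
    rng = random.Random(seed)
    out = []
    for p in points:
        for _ in range(ratio):
            out.append(tuple(c + rng.uniform(-jitter, jitter) for c in p))
    return out

def spatial_refine(points, k=3, step=0.5):
    """Stage 2 (stand-in for the spatial refiner): move each point part of
    the way toward the mean of its k nearest neighbors, smoothing the
    coarse set toward the underlying surface."""
    refined = []
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), q)
            for j, q in enumerate(points)
            if j != i
        )
        neighbors = [q for _, q in dists[:k]]
        mean = [sum(c) / k for c in zip(*neighbors)]
        refined.append(tuple(a + step * (m - a) for a, m in zip(p, mean)))
    return refined

coarse = dense_generate([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(len(coarse))  # 6
fine = spatial_refine(coarse)
```

The point of the cascade is that generation and refinement are disentangled: the first stage only has to hit the right density, while the second only has to improve distribution uniformity and proximity to the surface.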
arXiv Detail & Related papers (2021-06-09T02:58:42Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.