Dynamic Point Cloud Denoising via Manifold-to-Manifold Distance
- URL: http://arxiv.org/abs/2003.08355v3
- Date: Wed, 28 Oct 2020 12:53:04 GMT
- Title: Dynamic Point Cloud Denoising via Manifold-to-Manifold Distance
- Authors: Wei Hu, Qianjiang Hu, Zehua Wang, Xiang Gao
- Abstract summary: We represent dynamic point clouds naturally on spatial-temporal graphs, and exploit the temporal consistency with respect to the underlying surface (manifold).
We formulate dynamic point cloud denoising as the joint optimization of the desired point cloud and underlying graph representation.
Experimental results show that the proposed method significantly outperforms frame-by-frame denoising with state-of-the-art static point cloud denoising approaches.
- Score: 24.174744253496513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D dynamic point clouds provide a natural discrete representation of
real-world objects or scenes in motion, with a wide range of applications in
immersive telepresence, autonomous driving, surveillance, etc. Nevertheless,
dynamic point clouds are often perturbed by noise due to hardware, software or
other causes. While a plethora of methods have been proposed for static point
cloud denoising, few efforts have been made for denoising dynamic point
clouds, which is quite challenging due to the irregular sampling patterns both
spatially and temporally. In this paper, we represent dynamic point clouds
naturally on spatial-temporal graphs, and exploit the temporal consistency with
respect to the underlying surface (manifold). In particular, we define a
manifold-to-manifold distance and its discrete counterpart on graphs to measure
the variation-based intrinsic distance between surface patches in the temporal
domain, given that graph operators are discrete counterparts of functionals
on Riemannian manifolds. Then, we construct the spatial-temporal graph
connectivity between corresponding surface patches based on the temporal
distance and between points in adjacent patches in the spatial domain.
Leveraging the initial graph representation, we formulate dynamic point cloud
denoising as the joint optimization of the desired point cloud and underlying
graph representation, regularized by both spatial smoothness and temporal
consistency. We reformulate the optimization and present an efficient
algorithm. Experimental results show that the proposed method significantly
outperforms denoising each frame independently with state-of-the-art static
point cloud denoising approaches, on both Gaussian noise and simulated LiDAR
noise.
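To make the abstract's key quantities concrete, here is a minimal numpy sketch of a variation-based patch measure built from a k-NN graph Laplacian, and a simple patch-to-patch temporal distance derived from it. The Gaussian edge weights, the neighborhood size k, and the absolute-difference form of the distance are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def graph_laplacian(points, k=8, sigma=0.1):
    """Combinatorial Laplacian L = D - W of a k-NN graph over one patch.

    Gaussian edge weights on Euclidean distance; a common choice, but not
    necessarily the construction used in the paper.
    """
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]          # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                         # symmetrize
    return np.diag(W.sum(axis=1)) - W

def patch_variation(points, L):
    """Graph Laplacian quadratic form, summed over the x/y/z coordinates."""
    return sum(points[:, c] @ L @ points[:, c] for c in range(3))

def temporal_patch_distance(patch_t, patch_s):
    """Illustrative variation-based distance between two surface patches:
    the gap between their intrinsic (smoothness) measures."""
    var_t = patch_variation(patch_t, graph_laplacian(patch_t))
    var_s = patch_variation(patch_s, graph_laplacian(patch_s))
    return abs(var_t - var_s)
```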
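Given such a distance, the spatial-temporal graph described in the abstract connects corresponding patches across frames and nearby points within patches. A rough sketch of that step follows; the greedy nearest-patch assignment and the nearest-neighbor point correspondence are simple stand-ins for whatever matching the paper actually uses.

```python
import numpy as np

def match_patches(patches_t, patches_prev, patch_distance):
    """For each patch in frame t, index of its closest patch in the previous
    frame under a supplied patch-to-patch distance (greedy assignment)."""
    return [int(np.argmin([patch_distance(p, q) for q in patches_prev]))
            for p in patches_t]

def temporal_edges(patch_t, patch_prev):
    """Connect each point to its nearest neighbor in the matched patch of
    the previous frame, giving a crude point-level correspondence."""
    d2 = np.sum((patch_t[:, None, :] - patch_prev[None, :, :]) ** 2, axis=-1)
    return np.argmin(d2, axis=1)
```

Spatial edges between points inside a patch can come from the same k-NN graph used in the previous sketch, while these temporal edges link corresponding patches across adjacent frames.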
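With the graph held fixed, the joint objective the abstract describes reduces, per frame, to a quadratic subproblem regularized by spatial smoothness and temporal consistency. The simplified dense version below is only illustrative: the weights alpha and beta and the closed-form solve are assumptions, and the paper instead reformulates the joint problem over both the points and the graph and derives its own efficient algorithm.

```python
import numpy as np

def denoise_frame(noisy, L_spatial, corr, alpha=1.0, beta=0.5):
    """Laplacian-regularized least-squares update for one frame.

    Minimizes ||Q - noisy||^2 + alpha * tr(Q^T L Q) + beta * ||Q - corr||^2,
    where the Laplacian term encourages spatial smoothness on the graph and
    'corr' holds the temporally corresponding points from an adjacent frame.
    Setting the gradient to zero gives ((1 + beta) I + alpha L) Q = noisy + beta corr.
    """
    n = noisy.shape[0]
    A = (1.0 + beta) * np.eye(n) + alpha * L_spatial
    return np.linalg.solve(A, noisy + beta * corr)   # shape (n, 3)
```

Since the graph itself is also a variable of the joint optimization in the paper, a point update like this would be interleaved with re-estimating the graph weights from the current point positions.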
Related papers
- Fast Learning of Signed Distance Functions from Noisy Point Clouds via Noise to Noise Mapping [54.38209327518066]
Learning signed distance functions from point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise-to-noise mapping, which does not require any clean point cloud or ground truth supervision.
Our novelty lies in the noise-to-noise mapping, which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy observations.
arXiv Detail & Related papers (2024-07-04T03:35:02Z) - Temporal Aggregation and Propagation Graph Neural Networks for Dynamic
Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution over the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z) - Dynamic Point Cloud Denoising via Gradient Fields [17.29921488701806]
3D dynamic point clouds provide a discrete representation of real-world objects or scenes in motion.
Point clouds acquired from sensors are usually perturbed by noise, which affects downstream tasks such as surface reconstruction and analysis.
We propose a novel gradient-field-based dynamic point cloud denoising method, exploiting the temporal correspondence via the estimation of gradient fields.
arXiv Detail & Related papers (2022-04-19T08:51:53Z) - IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding
Alignment [58.8330387551499]
We formulate the problem as estimation of point-wise trajectories (i.e., smooth curves).
We propose IDEA-Net, an end-to-end deep learning framework, which disentangles the problem with the assistance of explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe large improvement over state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z) - Exploring the Devil in Graph Spectral Domain for 3D Point Cloud Attacks [19.703181080679176]
3D dynamic point clouds provide a discrete representation of real-world objects or scenes in motion.
Point clouds acquired from sensors are usually perturbed by noise, which affects downstream tasks such as surface reconstruction and analysis.
We propose a novel gradient-based dynamic point cloud denoising method, exploiting the temporal correspondence for the estimation of gradient fields.
arXiv Detail & Related papers (2022-02-15T09:16:12Z) - Deep Point Set Resampling via Gradient Fields [11.5128379063303]
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications.
They are often perturbed by noise or suffer from low density, which obstructs downstream tasks such as surface reconstruction and understanding.
We propose a novel paradigm of point set resampling for restoration, which learns continuous gradient fields of point clouds.
arXiv Detail & Related papers (2021-11-03T07:20:35Z) - TPCN: Temporal Point Cloud Networks for Motion Forecasting [47.829152433166016]
We propose a novel framework with joint spatial and temporal learning for trajectory prediction.
In the spatial dimension, agents can be viewed as an unordered point set, and thus it is straightforward to apply point cloud learning techniques to model agents' locations.
Experiments on the Argoverse motion forecasting benchmark show that our approach achieves the state-of-the-art results.
arXiv Detail & Related papers (2021-03-04T14:44:32Z) - CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations [72.4716073597902]
We propose a method to learn object-centric Canonical Spatiotemporal Point Cloud Representations of dynamically moving or evolving objects.
We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation.
arXiv Detail & Related papers (2020-08-06T17:58:48Z) - Learning Graph-Convolutional Representations for Point Cloud Denoising [31.557988478764997]
We propose a deep neural network that can deal with the permutation-invariance problem encountered by learning-based point cloud processing methods.
The network is fully-convolutional and can build complex hierarchies of features by dynamically constructing neighborhood graphs.
It is especially robust both at high noise levels and in the presence of structured noise such as that encountered in real LiDAR scans.
arXiv Detail & Related papers (2020-07-06T08:11:28Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)