PointINet: Point Cloud Frame Interpolation Network
- URL: http://arxiv.org/abs/2012.10066v1
- Date: Fri, 18 Dec 2020 06:15:01 GMT
- Title: PointINet: Point Cloud Frame Interpolation Network
- Authors: Fan Lu and Guang Chen and Sanqing Qu and Zhijun Li and Yinlong Liu and Alois Knoll
- Abstract summary: Given two consecutive point cloud frames, Point Cloud Frame Interpolation aims to generate intermediate frame(s) between them.
Based on the proposed method, low frame rate point cloud streams can be upsampled to higher frame rates.
We propose a novel learning-based points fusion module, which simultaneously takes two warped point clouds into consideration.
- Score: 9.626246913697427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LiDAR point cloud streams are usually sparse in the time dimension, which is
limited by hardware performance. Generally, the frame rates of mechanical LiDAR
sensors are 10 to 20 Hz, which is much lower than other commonly used sensors
like cameras. To overcome the temporal limitations of LiDAR sensors, a novel
task named Point Cloud Frame Interpolation is studied in this paper. Given two
consecutive point cloud frames, Point Cloud Frame Interpolation aims to
generate intermediate frame(s) between them. To achieve that, we propose a
novel framework, namely Point Cloud Frame Interpolation Network (PointINet).
Based on the proposed method, low frame rate point cloud streams can be
upsampled to higher frame rates. We start by estimating bi-directional 3D scene
flow between the two point clouds and then warp them to the given time step
based on the 3D scene flow. To fuse the two warped frames and generate
intermediate point cloud(s), we propose a novel learning-based points fusion
module, which simultaneously takes two warped point clouds into consideration.
We design both quantitative and qualitative experiments to evaluate the
performance of the point cloud frame interpolation method and extensive
experiments on two large scale outdoor LiDAR datasets demonstrate the
effectiveness of the proposed PointINet. Our code is available at
https://github.com/ispc-lab/PointINet.git.
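The pipeline described in the abstract (estimate bi-directional scene flow, warp both frames to the target time step, then fuse) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the scene flows are assumed to be precomputed, and the learned points fusion module is replaced by a simple time-weighted random sampling of the two warped clouds, which is an assumption for illustration only.

```python
import numpy as np

def warp_by_flow(points, flow, t):
    """Linearly warp a point cloud along its per-point 3D scene flow.
    points: (N, 3) array; flow: (N, 3) scene flow toward the other frame;
    t in [0, 1] is the fraction of the flow to apply."""
    return points + t * flow

def interpolate_frame(p0, p1, flow_fwd, flow_bwd, t, rng=None):
    """Sketch of the warp-and-fuse pipeline for an intermediate frame at
    time t between frames p0 and p1. flow_fwd is the scene flow from p0
    to p1; flow_bwd is the flow from p1 to p0. The fusion step here
    (weighted random sampling) is a stand-in for the paper's learned
    points fusion module."""
    rng = np.random.default_rng() if rng is None else rng
    warped0 = warp_by_flow(p0, flow_fwd, t)        # frame 0 warped forward to t
    warped1 = warp_by_flow(p1, flow_bwd, 1.0 - t)  # frame 1 warped backward to t
    merged = np.concatenate([warped0, warped1], axis=0)
    # Weight points from the temporally closer frame more heavily.
    w = np.concatenate([np.full(len(p0), 1.0 - t), np.full(len(p1), t)])
    w /= w.sum()
    idx = rng.choice(len(merged), size=len(p0), replace=False, p=w)
    return merged[idx]
```

For example, with t = 0.5 and consistent forward/backward flows, both warped clouds land halfway between the two input frames before fusion.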
Related papers
- FASTC: A Fast Attentional Framework for Semantic Traversability Classification Using Point Cloud [7.711666704468952]
We address the problem of traversability assessment using point clouds.
We propose a pillar feature extraction module that utilizes PointNet to capture features from point clouds organized in vertical volume.
We then propose a new temporal attention module to fuse multi-frame information, which properly handles the varying density of LiDAR point clouds.
arXiv Detail & Related papers (2024-06-24T12:01:55Z) - Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields [63.21420081888606]
Recent Radiance Fields and extensions are proposed to synthesize realistic images from 2D input.
We present Point2Pix as a novel point renderer to link the 3D sparse point clouds with 2D dense image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z) - EPCL: Frozen CLIP Transformer is An Efficient Point Cloud Encoder [60.52613206271329]
This paper introduces Efficient Point Cloud Learning (EPCL) for training high-quality point cloud models with a frozen CLIP transformer.
Our EPCL connects the 2D and 3D modalities by semantically aligning the image features and point cloud features without paired 2D-3D data.
arXiv Detail & Related papers (2022-12-08T06:27:11Z) - PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds [100.03877236181546]
PolarMix is a point cloud augmentation technique that is simple and generic.
It can work as plug-and-play for various 3D deep architectures and also performs well for unsupervised domain adaptation.
arXiv Detail & Related papers (2022-07-30T13:52:19Z) - IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment [58.8330387551499]
We formulate the problem as the estimation of point-wise trajectories (i.e., smooth curves).
We propose IDEA-Net, an end-to-end deep learning framework, which disentangles the problem under the assistance of the explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe large improvement over state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z) - Learning Scene Dynamics from Point Cloud Sequences [8.163697683448811]
We propose a novel problem -- sequential scene flow estimation (SSFE) -- that aims to predict 3D scene flow for all pairs of point clouds in a sequence.
We introduce the SPCM-Net architecture, which solves this problem by computing multi-scale correlations between neighboring point clouds.
We demonstrate that this approach can be effectively modified for sequential point cloud forecasting.
arXiv Detail & Related papers (2021-11-16T19:52:46Z) - SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and generated dense point cloud for the shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z) - RAI-Net: Range-Adaptive LiDAR Point Cloud Frame Interpolation Network [5.225160072036824]
LiDAR point cloud frame interpolation, which synthesizes the intermediate frame between two captured frames, has emerged as an important issue for many applications.
We propose a novel LiDAR point cloud frame interpolation method, which exploits range images (RIs) as an intermediate representation and uses CNNs to conduct the interpolation process.
Our method consistently achieves superior interpolation results, with better perceptual quality than state-of-the-art video frame interpolation methods.
arXiv Detail & Related papers (2021-06-01T13:59:08Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.