GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud
Compression
- URL: http://arxiv.org/abs/2209.04401v1
- Date: Fri, 9 Sep 2022 17:09:02 GMT
- Title: GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud
Compression
- Authors: Jiahao Pang, Muhammad Asad Lodhi, Dong Tian
- Abstract summary: We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
- Score: 16.98171403698783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud compression (PCC) is a key enabler for various 3-D applications,
owing to the universality of the point cloud format. Ideally, 3D point clouds
endeavor to depict object/scene surfaces that are continuous. Practically, as a
set of discrete samples, point clouds are locally disconnected and sparsely
distributed. This sparse nature hinders the discovery of local correlations
among points for compression. Motivated by an analysis with fractal dimension,
we propose a heterogeneous approach with deep learning for lossy point cloud
geometry compression. On top of a base layer compressing a coarse
representation of the input, an enhancement layer is designed to cope with the
challenging geometric residual/details. Specifically, a point-based network is
applied to convert the erratic local details to latent features residing on the
coarse point cloud. Then a sparse convolutional neural network operating on the
coarse point cloud is launched. It utilizes the continuity/smoothness of the
coarse geometry to compress the latent features as an enhancement bit-stream
that greatly benefits the reconstruction quality. When this bit-stream is
unavailable, e.g., due to packet loss, we support a skip mode with the same
architecture which generates geometric details from the coarse point cloud
directly. Experiments on both dense and sparse point clouds demonstrate the
state-of-the-art compression performance achieved by our proposal. Our code is
available at https://github.com/InterDigitalInc/GRASP-Net.
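Below is a minimal Python sketch of the coarse/residual split the abstract describes: the base layer is approximated by voxel quantization of the input cloud, and per-point residuals relative to the nearest coarse point stand in for the "erratic local details" that the point-based enhancement network converts into latent features. This is not the authors' implementation (see the repository linked above); the function names and the voxel_size parameter are illustrative assumptions.

```python
# Illustrative sketch only -- not the GRASP-Net code from the linked repository.
# It mimics the base/enhancement split: a coarse point cloud obtained by voxel
# quantization, plus per-point geometric residuals w.r.t. the nearest coarse point.
import numpy as np

def coarsen(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Quantize points to a voxel grid; stands in for the base-layer geometry."""
    return np.unique(np.floor(points / voxel_size), axis=0) * voxel_size

def local_residuals(points: np.ndarray, coarse: np.ndarray) -> np.ndarray:
    """Residual of each input point w.r.t. its nearest coarse point -- the local
    details the enhancement layer would encode as latent features."""
    # Brute-force nearest neighbour; adequate for a small illustrative cloud.
    d2 = ((points[:, None, :] - coarse[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return points - coarse[nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((2048, 3)).astype(np.float32)   # toy cloud in the unit cube
    coarse = coarsen(pts, voxel_size=0.1)
    res = local_residuals(pts, coarse)
    # Residual magnitudes stay on the order of the voxel size, which is what
    # makes them amenable to compression conditioned on the coarse geometry.
    print(coarse.shape, res.shape, float(np.abs(res).max()))
```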
Related papers
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- Geometric Prior Based Deep Human Point Cloud Geometry Compression [67.49785946369055]
We leverage the human geometric prior in geometry redundancy removal of point clouds.
We can envisage high-resolution human point clouds as a combination of geometric priors and structural deviations.
The proposed framework can operate in a plug-and-play fashion with existing learning-based point cloud compression methods.
arXiv Detail & Related papers (2023-05-02T10:35:20Z)
- GQE-Net: A Graph-based Quality Enhancement Network for Point Cloud Color Attribute [51.4803148196217]
We propose a graph-based quality enhancement network (GQE-Net) to reduce color distortion in point clouds.
GQE-Net uses geometry information as an auxiliary input and graph convolution blocks to extract local features efficiently.
Experimental results show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-24T02:33:45Z)
- Learning Neural Volumetric Field for Point Cloud Geometry Compression [13.691147541041804]
We propose to code the geometry of a given point cloud by learning a neural field.
We divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.
The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.
arXiv Detail & Related papers (2022-12-11T19:55:24Z)
- Density-preserving Deep Point Cloud Compression [72.0703956923403]
We propose a novel deep point cloud compression method that preserves local density information.
Our method works in an auto-encoder fashion: the encoder downsamples the points and learns point-wise features, while the decoder upsamples the points using these features.
arXiv Detail & Related papers (2022-04-27T03:42:15Z)
- CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised Point Cloud Learning [53.1436669083784]
We propose a generic Contour-Perturbed Reconstruction Network (CP-Net), which can effectively guide self-supervised reconstruction to learn semantic content in the point cloud.
For classification, we achieve results competitive with fully supervised methods on ModelNet40 (92.5% accuracy) and ScanObjectNN (87.9% accuracy).
arXiv Detail & Related papers (2022-01-20T15:04:12Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Patch-Based Deep Autoencoder for Point Cloud Geometry Compression [8.44208490359453]
We propose a patch-based compression process using deep learning.
We divide the point cloud into patches and compress each patch independently.
In the decoding process, we assemble the decompressed patches back into a complete point cloud (a sketch of this split/compress/assemble loop appears after this list).
arXiv Detail & Related papers (2021-10-18T08:59:57Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud in terms of shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- Multiscale Point Cloud Geometry Compression [29.605320327889142]
We propose a multiscale end-to-end learning framework that hierarchically reconstructs 3D point cloud geometry.
The framework is developed on top of a sparse convolution based autoencoder for point cloud compression and reconstruction.
arXiv Detail & Related papers (2020-11-07T16:11:16Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
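As a companion to the Patch-Based Deep Autoencoder entry above, here is a hedged sketch of its split/compress/assemble loop. The per-patch codec is reduced to a trivial coordinate quantizer standing in for that paper's learned autoencoder, and every name (split_into_patches, code_patch, n_patches) is an assumption made for illustration, not the authors' API.

```python
# Hedged sketch of the patch-based idea: split the cloud into patches, code each
# patch independently, then reassemble. The per-patch "codec" here is a uniform
# quantizer used only as a placeholder for a learned autoencoder.
import numpy as np

def split_into_patches(points, n_patches, rng):
    """Assign every point to the nearest of n_patches randomly chosen seed points."""
    seeds = points[rng.choice(len(points), n_patches, replace=False)]
    d2 = ((points[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1)
    return [points[labels == k] for k in range(n_patches)]

def code_patch(patch, step=0.02):
    """Placeholder per-patch codec: uniform quantization of the coordinates."""
    return np.round(patch / step).astype(np.int32), step

def decode_patch(codes, step):
    return codes.astype(np.float32) * step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.random((4096, 3)).astype(np.float32)
    patches = split_into_patches(cloud, n_patches=8, rng=rng)
    decoded = [decode_patch(*code_patch(p)) for p in patches]
    recon = np.concatenate(decoded, axis=0)   # reassemble the full cloud
    print(recon.shape)
```

Splitting first keeps each patch small enough for a fixed-capacity encoder, at the cost of ignoring correlation across patch boundaries.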