PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point
Cloud Compression
- URL: http://arxiv.org/abs/2402.07243v1
- Date: Sun, 11 Feb 2024 16:57:08 GMT
- Title: PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point
Cloud Compression
- Authors: Jiahao Pang, Kevin Bui, Dong Tian
- Abstract summary: We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
- Score: 8.778300313732027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The universality of the point cloud format enables many 3D applications,
making the compression of point clouds a critical phase in practice. Sampled as
discrete 3D points, a point cloud approximates 2D surface(s) embedded in 3D
with a finite bit-depth. However, the point distribution of a practical point
cloud changes drastically as its bit-depth increases, requiring different
methodologies for effective consumption/analysis. In this regard, a
heterogeneous point cloud compression (PCC) framework is proposed. We unify
typical point cloud representations -- point-based, voxel-based, and tree-based
representations -- and their associated backbones under a learning-based
framework to compress an input point cloud at different bit-depth levels.
Having recognized the importance of voxel-domain processing, we augment the
framework with a proposed context-aware upsampling for decoding and an enhanced
voxel transformer for feature aggregation. Extensive experimentation
demonstrates the state-of-the-art performance of our proposal on a wide range
of point clouds.
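The abstract describes the architecture only at a high level; the sketch below is a minimal, hypothetical PyTorch illustration of how a point-voxel-tree round trip could be organized, with a coarse voxel/tree stage followed by point-domain refinement. The module names (VoxelBackbone, PointRefiner) and the bit-depth split are stand-ins of our own for the paper's enhanced voxel transformer and context-aware upsampling, not the authors' implementation.

```python
# Hypothetical sketch of a heterogeneous PCC round trip (not PIVOT-Net's code).
import torch
import torch.nn as nn

class VoxelBackbone(nn.Module):
    """Stand-in for the paper's enhanced voxel transformer: produces one
    feature per occupied coarse voxel."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, coarse_xyz):                    # (N_coarse, 3)
        return self.mlp(coarse_xyz)                   # (N_coarse, dim)

class PointRefiner(nn.Module):
    """Stand-in for context-aware upsampling: predicts fine-level points as
    offsets from each coarse point, conditioned on its feature."""
    def __init__(self, dim=64, up=4):
        super().__init__()
        self.up = up
        self.head = nn.Linear(dim, 3 * up)

    def forward(self, coarse_xyz, feats):
        offsets = self.head(feats).view(-1, self.up, 3)   # (N_coarse, up, 3)
        fine = coarse_xyz.unsqueeze(1) + offsets
        return fine.reshape(-1, 3)                         # upsampled point set

def toy_round_trip(points, coarse_bits=6, fine_bits=10):
    """Quantize geometry to a coarse grid (tree/voxel domain), then refine it
    back in the point domain."""
    step = 2.0 ** (fine_bits - coarse_bits)
    coarse = torch.unique(torch.floor(points / step) * step, dim=0)  # coarse occupancy
    feats = VoxelBackbone()(coarse)
    return PointRefiner()(coarse, feats)

reconstructed = toy_round_trip(torch.rand(2048, 3) * 1024)
```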
Related papers
- Point Cloud Compression with Implicit Neural Representations: A Unified Framework [54.119415852585306]
We present a pioneering point cloud compression framework capable of handling both geometry and attribute components.
Our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud.
Our method exhibits high universality when contrasted with existing learning-based techniques.
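As a rough illustration of the coordinate-based idea, the sketch below overfits a small MLP to predict voxel occupancy from 3D coordinates, so that compression amounts to storing the (quantized) network weights; the network sizes and dummy occupancy labels are assumptions for illustration, not the paper's configuration.

```python
# Illustrative sketch (not the paper's code): a coordinate-based occupancy MLP.
import torch
import torch.nn as nn

class OccupancyMLP(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                     # logit: is this voxel occupied?
        )

    def forward(self, xyz):                           # coordinates normalized to [0, 1]^3
        return self.net(xyz).squeeze(-1)

model = OccupancyMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(4096, 3)                          # sampled voxel centres
labels = (coords[:, 0] > 0.5).float()                 # dummy occupancy labels for the sketch
for _ in range(100):                                  # overfit to one point cloud
    loss = nn.functional.binary_cross_entropy_with_logits(model(coords), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```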
arXiv Detail & Related papers (2024-05-19T09:19:40Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
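A minimal sketch of a patch generator built on a learnable prior is given below: each generator owns a trainable template that an MLP deforms into a patch, and the shape is assembled as the union of patches. The class name and sizes are illustrative assumptions, not the paper's architecture.

```python
# Illustrative patch generator with a learnable prior (divide-and-conquer sketch).
import torch
import torch.nn as nn

class PatchGenerator(nn.Module):
    def __init__(self, pts_per_patch=128, dim=64):
        super().__init__()
        self.prior = nn.Parameter(torch.randn(pts_per_patch, 3) * 0.01)  # learnable geometry primitive
        self.deform = nn.Sequential(nn.Linear(3 + dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, latent):                                     # (B, dim) per-patch code
        prior = self.prior.unsqueeze(0).expand(latent.shape[0], -1, -1)
        cond = latent.unsqueeze(1).expand(-1, prior.shape[1], -1)
        return prior + self.deform(torch.cat([prior, cond], dim=-1))  # deformed patch points

patches = [PatchGenerator()(torch.randn(2, 64)) for _ in range(4)]    # one generator per patch
shape = torch.cat(patches, dim=1)                                     # union of patches: (B, 512, 3)
```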
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud Compression [16.98171403698783]
We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
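The sketch below illustrates this idea in simplified form: local residuals around each coarse point are encoded by a small PointNet-style MLP with max pooling, yielding a latent feature that resides on the coarse point. The k-nearest-neighbour grouping and layer sizes are our own simplification, not GRASP-Net's exact design.

```python
# Simplified sketch of encoding local residuals into per-coarse-point latents.
import torch
import torch.nn as nn

class LocalResidualEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, coarse, fine, k=8):
        d = torch.cdist(coarse, fine)                      # (Nc, Nf) pairwise distances
        idx = d.topk(k, largest=False).indices             # k nearest fine points per coarse point
        residuals = fine[idx] - coarse.unsqueeze(1)        # (Nc, k, 3) local details
        return self.mlp(residuals).max(dim=1).values       # (Nc, dim) latent per coarse point

feats = LocalResidualEncoder()(torch.rand(64, 3), torch.rand(2048, 3))
```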
arXiv Detail & Related papers (2022-09-09T17:09:02Z)
- IPDAE: Improved Patch-Based Deep Autoencoder for Lossy Point Cloud Geometry Compression [11.410441760314564]
We propose a set of significant improvements to patch-based point cloud compression.
Experiments show that the improved patch-based autoencoder outperforms the state-of-the-art in terms of rate-distortion performance.
arXiv Detail & Related papers (2022-08-04T08:12:35Z)
- Deep Geometry Post-Processing for Decompressed Point Clouds [32.72083309729585]
Point cloud compression plays a crucial role in reducing the huge cost of data storage and transmission.
We propose a novel learning-based post-processing method to enhance the decompressed point clouds.
Experimental results show that the proposed method can significantly improve the quality of the decompressed point clouds.
arXiv Detail & Related papers (2022-04-29T08:57:03Z)
- Density-preserving Deep Point Cloud Compression [72.0703956923403]
We propose a novel deep point cloud compression method that preserves local density information.
Our method works in an auto-encoder fashion: the encoder downsamples the points and learns point-wise features, while the decoder upsamples the points using these features.
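A rough auto-encoder sketch of this downsample-then-upsample pattern follows; the per-point density logit used to select candidate points is an illustrative stand-in for density preservation, not the paper's actual mechanism.

```python
# Rough auto-encoder sketch: downsample + point-wise features, then upsample.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=64, ratio=4):
        super().__init__()
        self.ratio = ratio
        self.feat = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts):                                   # (N, 3)
        keep = torch.randperm(pts.shape[0])[: pts.shape[0] // self.ratio]
        down = pts[keep]                                      # downsampled points
        return down, self.feat(down)                          # point-wise features

class Decoder(nn.Module):
    def __init__(self, dim=64, up=4):
        super().__init__()
        self.up = up
        self.offset = nn.Linear(dim, 3 * up)                  # where to place new points
        self.density = nn.Linear(dim, up)                     # how many to keep locally

    def forward(self, down, feats):
        off = self.offset(feats).view(-1, self.up, 3)
        cand = down.unsqueeze(1) + off                        # candidate upsampled points
        keep = torch.sigmoid(self.density(feats)) > 0.5       # density-aware selection
        return cand[keep]                                     # (M, 3)

down, f = Encoder()(torch.rand(1024, 3))
recon = Decoder()(down, f)
```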
arXiv Detail & Related papers (2022-04-27T03:42:15Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN grouping.
The proposed framework, namely PointAttN, is simple, neat, and effective, and precisely captures the structural information of 3D shapes.
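A minimal sketch of per-point global self-attention (no kNN grouping) using PyTorch's built-in multi-head attention is shown below; the layer sizes are illustrative and do not reproduce PointAttN's actual blocks.

```python
# Sketch of global per-point self-attention over a point cloud (no kNN).
import torch
import torch.nn as nn

class GlobalPointAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pts):                      # (B, N, 3)
        x = self.embed(pts)                      # per-point tokens
        out, _ = self.attn(x, x, x)              # every point attends to every point
        return out                               # (B, N, dim) structure-aware features

feats = GlobalPointAttention()(torch.rand(2, 512, 3))
```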
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Variable Rate Compression for Raw 3D Point Clouds [5.107705550575662]
We propose a novel variable rate deep compression architecture that operates on raw 3D point cloud data.
Our network is capable of explicitly processing point clouds and generating a compressed description.
arXiv Detail & Related papers (2022-02-28T15:15:39Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
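Since the Chamfer Distance (CD) loss is mentioned above, a brute-force reference implementation of the symmetric CD is sketched here for clarity.

```python
# Symmetric Chamfer Distance between two point sets (O(N*M) brute force).
import torch

def chamfer_distance(p, q):
    """p: (N, 3), q: (M, 3); mean nearest-neighbour squared distance in both directions."""
    d = torch.cdist(p, q) ** 2                     # squared pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

loss = chamfer_distance(torch.rand(1024, 3), torch.rand(2048, 3))
```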
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
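The set-to-set translation idea can be sketched with a standard transformer encoder-decoder: tokens from the partial cloud are encoded, learned queries for the missing part are decoded, and a linear head maps them back to coordinates. Token counts and dimensions below are illustrative assumptions, not PoinTr's configuration (which uses point proxies and geometry-aware transformer blocks).

```python
# Sketch of completion as set-to-set translation with an encoder-decoder transformer.
import torch
import torch.nn as nn

class SetToSetCompletion(nn.Module):
    def __init__(self, dim=64, n_queries=128):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.queries = nn.Parameter(torch.randn(n_queries, dim))   # learned queries for the missing part
        self.transformer = nn.Transformer(d_model=dim, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.to_xyz = nn.Linear(dim, 3)

    def forward(self, partial):                                    # (B, N, 3) partial cloud
        src = self.embed(partial)                                  # encode observed points
        tgt = self.queries.unsqueeze(0).expand(partial.shape[0], -1, -1)
        out = self.transformer(src, tgt)                           # set-to-set translation
        return self.to_xyz(out)                                    # (B, n_queries, 3) predicted points

missing = SetToSetCompletion()(torch.rand(2, 256, 3))
```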
arXiv Detail & Related papers (2021-08-19T17:58:56Z)