Point Cloud Compression with Implicit Neural Representations: A Unified Framework
- URL: http://arxiv.org/abs/2405.11493v1
- Date: Sun, 19 May 2024 09:19:40 GMT
- Title: Point Cloud Compression with Implicit Neural Representations: A Unified Framework
- Authors: Hongning Ruan, Yulin Shao, Qianqian Yang, Liang Zhao, Dusit Niyato
- Abstract summary: We present a pioneering point cloud compression framework capable of handling both geometry and attribute components.
Our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud.
Our method offers greater universality than existing learning-based techniques.
- Score: 54.119415852585306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds have become increasingly vital across various applications thanks to their ability to realistically depict 3D objects and scenes. Nevertheless, effectively compressing unstructured, high-precision point cloud data remains a significant challenge. In this paper, we present a pioneering point cloud compression framework capable of handling both geometry and attribute components. Unlike traditional approaches and existing learning-based methods, our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud. The first network generates the occupancy status of a voxel, while the second network determines the attributes of an occupied voxel. To tackle the immense number of voxels within the volumetric space, we partition the space into smaller cubes and focus solely on voxels within non-empty cubes. By feeding the coordinates of these voxels into the respective networks, we reconstruct the geometry and attribute components of the original point cloud. The neural network parameters are further quantized and compressed. Experimental results underscore the superior performance of our proposed method compared to the octree-based approach employed in the latest G-PCC standards. Moreover, our method exhibits greater universality than existing learning-based techniques.
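As a rough illustration of the framework the abstract describes, the sketch below pairs an occupancy network with an attribute network over voxel coordinates. The layer sizes, the 0.5 threshold, and the absence of positional encoding are illustrative assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Coordinate-based network: maps a voxel coordinate (x, y, z) to an output."""
    def __init__(self, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, xyz):               # xyz: (N, 3) normalized voxel coordinates
        return self.net(xyz)

occupancy_net = CoordinateMLP(out_dim=1)  # logit: is this voxel occupied?
attribute_net = CoordinateMLP(out_dim=3)  # e.g. RGB for an occupied voxel

def reconstruct(candidate_voxels):
    """candidate_voxels: (N, 3) coordinates of voxels inside non-empty cubes only,
    mirroring the paper's cube partitioning that skips empty regions."""
    occupied = torch.sigmoid(occupancy_net(candidate_voxels)).squeeze(-1) > 0.5
    points = candidate_voxels[occupied]   # reconstructed geometry
    colors = attribute_net(points)        # reconstructed attributes
    return points, colors
```

Since the point cloud is carried entirely by the two networks, compression then amounts to quantizing and entropy-coding the network parameters, as the abstract notes.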
Related papers
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometric primitives.
Experimental results on a variety of object categories from ShapeNet, the most popular point cloud dataset, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
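A minimal sketch of the patch-wise, divide-and-conquer idea in the entry above: several small generators, each seeded with a learnable prior, emit one patch apiece, and the patches are concatenated into a full cloud. The shared-latent conditioning and all sizes here are assumptions for illustration:

```python
import torch
import torch.nn as nn

class PatchGenerator(nn.Module):
    """Generates one patch of points from a learnable prior plus a shape latent."""
    def __init__(self, latent_dim=128, prior_dim=32, patch_points=256):
        super().__init__()
        # Learnable prior intended to capture one geometric primitive.
        self.prior = nn.Parameter(torch.randn(prior_dim))
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + prior_dim, 256), nn.ReLU(),
            nn.Linear(256, patch_points * 3),
        )
        self.patch_points = patch_points

    def forward(self, z):                          # z: (B, latent_dim)
        prior = self.prior.expand(z.shape[0], -1)  # share the prior across the batch
        out = self.mlp(torch.cat([z, prior], dim=-1))
        return out.view(-1, self.patch_points, 3)

# A full shape is the union of the patches from all generators.
generators = nn.ModuleList([PatchGenerator() for _ in range(8)])
z = torch.randn(4, 128)                            # one latent per shape
cloud = torch.cat([g(z) for g in generators], dim=1)   # (4, 8*256, 3)
```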
- Voxel or Pillar: Exploring Efficient Point Cloud Representation for 3D Object Detection [49.324070632356296]
We develop a sparse voxel-pillar encoder that encodes point clouds into voxel and pillar features through 3D and 2D sparse convolutions, respectively.
Our efficient, fully sparse method can be seamlessly integrated into both dense and sparse detectors.
arXiv Detail & Related papers (2023-04-06T05:00:58Z)
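The voxel-pillar encoder above can be caricatured as a two-branch network. Dense convolutions stand in for the 3D/2D sparse convolutions the paper actually uses, and the height-pooling into pillars and the additive fusion are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VoxelPillarEncoder(nn.Module):
    """Two-branch encoder: 3D convs over voxels, 2D convs over BEV pillars.
    Dense convolutions stand in for the sparse convolutions in the paper."""
    def __init__(self, c_in=4, c_out=32):
        super().__init__()
        self.voxel_branch = nn.Sequential(      # 3D convs on the voxel grid
            nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )
        self.pillar_branch = nn.Sequential(     # 2D convs on the BEV pillar map
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )

    def forward(self, voxels):                  # voxels: (B, C, D, H, W)
        v = self.voxel_branch(voxels)           # per-voxel features
        pillars = voxels.max(dim=2).values      # collapse height -> (B, C, H, W)
        p = self.pillar_branch(pillars)         # per-pillar features
        # Fuse: broadcast pillar features back over the height dimension.
        return v + p.unsqueeze(2)

feats = VoxelPillarEncoder()(torch.rand(2, 4, 8, 32, 32))  # (2, 32, 8, 32, 32)
```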
- Learning Neural Volumetric Field for Point Cloud Geometry Compression [13.691147541041804]
We propose to code the geometry of a given point cloud by learning a neural field.
We divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.
The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.
arXiv Detail & Related papers (2022-12-11T19:55:24Z)
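A minimal sketch of the cube-based neural field above, with one network shared by all cubes and a learnable latent code per non-empty cube; the embedding-table latents and layer sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn

class SharedCubeField(nn.Module):
    """One network shared by all cubes; each non-empty cube owns a latent code."""
    def __init__(self, num_cubes, latent_dim=64, hidden=128):
        super().__init__()
        self.latents = nn.Embedding(num_cubes, latent_dim)   # one code per cube
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                            # occupancy logit
        )

    def forward(self, xyz, cube_idx):
        # xyz: (N, 3) local coordinates inside a cube; cube_idx: (N,) cube ids
        z = self.latents(cube_idx)
        return self.mlp(torch.cat([xyz, z], dim=-1)).squeeze(-1)

field = SharedCubeField(num_cubes=512)
logits = field(torch.rand(100, 3), torch.randint(0, 512, (100,)))  # (100,)
```

Because the MLP weights are shared across cubes (and, per the entry, across frames), only the small per-cube latent codes grow with content size, which is what lets the scheme exploit spatial and temporal redundancy.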
- Point Cloud Semantic Segmentation using Multi Scale Sparse Convolution Neural Network [0.0]
We propose a feature extraction module based on multi-scale ultra-sparse convolution and a feature selection module based on channel attention.
By introducing multi-scale sparse convolution, the network can capture richer feature information through convolution kernels of different sizes.
arXiv Detail & Related papers (2022-05-03T15:01:20Z)
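The multi-scale convolution and channel-attention ideas in the entry above can be sketched as parallel branches with different kernel sizes whose outputs are reweighted by squeeze-and-excitation-style attention. Dense 3D convolutions stand in for the paper's sparse convolutions, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Parallel convolutions with different kernel sizes, fused by channel attention.
    Dense 3D convolutions stand in for the sparse convolutions in the paper."""
    def __init__(self, c_in, c_out, kernel_sizes=(1, 3, 5), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(c_in, c_out, k, padding=k // 2) for k in kernel_sizes
        )
        c_cat = c_out * len(kernel_sizes)
        self.attn = nn.Sequential(               # channel attention (SE-style)
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(c_cat, c_cat // reduction), nn.ReLU(),
            nn.Linear(c_cat // reduction, c_cat), nn.Sigmoid(),
        )
        self.project = nn.Conv3d(c_cat, c_out, 1)

    def forward(self, x):                        # x: (B, c_in, D, H, W)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        w = self.attn(feats).view(feats.shape[0], -1, 1, 1, 1)
        return self.project(feats * w)           # reweight, then fuse channels

y = MultiScaleConvBlock(1, 32)(torch.rand(2, 1, 16, 16, 16))  # (2, 32, 16, 16, 16)
```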
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for k-nearest-neighbor (kNN) searches.
The proposed framework, namely PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
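A minimal sketch of the per-point processing described above: every point attends to every other point through standard multi-head attention, so no kNN neighborhoods are ever constructed. The embedding width and head count are assumptions:

```python
import torch
import torch.nn as nn

class PerPointAttention(nn.Module):
    """Global self-attention over points: each point attends to all other
    points, so no k-nearest-neighbor grouping is needed."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Linear(3, dim)            # lift xyz to features
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, xyz):                       # xyz: (B, N, 3)
        x = self.embed(xyz)
        out, _ = self.attn(x, x, x)               # all-pairs attention
        return self.norm(x + out)                 # residual + norm

feats = PerPointAttention()(torch.randn(2, 1024, 3))   # (2, 1024, 128)
```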
- Voxel-based Network for Shape Completion by Leveraging Edge Generation [76.23436070605348]
We develop a voxel-based network for point cloud completion by leveraging edge generation (VE-PCN).
We first embed point clouds into regular voxel grids, and then generate complete objects with the help of the hallucinated shape edges.
This decoupled architecture together with a multi-scale grid feature learning is able to generate more realistic on-surface details.
arXiv Detail & Related papers (2021-08-23T05:10:29Z)
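Only the first step of the VE-PCN pipeline above, embedding a point cloud into a regular voxel grid, is sketched below; the edge-generation and completion stages are omitted, and the resolution is an arbitrary assumption:

```python
import torch

def voxelize(points, resolution=32):
    """Scatter a point cloud in [0, 1]^3 into a regular occupancy grid,
    as a stand-in for the point-to-voxel embedding step."""
    idx = (points.clamp(0, 1 - 1e-6) * resolution).long()   # (N, 3) voxel indices
    flat = (idx[:, 0] * resolution + idx[:, 1]) * resolution + idx[:, 2]
    grid = torch.zeros(resolution ** 3)
    grid[flat] = 1.0                                        # mark occupied voxels
    return grid.view(resolution, resolution, resolution)

grid = voxelize(torch.rand(2048, 3))        # (32, 32, 32) occupancy grid
```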
- VoxelContext-Net: An Octree based Framework for Point Cloud Compression [20.335998518653543]
We propose a two-stage deep learning framework called VoxelContext-Net for both static and dynamic point cloud compression.
We first extract the local voxel representation that encodes the spatial neighbouring context information for each node in the constructed octree.
In the entropy coding stage, we propose a voxel context based deep entropy model to compress the symbols of non-leaf nodes.
arXiv Detail & Related papers (2021-05-05T16:12:48Z)
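The voxel-context entropy model above can be sketched as a small 3D CNN that maps the binary voxel neighborhood of an octree node to a distribution over its 8-bit child-occupancy symbol (256 classes); the context size and architecture here are assumptions, not the paper's exact model:

```python
import torch
import torch.nn as nn

class VoxelContextEntropyModel(nn.Module):
    """Predicts a distribution over an octree node's 8-bit child-occupancy
    symbol from the binary voxel context surrounding the node."""
    def __init__(self, channels=32, num_symbols=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, num_symbols),
        )

    def forward(self, ctx):                 # ctx: (B, 1, k, k, k) binary context
        return self.cnn(ctx).log_softmax(-1)   # log-probs fed to the entropy coder

model = VoxelContextEntropyModel()
logp = model(torch.rand(4, 1, 9, 9, 9).round())   # (4, 256)
```

The negative log-probability assigned to the true symbol is (up to a log base) the number of bits the arithmetic coder spends on that node, so a sharper context model directly lowers the bitrate.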
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches struggle with the sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, which addresses these challenges with two dedicated solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- Progressive Point Cloud Deconvolution Generation Network [37.50448637246364]
We propose an effective point cloud generation method, which can generate multi-resolution point clouds of the same shape from a latent vector.
By concatenating local and global feature maps at different resolutions, we employ a multi-layer perceptron as the generation network to produce the multi-resolution point clouds.
In order to keep the shapes of different resolutions of point clouds consistent, we propose a shape-preserving adversarial loss to train the point cloud deconvolution generation network.
arXiv Detail & Related papers (2020-07-10T13:07:00Z)
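As a rough illustration of the multi-resolution generation above: one latent vector feeds several heads, each producing a point cloud at its own resolution, with a toy consistency term standing in for the shape-preserving adversarial loss. The actual method uses deconvolution layers and a discriminator; everything below is a simplified assumption:

```python
import torch
import torch.nn as nn

class ProgressiveGenerator(nn.Module):
    """Generates point clouds at increasing resolutions from one latent vector."""
    def __init__(self, latent_dim=128, resolutions=(256, 1024, 4096)):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                          nn.Linear(512, n * 3))
            for n in resolutions
        )
        self.resolutions = resolutions

    def forward(self, z):                        # z: (B, latent_dim)
        return [h(z).view(z.shape[0], n, 3)
                for h, n in zip(self.heads, self.resolutions)]

def shape_consistency(clouds):
    """Toy stand-in for the shape-preserving idea: pull the centroids of the
    different resolutions together so coarse and fine outputs agree."""
    means = [c.mean(dim=1) for c in clouds]
    return sum((m - means[-1]).pow(2).sum(-1).mean() for m in means[:-1])

clouds = ProgressiveGenerator()(torch.randn(2, 128))   # three resolutions
loss = shape_consistency(clouds)
```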
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.