PVContext: Hybrid Context Model for Point Cloud Compression
- URL: http://arxiv.org/abs/2409.12724v1
- Date: Thu, 19 Sep 2024 12:47:35 GMT
- Title: PVContext: Hybrid Context Model for Point Cloud Compression
- Authors: Guoqing Zhang, Wenbo Zhao, Jian Liu, Yuanchao Bai, Junjun Jiang, Xianming Liu
- Abstract summary: We propose PVContext, a hybrid context model for effective octree-based point cloud compression.
PVContext comprises two components with distinct modalities: the Voxel Context, which accurately represents local geometric information using voxels, and the Point Context, which efficiently preserves global shape information from point clouds.
- Score: 61.24130634750288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient storage of large-scale point cloud data has become increasingly challenging due to advancements in scanning technology. Recent deep learning techniques have revolutionized this field; however, most existing approaches rely on single-modality contexts, such as octree nodes or voxel occupancy, limiting their ability to capture information across large regions. In this paper, we propose PVContext, a hybrid context model for effective octree-based point cloud compression. PVContext comprises two components with distinct modalities: the Voxel Context, which accurately represents local geometric information using voxels, and the Point Context, which efficiently preserves global shape information from point clouds. By integrating these two contexts, we retain detailed information across large areas while controlling the context size. The combined context is then fed into a deep entropy model to accurately predict occupancy. Experimental results demonstrate that, compared to G-PCC, our method reduces the bitrate by 37.95% on SemanticKITTI LiDAR point clouds and by 48.98% and 36.36% on dense object point clouds from MPEG 8i and MVUB, respectively.
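To make the hybrid-context idea concrete, the following is a minimal, hypothetical sketch of how a voxel branch and a point branch could be fused and fed to a deep entropy model that predicts an octree node's 8-bit child-occupancy symbol. It is not the authors' implementation; the module names (`VoxelBranch`, `PointBranch`, `HybridEntropyModel`), layer sizes, and the 256-way symbol formulation are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a hybrid context entropy model that
# fuses a local voxel-occupancy grid with a set of global context points to
# predict the 8-bit child-occupancy symbol of an octree node (256 classes).
# Module names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VoxelBranch(nn.Module):
    """Encodes a local K x K x K binary occupancy grid around the current node."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, voxels):          # (B, 1, K, K, K)
        return self.net(voxels)         # (B, dim)

class PointBranch(nn.Module):
    """Encodes a sparse set of global context points (PointNet-style pooling)."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, points):          # (B, N, 3), node-relative coordinates
        feats = self.mlp(points)        # (B, N, dim)
        return feats.max(dim=1).values  # (B, dim), max-pool over points

class HybridEntropyModel(nn.Module):
    """Concatenates both contexts and outputs logits over 256 occupancy symbols."""
    def __init__(self, dim=128):
        super().__init__()
        self.voxel = VoxelBranch(dim=dim)
        self.point = PointBranch(dim=dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 256))

    def forward(self, voxels, points):
        ctx = torch.cat([self.voxel(voxels), self.point(points)], dim=-1)
        return self.head(ctx)           # logits that parameterize the coder's PMF

# Usage: cross-entropy against the true child-occupancy byte is a bitrate proxy.
model = HybridEntropyModel()
logits = model(torch.zeros(2, 1, 9, 9, 9), torch.zeros(2, 512, 3))  # (2, 256)
```

In an actual codec, the softmax of these logits would drive an arithmetic coder, so lower cross-entropy translates directly into fewer bits per occupancy symbol.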
Related papers
- Point Cloud Compression with Implicit Neural Representations: A Unified Framework [54.119415852585306]
We present a pioneering point cloud compression framework capable of handling both geometry and attribute components.
Our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud.
Our method exhibits greater universality than existing learning-based techniques.
arXiv Detail & Related papers (2024-05-19T09:19:40Z)
- PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z)
- Lightweight super resolution network for point cloud geometry compression [34.42460388539782]
We present an approach for compressing point cloud geometry by leveraging a lightweight super-resolution network.
The proposed method involves decomposing a point cloud into a base point cloud and the patterns for reconstructing the original point cloud.
Experiments on MPEG Cat1 (Solid) and Cat2 datasets demonstrate the remarkable compression performance achieved by our method.
arXiv Detail & Related papers (2023-11-02T03:34:51Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3d point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- CPCM: Contextual Point Cloud Modeling for Weakly-supervised Point Cloud Semantic Segmentation [60.0893353960514]
We study the task of weakly-supervised point cloud semantic segmentation with sparse annotations.
We propose a Contextual Point Cloud Modeling (CPCM) method that consists of two parts: a region-wise masking (RegionMask) strategy and a contextual masked training (CMT) method.
arXiv Detail & Related papers (2023-07-19T04:41:18Z)
- Point Cloud Compression with Sibling Context and Surface Priors [47.96018990521301]
We present a novel octree-based multi-level framework for large-scale point cloud compression.
In this framework, we propose a new entropy model that explores the hierarchical dependency in an octree.
We locally fit surfaces with a voxel-based geometry-aware module to provide geometric priors in entropy encoding.
arXiv Detail & Related papers (2022-05-02T09:13:26Z)
- OctAttention: Octree-based Large-scale Contexts Model for Point Cloud Compression [36.77271904751208]
OctAttention employs the octree structure, a memory-efficient representation for point clouds.
Our approach saves 95% of the coding time compared to the voxel-based baseline.
Compared to the previous state-of-the-art works, our approach obtains a 10%-35% BD-Rate gain on the LiDAR benchmark.
arXiv Detail & Related papers (2022-02-12T10:06:12Z)
- VoxelContext-Net: An Octree based Framework for Point Cloud Compression [20.335998518653543]
We propose a two-stage deep learning framework called VoxelContext-Net for both static and dynamic point cloud compression.
We first extract a local voxel representation that encodes the spatial neighbouring context for each node in the constructed octree (see the sketch after this list).
In the entropy coding stage, we propose a voxel context based deep entropy model to compress the symbols of non-leaf nodes.
arXiv Detail & Related papers (2021-05-05T16:12:48Z)
- Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
By considering the local details of the partial input together with the global shape information, we can preserve the existing details of the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, helping the network learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
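As referenced from the VoxelContext-Net entry above, the sketch below illustrates how a local voxel context can be extracted around an octree node by voxelising its neighbourhood into a binary occupancy grid. The grid size `k=9`, the NumPy formulation, and the function name `local_voxel_context` are assumptions made for clarity, not code from any of the papers listed.

```python
# Illustrative sketch (not from the papers above): building a local voxel
# context for an octree node as a binary k x k x k occupancy grid.
import numpy as np

def local_voxel_context(points, node_origin, node_size, k=9):
    """Voxelise the neighbourhood of an octree node into a k^3 binary grid.

    points:      (N, 3) array of point coordinates
    node_origin: (3,) minimum corner of the current octree node
    node_size:   edge length of the node's cube
    """
    # The context window covers k node-sized cells centred on the node.
    half = (k // 2) * node_size
    lo = np.asarray(node_origin, dtype=float) - half
    hi = lo + k * node_size

    # Keep only points inside the window, then quantise to cell indices.
    mask = np.all((points >= lo) & (points < hi), axis=1)
    idx = ((points[mask] - lo) / node_size).astype(int)
    idx = np.clip(idx, 0, k - 1)

    grid = np.zeros((k, k, k), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # mark occupied cells
    return grid

# Example: a node of size 1.0 at the origin with a few nearby points.
pts = np.array([[0.2, 0.3, 0.1], [1.4, 0.2, -0.8], [3.9, 3.9, 3.9]])
ctx = local_voxel_context(pts, node_origin=(0.0, 0.0, 0.0), node_size=1.0, k=9)
print(ctx.sum())  # number of occupied cells in the 9x9x9 context
```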