ECM-OPCC: Efficient Context Model for Octree-based Point Cloud
Compression
- URL: http://arxiv.org/abs/2211.10916v4
- Date: Sat, 9 Dec 2023 11:01:05 GMT
- Title: ECM-OPCC: Efficient Context Model for Octree-based Point Cloud
Compression
- Authors: Yiqi Jin and Ziyu Zhu and Tongda Xu and Yuhuan Lin and Yan Wang
- Abstract summary: We propose a sufficient yet efficient context model and design an efficient deep-learning codec for point clouds.
Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context.
We also propose a dual transformer architecture to utilize the dependency of current node on its ancestors and siblings.
- Score: 6.509720419113212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning methods have shown promising results in point cloud
compression. For octree-based point cloud compression, previous works show that
the information of ancestor nodes and sibling nodes is equally important for
predicting the current node. However, those works either adopt insufficient context
or bring intolerable decoding complexity (e.g. >600s). To address this problem,
we propose a sufficient yet efficient context model and design an efficient
deep learning codec for point clouds. Specifically, we first propose a
window-constrained multi-group coding strategy to exploit the autoregressive
context while maintaining decoding efficiency. Then, we propose a dual
transformer architecture to utilize the dependency of current node on its
ancestors and siblings. We also propose a random-masking pre-train method to
enhance our model. Experimental results show that our approach achieves
state-of-the-art performance for both lossy and lossless point cloud
compression. Moreover, our multi-group coding strategy saves 98% of the decoding
time compared with the previous octree-based compression method.
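The octree coding the abstract builds on can be sketched in a few lines: each node records which of its eight child octants contain points, and the codec's job is to predict these 8-bit occupancy codes from context. A minimal illustration of one level (the function name and layout are ours, not from the ECM-OPCC codebase):

```python
import numpy as np

def occupancy_code(points, center):
    """8-bit occupancy code of an octree node: bit i is set when child
    octant i (indexed by the sign of each coordinate relative to the
    node center) contains at least one point."""
    octant = ((points >= center) * np.array([1, 2, 4])).sum(axis=1)
    code = 0
    for i in np.unique(octant):
        code |= 1 << int(i)
    return code

# Two points: one in the (+,+,+) octant (index 7), one in (-,+,-) (index 2).
pts = np.array([[0.1, 0.2, 0.3], [-0.4, 0.5, -0.6]])
print(occupancy_code(pts, center=np.zeros(3)))  # 132 = (1<<7) | (1<<2)
```

An entropy model such as the paper's dual transformer then assigns a probability to each of the 256 possible codes given ancestor and sibling context; the better the prediction, the fewer bits the arithmetic coder spends.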
Related papers
- Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper specializes in using a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our method with bits-back coding specializes in utilizing the learned latent variable model of the CVAE to compress the point cloud data.
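The entropy estimate driving such a codec reduces to the information content of the coded symbols under the learned model: an ideal entropy coder spends about -log2(p) bits per symbol. A hedged sketch (the probabilities are illustrative, not from the paper's CVAE):

```python
import numpy as np

def estimated_bits(symbol_probs):
    """Total information content, in bits, of a symbol sequence coded
    with model probabilities `symbol_probs`: -sum(log2 p). This is the
    length an ideal entropy coder (e.g. arithmetic coding) approaches."""
    return float(-np.sum(np.log2(symbol_probs)))

# Probabilities the model assigned to the three symbols actually coded.
print(estimated_bits([0.5, 0.25, 0.25]))  # 1 + 2 + 2 = 5.0 bits
```

A sharper probabilistic model concentrates mass on the symbols that actually occur, lowering this sum and hence the bitstream size.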
arXiv Detail & Related papers (2024-10-09T06:34:48Z)
- PVContext: Hybrid Context Model for Point Cloud Compression [61.24130634750288]
We propose PVContext, a hybrid context model for effective octree-based point cloud compression.
PVContext comprises two components with distinct modalities: the Voxel Context, which accurately represents local geometric information using voxels, and the Point Context, which efficiently preserves global shape information from point clouds.
arXiv Detail & Related papers (2024-09-19T12:47:35Z)
- End-to-end learned Lossy Dynamic Point Cloud Attribute Compression [5.717288278431968]
This study introduces an end-to-end learned dynamic lossy attribute coding approach.
We employ a context model that leverages the previous latent space in conjunction with an auto-regressive context model for encoding the latent tensor into a bitstream.
arXiv Detail & Related papers (2024-08-20T09:06:59Z)
- Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression [28.316347464011056]
PoLoPCAC is an efficient and generic PCAC method that achieves high compression efficiency and strong generalizability simultaneously.
Our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset.
Experiments show that our method can enjoy continuous bit-rate reduction over the latest G-PCCv23 on various datasets.
arXiv Detail & Related papers (2024-04-10T11:40:02Z)
- Geometric Prior Based Deep Human Point Cloud Geometry Compression [67.49785946369055]
We leverage the human geometric prior in geometry redundancy removal of point clouds.
We can envisage high-resolution human point clouds as a combination of geometric priors and structural deviations.
The proposed framework can operate in a plug-and-play fashion with existing learning-based point cloud compression methods.
arXiv Detail & Related papers (2023-05-02T10:35:20Z)
- Deep probabilistic model for lossless scalable point cloud attribute compression [2.2559617939136505]
We build an end-to-end point cloud attribute coding method (MNeT) that progressively projects the attributes onto multiscale latent spaces.
We validate our method on a set of point clouds from MVUB and MPEG and show that our method outperforms recently proposed methods and is on par with the latest G-PCC version 14.
arXiv Detail & Related papers (2023-03-11T23:39:30Z)
- Point Cloud Compression with Sibling Context and Surface Priors [47.96018990521301]
We present a novel octree-based multi-level framework for large-scale point cloud compression.
In this framework, we propose a new entropy model that explores the hierarchical dependency in an octree.
We locally fit surfaces with a voxel-based geometry-aware module to provide geometric priors in entropy encoding.
arXiv Detail & Related papers (2022-05-02T09:13:26Z)
- OctAttention: Octree-based Large-scale Contexts Model for Point Cloud Compression [36.77271904751208]
OctAttention employs the octree structure, a memory-efficient representation for point clouds.
Our approach saves 95% coding time compared to the voxel-based baseline.
Compared to the previous state-of-the-art works, our approach obtains a 10%-35% BD-Rate gain on the LiDAR benchmark.
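The BD-Rate figures quoted here come from the standard Bjøntegaard metric: fit each rate-distortion curve with a cubic polynomial (log-rate as a function of PSNR) and average the log-rate gap over the overlapping quality range. A compact sketch of the usual computation (simplified; formal evaluations use the reference JVET implementations, and the rate/PSNR values below are illustrative):

```python
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b):
    """Average % bitrate change of curve B relative to anchor A,
    via cubic fits of log(rate) as a function of PSNR."""
    fit_a = np.polyfit(psnr_a, np.log(rate_a), 3)
    fit_b = np.polyfit(psnr_b, np.log(rate_b), 3)
    lo = max(min(psnr_a), min(psnr_b))   # overlapping quality range
    hi = min(max(psnr_a), max(psnr_b))
    int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
    int_b = np.polyval(np.polyint(fit_b), hi) - np.polyval(np.polyint(fit_b), lo)
    return (np.exp((int_b - int_a) / (hi - lo)) - 1) * 100

rates = [1.0, 2.0, 4.0, 8.0]   # bits per point (illustrative)
psnrs = [30.0, 33.0, 36.0, 39.0]
# Half the bitrate at identical quality -> about -50% BD-Rate.
print(bd_rate(rates, psnrs, [r / 2 for r in rates], psnrs))
```

A negative BD-Rate means the tested codec needs fewer bits than the anchor at equal quality, which is how a "10%-35% BD-Rate gain" should be read.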
arXiv Detail & Related papers (2022-02-12T10:06:12Z)
- Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition [62.41259783906452]
We present a novel global compression framework for deep neural networks.
It automatically analyzes each layer to identify the optimal per-layer compression ratio.
Our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
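Layer-wise decomposition of this kind is commonly realized as a truncated SVD of each weight matrix: keeping rank r turns an m x n layer into two factors with r(m+n) parameters, and the per-layer choice of r is the compression ratio such a framework searches over. A minimal sketch (generic low-rank factorization, not the paper's specific algorithm):

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated SVD of W (m x n) into factors A (m x rank) and
    B (rank x n); A @ B is the best rank-`rank` approximation of W."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))  # rank <= 8
A, B = low_rank_factorize(W, rank=8)
print(np.allclose(A @ B, W))    # True: nothing is lost at rank 8
print(W.size, A.size + B.size)  # 2048 vs 768 parameters
```

In practice W has full rank and truncation is lossy, so the interesting question is exactly the one the paper poses: how much rank each layer can afford to lose.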
arXiv Detail & Related papers (2021-07-23T20:01:30Z)
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [70.0243910593064]
Key to success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
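The permutation observation is easy to verify concretely: reordering the hidden units between two layers, i.e. permuting the rows of the first weight matrix and the columns of the second, leaves the composed function unchanged, even through an elementwise ReLU. A small numeric check (illustrative shapes, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))    # layer 1: 8 -> 16
W2 = rng.standard_normal((4, 16))    # layer 2: 16 -> 4
perm = rng.permutation(16)           # reorder the 16 hidden units
x = rng.standard_normal(8)

y      = W2 @ np.maximum(W1 @ x, 0)                 # original network
y_perm = W2[:, perm] @ np.maximum(W1[perm] @ x, 0)  # permuted, same function
print(np.allclose(y, y_perm))  # True
```

Searching over such permutations lets the method group together weights that quantize well jointly, without changing what the network computes.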
arXiv Detail & Related papers (2020-10-29T15:47:26Z)
- OctSqueeze: Octree-Structured Entropy Model for LiDAR Compression [77.8842824702423]
We present a novel deep compression algorithm to reduce the memory footprint of LiDAR point clouds.
Our method exploits the sparsity and structural redundancy between points to reduce the memory footprint.
Our algorithm can be used to reduce the onboard and offboard storage of LiDAR points for applications such as self-driving cars.
arXiv Detail & Related papers (2020-05-14T17:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.