TINC: Tree-structured Implicit Neural Compression
- URL: http://arxiv.org/abs/2211.06689v4
- Date: Tue, 21 Mar 2023 06:24:38 GMT
- Title: TINC: Tree-structured Implicit Neural Compression
- Authors: Runzhao Yang, Tingxiong Xiao, Yuxiao Cheng, Jinli Suo, Qionghai Dai
- Abstract summary: Implicit neural representation (INR) can describe the target scenes with high fidelity using a small number of parameters.
Preliminary studies can only exploit either global or local correlation in the target data.
We propose a Tree-structured Implicit Neural Compression (TINC) to conduct compact representation for local regions.
- Score: 30.26398911800582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representation (INR) can describe target scenes with high fidelity using a small number of parameters, and is emerging as a promising data compression technique. However, limited spectrum coverage is intrinsic to INR, and effectively removing redundancy from diverse, complex data is non-trivial. Preliminary studies can exploit only either global or local correlation in the target data and thus achieve limited performance. In this paper, we propose Tree-structured Implicit Neural Compression (TINC), which builds compact representations of local regions and extracts the shared features of these local representations in a hierarchical manner. Specifically, we use Multi-Layer Perceptrons (MLPs) to fit the partitioned local regions, and these MLPs are organized in a tree structure to share parameters according to spatial distance. The parameter-sharing scheme not only ensures continuity between adjacent regions but also jointly removes local and non-local redundancy. Extensive experiments show that TINC improves the compression fidelity of INR and delivers impressive compression capabilities compared with commercial tools and other deep-learning-based methods. Moreover, the approach is highly flexible and can be tailored to different data and parameter settings. The source code can be found at https://github.com/RichealYoung/TINC .
Related papers
- Attention Beats Linear for Fast Implicit Neural Representation Generation [13.203243059083533]
We propose Attention-based Localized INR (ANR) composed of a localized attention layer (LAL) and a global representation vector.
With instance-specific representation and instance-agnostic ANR parameters, the target signals are well reconstructed as a continuous function.
arXiv Detail & Related papers (2024-07-22T03:52:18Z)
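Reading the ANR blurb literally, coordinate-as-query cross-attention over an instance-specific token set is one plausible shape for such a model. The sketch below is that generic reading only; the token count, widths, and RGB output are arbitrary choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionINR(nn.Module):
    """Coordinates act as queries over an instance-specific token set; the
    attention and head weights stay shared (instance-agnostic)."""
    def __init__(self, coord_dim=2, d_model=64, n_heads=4, n_tokens=32):
        super().__init__()
        self.coord_embed = nn.Linear(coord_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                  nn.Linear(d_model, 3))   # e.g. RGB output
        # Stand-in for the per-instance representation; in practice one
        # token set would be optimised (or encoded) per signal.
        self.tokens = nn.Parameter(torch.randn(1, n_tokens, d_model))

    def forward(self, coords):                     # coords: (B, N, coord_dim)
        q = self.coord_embed(coords)
        kv = self.tokens.expand(coords.shape[0], -1, -1)
        attended, _ = self.attn(q, kv, kv)         # localized lookup into tokens
        return self.head(attended)                 # continuous reconstruction

model = AttentionINR()
rgb = model(torch.rand(4, 1024, 2))                # (4, 1024, 3)
```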
- MoEC: Mixture of Experts Implicit Neural Compression [25.455216041289432]
We propose MoEC, a novel implicit neural compression method based on the theory of mixture of experts.
Specifically, we use a gating network to automatically assign a specific INR to a 3D point in the scene.
Compared with block-wise and tree-structured partitions, our learnable partition can adaptively find the optimal partition in an end-to-end manner.
arXiv Detail & Related papers (2023-12-03T12:02:23Z)
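The gating idea in the MoEC entry can be sketched with a standard soft mixture of experts over coordinates. This is a textbook MoE skeleton, not the paper's exact network; the expert count and widths are invented for illustration.

```python
import torch
import torch.nn as nn

class MoEINR(nn.Module):
    """A gating network softly assigns each 3-D point to one of several small
    expert MLPs; taking the argmax instead yields a learned hard partition."""
    def __init__(self, n_experts=8, hidden=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_experts))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_experts))

    def forward(self, xyz):                                  # xyz: (N, 3)
        weights = torch.softmax(self.gate(xyz), -1)          # (N, E) assignment
        outs = torch.stack([e(xyz) for e in self.experts], -1)  # (N, 1, E)
        return (outs * weights.unsqueeze(1)).sum(-1)         # gated mixture, (N, 1)

model = MoEINR()
intensity = model(torch.rand(4096, 3))             # one value per 3-D point
```

Because the gate is trained end-to-end with the experts, the partition boundaries can move to wherever they minimise reconstruction error, which is the claimed advantage over fixed block-wise or tree-structured partitions.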
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
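A common way to realize learnable memory tokens is to let feature tokens cross-attend into a small learned memory. The sketch below shows that generic pattern only (all sizes are assumptions), since the paper's precise heterogeneous design isn't detailed in the blurb.

```python
import torch
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """Feature tokens read from a small set of learnable memory tokens via
    cross-attention, then update residually."""
    def __init__(self, d_model=64, n_mem=16, n_heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(1, n_mem, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                          # x: (B, N, d_model) tokens
        mem = self.memory.expand(x.shape[0], -1, -1)
        read, _ = self.attn(x, mem, mem)           # queries: features; keys/values: memory
        return self.norm(x + read)                 # residual memory read

layer = MemoryAugmentedLayer()
feats = layer(torch.randn(2, 196, 64))             # e.g. 14x14 patch tokens
```

Since attention here runs between N features and a fixed handful of memory slots, the added cost is O(N · n_mem) rather than the O(N²) of full self-attention, consistent with the "without huge computational overhead" claim.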
- ECNR: Efficient Compressive Neural Representation of Time-Varying Volumetric Datasets [6.3492793442257085]
Compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive datasets.
This paper presents an efficient compressive neural representation (ECNR) solution for time-varying data compression.
We show the effectiveness of ECNR with multiple datasets and compare it with state-of-the-art compression methods.
arXiv Detail & Related papers (2023-10-02T06:06:32Z) - Modality-Agnostic Variational Compression of Implicit Neural
Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z) - Sparse Tensor-based Multiscale Representation for Point Cloud Geometry
Compression [18.24902526033056]
We develop a unified Point Cloud Geometry (PCG) compression method through Sparse Tensor Processing (STP) based multiscale representation of voxelized PCG.
STP significantly reduces complexity because convolutions are performed only at Most-Probable Positively-Occupied Voxels (MP-POV).
The proposed method has lightweight complexity due to point-wise computation and a tiny storage footprint owing to model sharing across all scales.
arXiv Detail & Related papers (2021-11-20T17:02:45Z)
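The complexity claim above rests on evaluating convolutions only where voxels are occupied. A toy dictionary-based sparse 3-D convolution makes this explicit; a production system would use a sparse tensor library (e.g. MinkowskiEngine or torchsparse) instead, and the feature sizes here are arbitrary.

```python
import torch

def sparse_conv3d(occupied, weights, kernel=3):
    """3-D convolution evaluated only at occupied voxel centres.
    occupied: dict mapping (x, y, z) -> feature tensor of shape (C_in,)
    weights:  (K, K, K, C_in, C_out) kernel
    Cost scales with the number of occupied voxels, not the full grid."""
    r = kernel // 2
    out = {}
    for (x, y, z) in occupied:
        acc = None
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    nb = occupied.get((x + dx, y + dy, z + dz))
                    if nb is None:
                        continue                    # empty voxels contribute nothing
                    w = weights[dx + r, dy + r, dz + r]   # (C_in, C_out) tap
                    term = nb @ w
                    acc = term if acc is None else acc + term
        out[(x, y, z)] = acc
    return out

voxels = {(1, 2, 3): torch.randn(8), (1, 2, 4): torch.randn(8)}
kernel = torch.randn(3, 3, 3, 8, 16)
features = sparse_conv3d(voxels, kernel)   # features only at occupied sites
```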
- Global Aggregation then Local Distribution for Scene Parsing [99.1095068574454]
We show that our approach can be modularized as an end-to-end trainable block and easily plugged into existing semantic segmentation networks.
Our approach sets a new state of the art on major semantic segmentation benchmarks, including Cityscapes, ADE20K, Pascal Context, CamVid, and COCO-Stuff.
arXiv Detail & Related papers (2021-07-28T03:46:57Z)
- Data-Driven Low-Rank Neural Network Compression [8.025818540338518]
We propose a Data-Driven Low-rank (DDLR) method to reduce the number of parameters of pretrained Deep Neural Networks (DNNs).
We show that it is possible to significantly reduce the number of parameters with only a small reduction in classification accuracy.
arXiv Detail & Related papers (2021-07-13T00:10:21Z)
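The core of any low-rank compression scheme is replacing a dense weight matrix with a thin two-factor product. The sketch below uses a plain truncated SVD to pick the factors; the paper's method is data-driven (it uses data to decide how far to reduce), so treat this purely as the structural skeleton, with the layer sizes and rank invented for illustration.

```python
import torch
import torch.nn as nn

def low_rank_factorize(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a dense layer W (out x in) with two thin layers B(A(x)) via a
    truncated SVD, cutting weights from out*in to rank*(out + in)."""
    W = linear.weight.data                           # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = nn.Linear(W.shape[1], rank, bias=False)      # in -> rank
    B = nn.Linear(rank, W.shape[0], bias=linear.bias is not None)
    A.weight.data = S[:rank].sqrt().unsqueeze(1) * Vh[:rank]   # (rank, in)
    B.weight.data = U[:, :rank] * S[:rank].sqrt()              # (out, rank)
    if linear.bias is not None:
        B.bias.data = linear.bias.data.clone()
    return nn.Sequential(A, B)

dense = nn.Linear(1024, 1024)                    # ~1.05M weight parameters
compact = low_rank_factorize(dense, rank=64)     # ~0.13M weight parameters
x = torch.randn(8, 1024)
err = (dense(x) - compact(x)).abs().max()        # truncation error; small when W is redundant
```

At rank 64, the 1024x1024 layer's 1,048,576 weights shrink to 64 x (1024 + 1024) = 131,072, an 8x reduction for that layer.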
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
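One standard reading of "dense combinations of dilated convolutions" is a block of parallel dilated branches whose outputs are concatenated and fused. The sketch below is that generic block; the channel counts and dilation rates are assumptions, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Parallel dilated convolutions whose outputs are densely combined,
    enlarging the receptive field without any extra downsampling."""
    def __init__(self, ch=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)  # dense combination
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        feats = [self.act(b(x)) for b in self.branches]    # multi-rate context
        return x + self.fuse(torch.cat(feats, dim=1))      # residual fusion

block = DilatedDenseBlock()
out = block(torch.randn(1, 64, 128, 128))                  # same spatial size
```

With dilation rates (1, 2, 4, 8), a single block already sees a 19x19 neighbourhood while keeping the parameter count of ordinary 3x3 convolutions, which is the point of using dilation for inpainting context.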
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance local and global dense feature flow by fully exploiting hierarchical features from all the convolution layers.
Technically, we propose an efficient and effective CNN framework, the Fast Dense Residual Network (FDRN), for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)