Online Meta Adaptation for Variable-Rate Learned Image Compression
- URL: http://arxiv.org/abs/2111.08256v1
- Date: Tue, 16 Nov 2021 06:46:23 GMT
- Title: Online Meta Adaptation for Variable-Rate Learned Image Compression
- Authors: Wei Jiang and Wei Wang and Songnan Li and Shan Liu
- Abstract summary: This work addresses two major issues of end-to-end learned image compression (LIC) based on deep neural networks.
We introduce an online meta-learning (OML) setting for LIC, which combines ideas from meta learning and online learning in the conditional variational auto-encoder framework.
- Score: 40.8361915315201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses two major issues of end-to-end learned image compression
(LIC) based on deep neural networks: variable-rate learning where separate
networks are required to generate compressed images with varying qualities, and
the train-test mismatch between differentiable approximate quantization and
true hard quantization. We introduce an online meta-learning (OML) setting for
LIC, which combines ideas from meta learning and online learning in the
conditional variational auto-encoder (CVAE) framework. By treating the
conditional variables as meta parameters and the generated conditional
features as meta priors, the desired reconstruction can be controlled by the
meta parameters to accommodate compression with variable qualities. The online
learning framework is used to update the meta parameters so that the
conditional reconstruction is adaptively tuned for the current image. Through
the OML mechanism, the meta parameters can be effectively updated through SGD.
The conditional reconstruction is directly based on the quantized latent
representation in the decoder network, and therefore helps to bridge the gap
between the training-time estimation and the true quantized latent distribution.
Experiments demonstrate that our OML approach can be flexibly applied to
different state-of-the-art LIC methods to achieve additional performance
improvements with little computational and transmission overhead.
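To make the mechanism concrete, the following is a minimal sketch of per-image online adaptation in the spirit of the abstract: a small set of meta parameters conditions the decoder's reconstruction and is refined by SGD for the current image, with the decoder operating directly on the hard-quantized latent. The names `encoder`, `decoder`, and `meta_dim` are placeholders for illustration, not the authors' actual architecture or entropy model.

```python
import torch

# Minimal sketch (assumed names, not the authors' implementation):
# a pretrained encoder/decoder pair stays frozen; only a small vector of
# meta parameters that conditions the reconstruction is updated online
# for the current image, then transmitted as lightweight side information.

def online_meta_adapt(encoder, decoder, image, quality, meta_dim=8,
                      steps=10, lr=1e-2):
    # Conditional variables treated as per-image meta parameters,
    # initialized from the requested quality level (variable-rate control).
    meta = torch.full((1, meta_dim), float(quality), requires_grad=True)
    optimizer = torch.optim.SGD([meta], lr=lr)

    with torch.no_grad():
        y = encoder(image)        # latent representation from the frozen encoder
        y_hat = torch.round(y)    # true hard quantization, as the decoder sees it

    for _ in range(steps):
        optimizer.zero_grad()
        # Conditional reconstruction driven by the quantized latent and
        # modulated by the meta parameters (acting as meta priors).
        x_hat = decoder(y_hat, meta)
        loss = torch.mean((x_hat - image) ** 2)  # latent (and rate) is fixed here,
        loss.backward()                          # so only distortion is reduced
        optimizer.step()                         # gradients flow only into `meta`

    return y_hat, meta.detach()   # bitstream latent + adapted meta parameters
```

Because the decoder is conditioned on the hard-quantized latent during this adaptation, the update sees the same latent distribution as actual decoding, which is how the approach narrows the train-test quantization gap; only the small `meta` vector needs to be signalled alongside the bitstream.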
Related papers
- Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked
Auto-Encoder [61.7834263332332]
We develop Masked Auto-Encoder (MAE) as a unified, modality-agnostic SSL framework.
We argue that meta-learning is key to interpreting MAE as a modality-agnostic learner.
Our experiments demonstrate the superiority of MetaMAE on the modality-agnostic SSL benchmark.
arXiv Detail & Related papers (2023-10-25T03:03:34Z) - Generalizing Supervised Deep Learning MRI Reconstruction to Multiple and
Unseen Contrasts using Meta-Learning Hypernetworks [1.376408511310322]
This work aims to develop a multimodal meta-learning model for image reconstruction.
Our proposed model has hypernetworks that evolve to generate mode-specific weights.
Experiments on MRI reconstruction show that our model exhibits superior reconstruction performance over joint training.
arXiv Detail & Related papers (2023-07-13T14:22:59Z) - MetaModulation: Learning Variational Feature Hierarchies for Few-Shot
Learning with Fewer Tasks [63.016244188951696]
We propose MetaModulation, a method for few-shot learning with fewer tasks.
We modulate parameters at various batch levels to increase the density of the meta-training tasks.
We also introduce learning variational feature hierarchies by incorporating variational MetaModulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z) - Learning Visual Representation from Modality-Shared Contrastive
Language-Image Pre-training [88.80694147730883]
We investigate a variety of Modality-Shared Contrastive Language-Image Pre-training (MS-CLIP) frameworks.
Under the studied conditions, we observe that a mostly unified encoder for vision and language signals outperforms all other variants that separate more parameters.
Our approach outperforms vanilla CLIP by 1.6 points in linear probing on a collection of 24 downstream vision tasks.
arXiv Detail & Related papers (2022-07-26T05:19:16Z) - Bitwidth-Adaptive Quantization-Aware Neural Network Training: A
Meta-Learning Approach [6.122150357599037]
We propose a meta-learning approach to achieve deep neural network quantization with adaptive bitwidths.
MeBQAT allows the (meta-)trained model to be quantized to any candidate bitwidth and then to perform inference without a significant accuracy drop from quantization.
We experimentally demonstrate its validity under multiple QAT schemes.
arXiv Detail & Related papers (2022-07-20T20:39:39Z) - Continual Variational Autoencoder Learning via Online Cooperative
Memorization [11.540150938141034]
Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks.
However, their ability to generate images matching the classes and databases learned during continual learning (CL) is not well understood.
We develop a new theoretical framework that formulates CL as a dynamic optimal transport problem.
We then propose a novel memory buffering approach, namely the Online Cooperative Memorization (OCM) framework.
arXiv Detail & Related papers (2022-07-20T18:19:27Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z) - Learning to Learn Kernels with Variational Random Features [118.09565227041844]
We introduce kernels with random Fourier features into the meta-learning framework to leverage their strong few-shot learning ability.
We formulate the optimization of MetaVRF as a variational inference problem.
We show that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives (a plain, non-meta-learned random Fourier feature sketch follows this list).
arXiv Detail & Related papers (2020-06-11T18:05:29Z)
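For reference, below is a minimal, generic random Fourier feature construction (Rahimi and Recht style) that approximates an RBF kernel with an explicit feature map; MetaVRF instead learns such features variationally per task, so this plain version is only an illustrative baseline, and all names in it are assumptions.

```python
import numpy as np

# Generic random Fourier features approximating an RBF kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
# This is the plain, non-meta-learned construction; MetaVRF adapts the
# features variationally per few-shot task, which is not shown here.

def rff_features(x, n_features=2048, sigma=1.0, seed=0):
    """Map inputs x of shape (n, d) to random Fourier features of shape (n, n_features)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    w = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))   # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

# The inner product of the features approximates the kernel matrix.
x = np.random.default_rng(1).normal(size=(5, 3))
z = rff_features(x, n_features=8192, sigma=1.0)
approx = z @ z.T
exact = np.exp(-((x[:, None, :] - x[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(approx - exact).max())  # small approximation error
```

With an explicit feature map like this, kernel evaluations reduce to linear operations on the features, which is part of what makes such kernels attractive inside a meta-learning inner loop.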