Image De-Quantization Using Generative Models as Priors
- URL: http://arxiv.org/abs/2007.07923v2
- Date: Fri, 17 Jul 2020 21:40:45 GMT
- Title: Image De-Quantization Using Generative Models as Priors
- Authors: Kalliopi Basioti, George V. Moustakides
- Abstract summary: De-quantization is the task of reversing the quantization effect and recovering the original multi-chromatic level image.
We develop a de-quantization mechanism through a rigorous mathematical analysis based on classical statistical estimation theory.
- Score: 4.467248776406006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quantization is used in several applications that aim to reduce the
number of available colors in an image and, therefore, its size. De-quantization
is the task of reversing the quantization effect and recovering the original
multi-chromatic level image. Existing techniques achieve de-quantization by
imposing suitable constraints on the ideal image in order to make the recovery
problem feasible since it is otherwise ill-posed. Our goal in this work is to
develop a de-quantization mechanism through a rigorous mathematical analysis
which is based on classical statistical estimation theory. In this effort,
we incorporate generative modeling of the ideal image as suitable prior
information. The resulting technique is simple and capable of successfully
de-quantizing images that have experienced severe quantization effects.
Interestingly, our method can recover images even if the quantization process
is not exactly known and contains unknown parameters.
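The recovery described in the abstract can be read as a latent-space MAP estimation problem: search for the generator input whose output, once re-quantized, reproduces the observed low-bit image. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the pretrained generator G, the uniform quantizer with a straight-through gradient, the number of levels, and the prior weight lam are illustrative assumptions rather than details taken from the paper.

```python
import torch

def quantize(x, levels):
    """Uniform quantizer on [0, 1] with a straight-through gradient,
    so the data-fit term stays differentiable w.r.t. the latent code."""
    q = torch.round(x * (levels - 1)) / (levels - 1)
    return x + (q - x).detach()  # forward pass: q, backward pass: identity

def dequantize(G, y, latent_dim=128, levels=8, lam=1e-3, steps=500, lr=0.05):
    """MAP-style de-quantization sketch (assumed setup, not the paper's code).

    G      -- pretrained generator mapping a latent vector to an image in [0, 1]
    y      -- observed quantized image, same shape as G's output
    levels -- assumed number of quantization levels per channel
    lam    -- weight of the standard-normal prior on the latent code
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z).clamp(0.0, 1.0)                      # candidate full-depth image
        data_fit = ((quantize(x_hat, levels) - y) ** 2).mean()
        prior = lam * (z ** 2).mean()                     # Gaussian latent prior
        (data_fit + prior).backward()
        opt.step()
    return G(z).clamp(0.0, 1.0).detach()                  # de-quantized estimate
```

If the quantization process itself has unknown parameters, as the abstract allows, those parameters could in principle be added to the optimizer alongside z and estimated in the same loop.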
Related papers
- Neural Image Compression with Quantization Rectifier [7.097091519502871]
We develop a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization.
Our method designs a neural network architecture that predicts unquantized features from the quantized ones.
In evaluation, we integrate QR into state-of-the-art neural image codecs and compare enhanced models and baselines on the widely-used Kodak benchmark.
arXiv Detail & Related papers (2024-03-25T22:26:09Z) - Image Inpainting via Tractable Steering of Diffusion Models [54.13818673257381]
This paper proposes to exploit the ability of Tractable Probabilistic Models (TPMs) to exactly and efficiently compute the constrained posterior.
Specifically, this paper adopts a class of expressive TPMs termed Probabilistic Circuits (PCs).
We show that our approach can consistently improve the overall quality and semantic coherence of inpainted images with only 10% additional computational overhead.
arXiv Detail & Related papers (2023-11-28T21:14:02Z) - Hybrid quantum transfer learning for crack image classification on NISQ
hardware [62.997667081978825]
We present an application of quantum transfer learning for detecting cracks in gray value images.
We compare the performance and training time of PennyLane's standard qubits with IBM's qasm_simulator and real backends.
arXiv Detail & Related papers (2023-07-31T14:45:29Z) - Towards Accurate Post-training Quantization for Diffusion Models [73.19871905102545]
We propose an accurate data-free post-training quantization framework of diffusion models (ADP-DM) for efficient image generation.
Our method outperforms the state-of-the-art post-training quantization of diffusion model by a sizable margin with similar computational cost.
arXiv Detail & Related papers (2023-05-30T04:00:35Z) - Image-to-Image Regression with Distribution-Free Uncertainty
Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
arXiv Detail & Related papers (2022-02-10T18:59:56Z) - Orthonormal Product Quantization Network for Scalable Face Image
Retrieval [14.583846619121427]
This paper integrates product quantization with orthonormal constraints into an end-to-end deep learning framework to retrieve face images.
A novel scheme that uses predefined orthonormal vectors as codewords is proposed to enhance the quantization informativeness and reduce codewords' redundancy.
Experiments are conducted on four commonly-used face datasets under both seen and unseen identities retrieval settings.
arXiv Detail & Related papers (2021-07-01T09:30:39Z) - Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We obtain 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z) - Regularization by Denoising Sub-sampled Newton Method for Spectral CT
Multi-Material Decomposition [78.37855832568569]
We propose to solve a model-based maximum-a-posteriori problem to reconstruct multi-material images, with application to spectral CT.
In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function.
We show numerical and experimental results for spectral CT materials decomposition.
arXiv Detail & Related papers (2021-03-25T15:20:10Z) - Image Restoration from Parametric Transformations using Generative
Models [4.467248776406006]
We develop optimum techniques for various image restoration problems using generative models.
Our approach is capable of restoring images that are distorted by transformations even when the latter contain unknown parameters.
We extend our method to accommodate mixtures of multiple images where each image is described by its own generative model.
arXiv Detail & Related papers (2020-05-27T01:14:40Z) - Experimental realization of a quantum image classifier via
tensor-network-based machine learning [4.030017427802459]
We demonstrate highly successful classifications of real-life images using photonic qubits.
We focus on binary classification for hand-written zeroes and ones, whose features are cast into the tensor-network representation.
Our scheme can be scaled to efficient multi-qubit encodings of features in the tensor-product representation.
arXiv Detail & Related papers (2020-03-19T03:26:27Z)