Generative Decompression: Optimal Lossy Decoding Against Distribution Mismatch
- URL: http://arxiv.org/abs/2602.03505v1
- Date: Tue, 03 Feb 2026 13:25:23 GMT
- Title: Generative Decompression: Optimal Lossy Decoding Against Distribution Mismatch
- Authors: Saeed R. Khosravirad, Ahmed Alkhateeb, Ingrid van de Voorde,
- Abstract summary: This paper addresses optimal decoding strategies in lossy compression where the assumed distribution for compressor design mismatches the actual (true) distribution of the source. We formally define the mismatched quantization problem, demonstrating that the optimal reconstruction rule, termed generative decompression, aligns with classical Bayesian estimation. We extend this framework to transmission over noisy channels, deriving a robust soft-decoding rule that quantifies the inefficiency of standard modular source--channel separation architectures under mismatch.
- Score: 20.28600385545221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses optimal decoding strategies in lossy compression where the assumed distribution for compressor design mismatches the actual (true) distribution of the source. This problem has immediate relevance in standardized communication systems where the decoder acquires side information or priors about the true distribution that are unavailable to the fixed encoder. We formally define the mismatched quantization problem, demonstrating that the optimal reconstruction rule, termed generative decompression, aligns with classical Bayesian estimation by taking the conditional expectation under the true distribution given the quantization indices and adapting it to fixed-encoder constraints. This strategy effectively performs a generative Bayesian correction on the decoder side, strictly outperforming the conventional centroid rule. We extend this framework to transmission over noisy channels, deriving a robust soft-decoding rule that quantifies the inefficiency of standard modular source--channel separation architectures under mismatch. Furthermore, we generalize the approach to task-oriented decoding, showing that the optimal strategy shifts from conditional mean estimation to maximum a posteriori (MAP) detection. Experimental results on Gaussian sources and deep-learning-based semantic classification demonstrate that generative decompression closes a vast majority of the performance gap to the ideal joint-optimization benchmark, enabling adaptive, high-fidelity reconstruction without modifying the encoder.
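To make the core idea concrete, here is a minimal numerical sketch (not the authors' code) of generative decompression for a scalar Gaussian source. The fixed encoder is a uniform quantizer designed for an assumed N(0, 1) source, while the true source is N(0.5, 1.5^2); the parameter values and the uniform cell edges are illustrative assumptions. Replacing the assumed-law centroids with conditional means under the true law yields the MMSE estimate given the quantization index, so its mean squared error cannot exceed that of the conventional centroid rule.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Fixed encoder: uniform quantizer whose cells were designed for N(0, 1).
edges = np.linspace(-3.0, 3.0, 9)                  # inner cell edges (assumed)
bounds = np.concatenate(([-np.inf], edges, [np.inf]))

def cell_mean(mu, sigma, lo, hi):
    """E[X | lo < X <= hi] for X ~ N(mu, sigma^2) (truncated-Gaussian mean)."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return mu + sigma * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

# Centroid rule: conditional cell means under the ASSUMED N(0, 1) law.
dec_assumed = np.array([cell_mean(0.0, 1.0, lo, hi)
                        for lo, hi in zip(bounds[:-1], bounds[1:])])

# Generative decompression: conditional cell means under the TRUE law.
mu_t, sigma_t = 0.5, 1.5
dec_true = np.array([cell_mean(mu_t, sigma_t, lo, hi)
                     for lo, hi in zip(bounds[:-1], bounds[1:])])

# Monte Carlo comparison on samples drawn from the true source.
x = rng.normal(mu_t, sigma_t, 200_000)
idx = np.searchsorted(bounds, x, side="left") - 1  # quantization index
print("MSE, centroid rule:           ", np.mean((x - dec_assumed[idx]) ** 2))
print("MSE, generative decompression:", np.mean((x - dec_true[idx]) ** 2))
```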
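For the noisy-channel extension, a similarly hedged sketch of the soft-decoding idea: rather than hard-decoding the received index and then applying a centroid, the decoder averages the true-law cell means over the posterior of the transmitted index given the channel output. The symmetric index channel with flip probability `eps`, and all parameter values, are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

# True source law and the same fixed quantizer cells as above.
mu_t, sigma_t = 0.5, 1.5
edges = np.linspace(-3.0, 3.0, 9)
bounds = np.concatenate(([-np.inf], edges, [np.inf]))
num_cells = len(bounds) - 1

# True-law cell masses and cell means (truncated-Gaussian formulas).
cdf = norm.cdf((bounds - mu_t) / sigma_t)
pdf = norm.pdf((bounds - mu_t) / sigma_t)
p_cell = np.diff(cdf)                               # P(sent index = i)
m_true = mu_t + sigma_t * (-np.diff(pdf)) / p_cell  # E[X | index = i]

# Assumed channel: symmetric index channel with flip probability eps.
eps = 0.1
P = np.full((num_cells, num_cells), eps / (num_cells - 1))
np.fill_diagonal(P, 1.0 - eps)                      # P(received j | sent i)

# Soft decoder: for each received index j, average the true cell means
# over the posterior P(sent = i | received = j) ~ p_cell[i] * P[i, j].
post = p_cell[:, None] * P
post /= post.sum(axis=0, keepdims=True)
x_hat_soft = post.T @ m_true                        # reconstruction per j
print(np.round(x_hat_soft, 3))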
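Finally, a toy illustration of the task-oriented variant described in the abstract: when the downstream goal is classification rather than reconstruction, the optimal decoder outputs argmax_c P(class = c | quantization index) instead of a conditional-mean reconstruction. The two-class Gaussian mixture and its priors below are invented for the example.

```python
import numpy as np
from scipy.stats import norm

# Assumed true joint law: two classes with unequal priors and
# class-conditional laws N(-1, 1) and N(+1, 1).
priors = np.array([0.7, 0.3])
means = np.array([-1.0, 1.0])

# Same fixed quantizer cells as in the sketches above.
edges = np.linspace(-3.0, 3.0, 9)
bounds = np.concatenate(([-np.inf], edges, [np.inf]))

# P(index = i | class = c): mass of cell i under N(means[c], 1).
cell_probs = np.array([norm.cdf(bounds[1:], m, 1.0) - norm.cdf(bounds[:-1], m, 1.0)
                       for m in means])             # shape (2, num_cells)

# MAP detection per quantization index: argmax_c prior_c * P(index | c).
posterior = priors[:, None] * cell_probs
map_label = posterior.argmax(axis=0)
print("MAP class per quantization cell:", map_label)
```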
Related papers
- A Model-Driven Lossless Compression Algorithm Resistant to Mismatch [2.7930955543692817]
We propose a new compression algorithm based on next-token prediction that is robust to arbitrarily large, but structured, prediction mismatches. Our results demonstrate reliable operation within the certified mismatch regime while achieving compression ratios that exceed those of commonly used compression methods.
arXiv Detail & Related papers (2026-01-25T04:07:21Z)
- Joint Source-Channel-Generation Coding: From Distortion-oriented Reconstruction to Semantic-consistent Generation [58.67925548779465]
We propose Joint Source-Channel-Generation Coding (JSCGC), a novel paradigm that shifts the focus from perceptual reconstruction to probabilistic generation. JSCGC substantially improves semantic quality and semantic fidelity, significantly outperforming conventional distortion-oriented JSCC methods.
arXiv Detail & Related papers (2026-01-19T08:12:47Z)
- Generalization Bounds for Transformer Channel Decoders [61.55280736553095]
This paper studies the generalization performance of ECCT from a learning-theoretic perspective. To the best of our knowledge, this work provides the first theoretical generalization guarantees for this class of decoders.
arXiv Detail & Related papers (2026-01-11T15:56:37Z)
- Overcoming Joint Intractability with Lossless Hierarchical Speculative Decoding [58.92526489742584]
We propose a provably lossless verification method that significantly boosts the expected number of accepted tokens. We show that HSD yields consistent improvements in acceptance rates across diverse model families and benchmarks.
arXiv Detail & Related papers (2026-01-09T11:10:29Z)
- Universal Representations for Classification-enhanced Lossy Compression [3.3838477077773925]
In lossy compression, the classical tradeoff between compression rate and reconstruction distortion has traditionally guided algorithm design. More recently, the rate-distortion-classification function was investigated in [19], evaluating compression performance by considering classification accuracy alongside distortion. We explore universal representations, where a single encoder is developed to achieve multiple decoding objectives across various distortion and classification constraints.
arXiv Detail & Related papers (2025-04-12T00:55:56Z)
- Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary Codes [84.0257274213152]
Iterative bit-flipping decoders are an efficient choice for sparse $(v,w)$-regular codes. We propose concrete criteria for threshold determination, backed by a closed-form model.
arXiv Detail & Related papers (2025-01-23T17:38:22Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Deep Joint Source-Channel Coding with Iterative Source Error Correction [11.41076729592696]
We propose an iterative source error correction (ISEC) decoding scheme for deep-learning-based joint source-channel code (DeepJSCC).
Given a noisy codeword received through the channel, we use a DeepJSCC encoder and decoder pair to update the code iteratively.
The proposed scheme produces more reliable source reconstruction results compared to the baseline when the channel noise characteristics do not match the ones used during training.
arXiv Detail & Related papers (2023-02-17T22:50:58Z)
- End-to-end optimized image compression with competition of prior distributions [29.585370305561582]
We propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions.
Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior.
arXiv Detail & Related papers (2021-11-17T15:04:01Z)
- Neural Distributed Source Coding [59.630059301226474]
We present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions.
We evaluate our method on multiple datasets and show that it can handle complex correlations and achieves state-of-the-art PSNR.
arXiv Detail & Related papers (2021-06-05T04:50:43Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal and a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)