Joint Source-Channel-Generation Coding: From Distortion-oriented Reconstruction to Semantic-consistent Generation
- URL: http://arxiv.org/abs/2601.12808v1
- Date: Mon, 19 Jan 2026 08:12:47 GMT
- Title: Joint Source-Channel-Generation Coding: From Distortion-oriented Reconstruction to Semantic-consistent Generation
- Authors: Tong Wu, Zhiyong Chen, Guo Lu, Li Song, Feng Yang, Meixia Tao, Wenjun Zhang,
- Abstract summary: We propose Joint Source-Channel-Generation Coding (JSCGC), a novel paradigm that shifts the focus from deterministic reconstruction to probabilistic generation. JSCGC substantially improves perceptual quality and semantic fidelity, significantly outperforming conventional distortion-oriented JSCC methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional communication systems, including both separation-based coding and AI-driven joint source-channel coding (JSCC), are largely guided by Shannon's rate-distortion theory. However, relying on generic distortion metrics fails to capture complex human visual perception, often resulting in blurred or unrealistic reconstructions. In this paper, we propose Joint Source-Channel-Generation Coding (JSCGC), a novel paradigm that shifts the focus from deterministic reconstruction to probabilistic generation. JSCGC leverages a generative model at the receiver as a generator rather than a conventional decoder to parameterize the data distribution, enabling direct maximization of mutual information under channel constraints while controlling stochastic sampling to produce outputs residing on the authentic data manifold with high fidelity. We further derive a theoretical lower bound on the maximum semantic inconsistency with given transmitted mutual information, elucidating the fundamental limits of communication in controlling the generative process. Extensive experiments on image transmission demonstrate that JSCGC substantially improves perceptual quality and semantic fidelity, significantly outperforming conventional distortion-oriented JSCC methods.
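The abstract describes maximizing mutual information under channel constraints, with a generative model at the receiver acting as a generator rather than a decoder. The following is a minimal numpy sketch of the channel side of such a pipeline, not the paper's implementation: a unit-power latent is sent over a real-valued AWGN channel, and the Shannon capacity formula gives the ceiling on the mutual information available to steer the receiver-side generator. All function names and the unit-power assumption are illustrative.

```python
import numpy as np

def awgn_channel(z, snr_db, rng):
    """Transmit a (roughly unit-power) latent z over a real AWGN channel
    at the given SNR in dB, returning the noisy received signal."""
    snr = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / snr)          # noise std for unit signal power
    return z + sigma * rng.standard_normal(z.shape)

def awgn_capacity_bits(snr_db):
    """Shannon capacity (bits per channel use) of a real AWGN channel:
    an upper bound on the mutual information the transmitted latent can
    carry to condition the receiver-side generative model."""
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * np.log2(1.0 + snr)
```

At 0 dB (SNR = 1) the bound is 0.5 bits per channel use; as the SNR drops, less information reaches the receiver and the generator must rely more heavily on its learned prior, which is exactly the regime the paper's semantic-inconsistency bound characterizes.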
Related papers
- Generalization Bounds for Transformer Channel Decoders [61.55280736553095]
This paper studies the generalization performance of ECCT from a learning-theoretic perspective. To the best of our knowledge, this work provides the first theoretical generalization guarantees for this class of decoders.
arXiv Detail & Related papers (2026-01-11T15:56:37Z) - DiT-JSCC: Rethinking Deep JSCC with Diffusion Transformers and Semantic Representations [32.904008725578606]
Generative joint source-channel coding (GJSCC) has emerged as a new Deep JSCC paradigm. We propose DiT-JSCC, a novel GJSCC backbone that can jointly learn a semantics-prioritized representation encoder and a diffusion transformer (DiT) based generative decoder. We show that DiT-JSCC consistently outperforms existing JSCC methods in both semantic consistency and visual quality, particularly in extreme regimes.
arXiv Detail & Related papers (2026-01-06T15:42:45Z) - SecDiff: Diffusion-Aided Secure Deep Joint Source-Channel Coding Against Adversarial Attacks [73.41290017870097]
SecDiff is a plug-and-play, diffusion-aided decoding framework. It significantly enhances the security and robustness of deep JSCC under adversarial wireless environments.
arXiv Detail & Related papers (2025-11-03T11:24:06Z) - Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach [65.47969413708344]
We introduce the concept of CF twins and design a conditional generative diffusion model (CGDM). We employ a variational inference technique to derive the evidence lower bound (ELBO) for the log-marginal distribution of the observed fine-grained CF conditioned on the coarse-grained CF. We show that the proposed approach exhibits significant improvement in reconstruction performance compared to the baselines.
arXiv Detail & Related papers (2025-05-12T01:36:06Z) - SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Deep joint source-channel coding (DeepJSCC) systems have recently demonstrated remarkable performance in wireless image transmission. Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality. We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - Diffusion-Driven Semantic Communication for Generative Models with Bandwidth Constraints [66.63250537475973]
This paper introduces a diffusion-driven semantic communication framework with advanced VAE-based compression for bandwidth-constrained generative models. Our experimental results demonstrate significant improvements in pixel-level metrics like peak signal-to-noise ratio (PSNR) and semantic metrics like learned perceptual image patch similarity (LPIPS).
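Of the two metric families this entry names, the pixel-level one is simple enough to state exactly. Below is a minimal numpy implementation of PSNR for 8-bit images; the function name and the `peak` default are illustrative conventions, not taken from the paper (LPIPS, by contrast, requires a learned network and is not reproduced here).

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    `peak` is the maximum possible pixel value (255 for 8-bit images).
    Identical images yield infinite PSNR, since the MSE is zero.
    """
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better for PSNR, whereas lower is better for LPIPS; generative decoders often trade a few dB of PSNR for markedly better LPIPS, which is the tension these papers study.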
arXiv Detail & Related papers (2024-07-26T02:34:25Z) - Rateless Stochastic Coding for Delay-Constrained Semantic Communication [5.882972817816777]
We consider the problem of joint source-channel coding for semantic communication from a rateless perspective. We propose a more general communication objective that minimizes the perceptual distance by incorporating a semantic-level reconstruction objective. We show that the proposed rateless stochastic coding scheme can achieve variable transmission rates while maintaining an excellent trade-off between distortion and perception.
arXiv Detail & Related papers (2024-06-28T10:27:06Z) - The Rate-Distortion-Perception-Classification Tradeoff: Joint Source Coding and Modulation via Inverse-Domain GANs [4.735670734773145]
We show the existence of a strict tradeoff between channel rate, distortion, perception, and classification accuracy.
We propose two image compression methods to navigate that tradeoff: the CO algorithm and the more general ID-GAN.
Experiments also demonstrate that the proposed ID-GAN algorithm balances image distortion, perception, and classification accuracy, and significantly outperforms traditional separation-based methods.
arXiv Detail & Related papers (2023-12-22T16:06:43Z) - Generative Joint Source-Channel Coding for Semantic Image Transmission [29.738666406095074]
Joint source-channel coding (JSCC) schemes using deep neural networks (DNNs) provide promising results in wireless image transmission.
We propose two novel JSCC schemes that leverage the perceptual quality of deep generative models (DGMs) for wireless image transmission.
arXiv Detail & Related papers (2022-11-24T19:14:27Z) - Nonlinear Transform Source-Channel Coding for Semantic Communications [7.81628437543759]
We propose a new class of highly efficient deep joint source-channel coding methods that can closely adapt to the source distribution under the nonlinear transform.
Our model incorporates the nonlinear transform as a strong prior to effectively extract the source semantic features.
Notably, the proposed NTSCC method can potentially support future semantic communications due to its strong content-aware ability.
arXiv Detail & Related papers (2021-12-21T03:30:46Z)