Bandwidth-Agile Image Transmission with Deep Joint Source-Channel Coding
- URL: http://arxiv.org/abs/2009.12480v2
- Date: Mon, 14 Jun 2021 00:37:10 GMT
- Title: Bandwidth-Agile Image Transmission with Deep Joint Source-Channel Coding
- Authors: David Burth Kurka and Deniz Gündüz
- Abstract summary: We consider the scenario in which images are transmitted progressively in layers over time or frequency.
DeepJSCC-$l$ is an innovative solution that uses convolutional autoencoders.
DeepJSCC-$l$ performs comparably to state-of-the-art digital progressive transmission schemes in the challenging low signal-to-noise ratio (SNR) and small-bandwidth regimes.
- Score: 7.081604594416339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose deep learning based communication methods for adaptive-bandwidth
transmission of images over wireless channels. We consider the scenario in
which images are transmitted progressively in layers over time or frequency,
and such layers can be aggregated by receivers in order to increase the quality
of their reconstructions. We investigate two scenarios, one in which the layers
are sent sequentially, and incrementally contribute to the refinement of a
reconstruction, and another in which the layers are independent and can be
retrieved in any order. These scenarios correspond to the well-known problems
of \textit{successive refinement} and \textit{multiple descriptions},
respectively, in the context of joint source-channel coding (JSCC). We propose
DeepJSCC-$l$, an innovative solution that uses convolutional autoencoders, and
present three architectures with different complexity trade-offs. To the best
of our knowledge, this is the first practical multiple-description JSCC scheme
developed and tested for practical information sources and channels. Numerical
results show that DeepJSCC-$l$ can learn to transmit the source progressively
with negligible losses in the end-to-end performance compared with a single
transmission. Moreover, DeepJSCC-$l$ performs comparably to state-of-the-art
digital progressive transmission schemes in the challenging low
signal-to-noise ratio (SNR) and small-bandwidth regimes, with the additional
advantage of graceful degradation with channel SNR.
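As a concrete illustration of the successive-refinement scenario described above, the sketch below trains a small layered convolutional autoencoder end to end through a differentiable AWGN channel, so that every prefix of the transmitted layers can be decoded on its own. This is only a minimal PyTorch sketch: the network sizes, the three-layer setup, the per-layer power normalization, and the prefix-averaged loss are illustrative assumptions, not the architectures or hyperparameters of DeepJSCC-$l$.

```python
# Minimal sketch of layered joint source-channel coding in the spirit of the
# successive-refinement setting. Layer widths, number of layers, and the
# simple decoders are illustrative choices, not the paper's architectures.
import torch
import torch.nn as nn


class LayeredEncoder(nn.Module):
    """Maps an image to L bandwidth layers of power-normalized channel symbols."""

    def __init__(self, num_layers=3, symbols_per_layer=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.PReLU(),
            nn.Flatten(),
        )
        feat = 64 * 8 * 8  # feature size for 32x32 inputs (e.g. CIFAR-10)
        self.heads = nn.ModuleList(
            [nn.Linear(feat, 2 * symbols_per_layer) for _ in range(num_layers)]
        )

    def forward(self, x):
        h = self.backbone(x)
        layers = []
        for head in self.heads:
            z = head(h)
            # Enforce unit average transmit power per layer.
            z = z / z.pow(2).mean(dim=1, keepdim=True).sqrt()
            layers.append(z)
        return layers


def awgn(z, snr_db):
    """Additive white Gaussian noise channel at the given SNR (unit signal power)."""
    noise_power = 10 ** (-snr_db / 10)
    return z + noise_power ** 0.5 * torch.randn_like(z)


class RefinementDecoder(nn.Module):
    """Reconstructs the image from the first k received layers (k = 1..L)."""

    def __init__(self, num_layers=3, symbols_per_layer=256):
        super().__init__()
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(2 * symbols_per_layer * (k + 1), 64 * 8 * 8), nn.PReLU(),
                nn.Unflatten(1, (64, 8, 8)),
                nn.ConvTranspose2d(64, 32, 5, stride=2, padding=2, output_padding=1), nn.PReLU(),
                nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
            )
            for k in range(num_layers)
        ])

    def forward(self, received_layers):
        k = len(received_layers)
        return self.decoders[k - 1](torch.cat(received_layers, dim=1))


# Training sketch: average the reconstruction loss over all prefixes of layers,
# so every partial reception (1, 2, ..., L layers) remains decodable.
enc, dec = LayeredEncoder(), RefinementDecoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
x = torch.rand(8, 3, 32, 32)          # placeholder batch of images in [0, 1]
tx = enc(x)
rx = [awgn(z, snr_db=1.0) for z in tx]
loss = sum(nn.functional.mse_loss(dec(rx[:k + 1]), x) for k in range(len(rx))) / len(rx)
opt.zero_grad(); loss.backward(); opt.step()
```

For the multiple-descriptions scenario, the decoder would instead be trained to reconstruct from arbitrary subsets of layers rather than only prefixes, so that the layers can be retrieved in any order.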
Related papers
- Learned Image Transmission with Hierarchical Variational Autoencoder [28.084648666081943]
We introduce an innovative hierarchical joint source-channel coding (HJSCC) framework for image transmission.
Our approach leverages a combination of bottom-up and top-down paths at the transmitter to autoregressively generate multiple hierarchical representations of the original image.
Our proposed model outperforms existing baselines in rate-distortion performance and maintains robustness against channel noise.
arXiv Detail & Related papers (2024-08-29T08:23:57Z)
- Distributed Deep Joint Source-Channel Coding with Decoder-Only Side Information [6.411633100057159]
We consider low-latency image transmission over a noisy wireless channel when correlated side information is present only at the receiver side.
We propose a novel neural network architecture that incorporates the decoder-only side information at multiple stages at the receiver side.
arXiv Detail & Related papers (2023-10-06T15:17:45Z)
- CommIN: Semantic Image Communications as an Inverse Problem with INN-Guided Diffusion Models [20.005671042281246]
We propose CommIN, which views the recovery of high-quality source images from degraded reconstructions as an inverse problem.
We show that our CommIN significantly improves the perceptual quality compared to DeepJSCC under extreme conditions.
arXiv Detail & Related papers (2023-10-02T12:06:58Z)
- High Perceptual Quality Wireless Image Delivery with Denoising Diffusion Models [10.763194436114194]
We consider the image transmission problem over a noisy wireless channel via deep learning-based joint source-channel coding (DeepJSCC).
We introduce a novel scheme that utilizes the range-null space decomposition of the target image.
We demonstrate significant improvements in distortion and perceptual quality of reconstructed images compared to standard DeepJSCC and the state-of-the-art generative learning-based method.
arXiv Detail & Related papers (2023-09-27T16:30:59Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Perceptual Learned Source-Channel Coding for High-Fidelity Image Semantic Transmission [7.692038874196345]
In this paper, we introduce adversarial losses to optimize deep JSCC.
Our new deep JSCC architecture combines the encoder, wireless channel, decoder/generator, and discriminator.
A user study confirms that, while achieving perceptually similar end-to-end image transmission quality, the proposed method saves about 50% of the wireless channel bandwidth cost.
arXiv Detail & Related papers (2022-05-26T03:05:13Z)
- Adaptive Information Bottleneck Guided Joint Source and Channel Coding for Image Transmission [132.72277692192878]
An adaptive information bottleneck (IB) guided joint source and channel coding (AIB-JSCC) is proposed for image transmission.
The goal of AIB-JSCC is to reduce the transmission rate while improving the image reconstruction quality.
Experimental results show that AIB-JSCC can significantly reduce the required amount of transmitted data and improve the reconstruction quality.
arXiv Detail & Related papers (2022-03-12T17:44:02Z)
- Optical-Flow-Reuse-Based Bidirectional Recurrent Network for Space-Time Video Super-Resolution [52.899234731501075]
Space-time video super-resolution (ST-VSR) simultaneously increases the spatial resolution and frame rate for a given video.
Existing methods typically struggle to efficiently leverage information from a large range of neighboring frames.
We propose a coarse-to-fine bidirectional recurrent neural network instead of using ConvLSTM to leverage knowledge between adjacent frames.
arXiv Detail & Related papers (2021-10-13T15:21:30Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency parts are processed with expensive operations, while the lower-frequency parts are assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Channel-Level Variable Quantization Network for Deep Image Compression [50.3174629451739]
We propose a channel-level variable quantization network that dynamically allocates more convolutions to significant channels and withdraws them from negligible channels.
Our method achieves superior performance and can produce much better visual reconstructions.
arXiv Detail & Related papers (2020-07-15T07:20:39Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI.
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)