A Relay System for Semantic Image Transmission based on Shared Feature
Extraction and Hyperprior Entropy Compression
- URL: http://arxiv.org/abs/2311.10492v1
- Date: Fri, 17 Nov 2023 12:45:30 GMT
- Title: A Relay System for Semantic Image Transmission based on Shared Feature
Extraction and Hyperprior Entropy Compression
- Authors: Wannian An, Zhicheng Bao, Haotai Liang, Chen Dong, and Xiaodong
- Abstract summary: This paper proposes a relay communication network for semantic image transmission based on shared feature extraction and hyperprior entropy compression.
Experimental results demonstrate that compared with other recent research methods, the proposed system has lower transmission overhead and higher semantic image transmission performance.
- Score: 10.094327559669859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the need for high-quality image reconstruction and restoration is
increasingly urgent. However, most image transmission systems may suffer from
image quality degradation or transmission interruption in the face of
interference such as channel noise and link fading. To solve this problem, a
relay communication network for semantic image transmission based on shared
feature extraction and hyperprior entropy compression (HEC) is proposed, in which
a shared feature extraction technique based on the Pearson correlation is used to
eliminate the portion of the extracted semantic latent features that is shared
between nodes. In addition, HEC is applied to resist the effects of channel noise
and link fading, and is carried out at the source node and the relay node,
respectively. Experimental results demonstrate that, compared with other recent
methods, the proposed system has lower transmission overhead and higher semantic
image transmission performance. In particular, under the same conditions, the
multi-scale structural similarity (MS-SSIM) of this system exceeds that of the
comparison method by approximately 0.2.
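The abstract does not spell out how the Pearson-correlation screening is performed, so the sketch below is only a rough illustration of the idea: the shared part of the latent is identified channel by channel by correlating the extracted latent against a reference latent assumed to be already available at the receiving node. The function names, the threshold `tau`, and the toy shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pearson_corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two flattened feature maps."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def split_shared_private(latent: np.ndarray, reference: np.ndarray, tau: float = 0.8):
    """Split a (C, H, W) latent into shared and private channel indices.

    Channels whose Pearson correlation with the corresponding reference
    channel is at least `tau` are treated as shared (redundant) and are
    not retransmitted; the remaining channels carry private information.
    """
    shared, private = [], []
    for c in range(latent.shape[0]):
        if pearson_corr(latent[c], reference[c]) >= tau:
            shared.append(c)
        else:
            private.append(c)
    return shared, private

# Toy usage: 192-channel, 8x8 latents where the first half is identical (shared).
rng = np.random.default_rng(0)
reference = rng.standard_normal((192, 8, 8))
latent = reference.copy()
latent[96:] = rng.standard_normal((96, 8, 8))
shared, private = split_shared_private(latent, reference)
print(len(shared), "shared channels,", len(private), "private channels")
```

In this reading, only the channels flagged as private would be forwarded to the hyperprior entropy compression stage at the source and relay nodes, while the shared channels are reused at the destination; the paper should be consulted for the exact splitting and thresholding rule.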
Related papers
- SC-CDM: Enhancing Quality of Image Semantic Communication with a Compact Diffusion Model [27.462224078883786]
We propose a generative semantic communication (SC) framework for wireless image transmission (denoted as SC-CDM).
We aim to redesign the Swin Transformer as a new backbone for efficient semantic feature extraction and compression.
We further increase the Peak Signal-to-Noise Ratio (PSNR) by over 17% on top of CNN-based DeepJSCC.
arXiv Detail & Related papers (2024-10-03T01:01:04Z)
- Semantic Successive Refinement: A Generative AI-aided Semantic Communication Framework [27.524671767937512]
We introduce a novel Generative AI Semantic Communication (GSC) system for single-user scenarios.
At the transmitter end, it employs a joint source-channel coding mechanism based on the Swin Transformer for efficient semantic feature extraction.
At the receiver end, an advanced Diffusion Model (DM) reconstructs high-quality images from degraded signals, enhancing perceptual details.
arXiv Detail & Related papers (2024-07-31T06:08:51Z)
- Diffusion-Aided Joint Source Channel Coding For High Realism Wireless Image Transmission [24.372996233209854]
DiffJSCC is a novel framework that produces high-realism images via the conditional diffusion denoising process.
It can achieve highly realistic reconstructions for 768x512 pixel Kodak images with only 3072 symbols (see the bandwidth arithmetic after this list).
arXiv Detail & Related papers (2024-04-27T00:12:13Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- A cross Transformer for image denoising [83.68175077524111]
We propose a cross Transformer denoising CNN (CTNet) with a serial block (SB), a parallel block (PB), and a residual block (RB).
CTNet is superior to some popular denoising methods in terms of real and synthetic image denoising.
arXiv Detail & Related papers (2023-10-16T13:53:19Z)
- CommIN: Semantic Image Communications as an Inverse Problem with INN-Guided Diffusion Models [20.005671042281246]
We propose CommIN, which views the recovery of high-quality source images from degraded reconstructions as an inverse problem.
We show that our CommIN significantly improves the perceptual quality compared to DeepJSCC under extreme conditions.
arXiv Detail & Related papers (2023-10-02T12:06:58Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Adaptive Information Bottleneck Guided Joint Source and Channel Coding for Image Transmission [132.72277692192878]
An adaptive information bottleneck (IB) guided joint source and channel coding (AIB-JSCC) is proposed for image transmission.
The goal of AIB-JSCC is to reduce the transmission rate while improving the image reconstruction quality.
Experimental results show that AIB-JSCC can significantly reduce the required amount of transmitted data and improve the reconstruction quality.
arXiv Detail & Related papers (2022-03-12T17:44:02Z)
- Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder, which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z)
- Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
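As referenced in the DiffJSCC entry above, the quoted figures imply an extreme compression budget. The quick check below assumes the standard 768x512 RGB Kodak resolution stated in the blurb; the symbol count comes from the blurb and everything else is plain arithmetic.

```python
# Bandwidth arithmetic for "768x512 pixel Kodak images with only 3072 symbols".
height, width, channels = 512, 768, 3
symbols = 3072

pixels = height * width              # 393,216 pixels
source_values = pixels * channels    # 1,179,648 RGB values

print(f"channel symbols per pixel: {symbols / pixels:.4f}")        # 0.0078, i.e. 1/128
print(f"bandwidth ratio:           {symbols / source_values:.4f}") # 0.0026, i.e. 1/384
```

At roughly one channel symbol per 128 pixels, most of the reconstruction quality has to come from the generative (diffusion) prior rather than from the transmitted symbols alone.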