Toward Adaptive Semantic Communications: Efficient Data Transmission via
Online Learned Nonlinear Transform Source-Channel Coding
- URL: http://arxiv.org/abs/2211.04339v3
- Date: Wed, 24 May 2023 15:13:21 GMT
- Title: Toward Adaptive Semantic Communications: Efficient Data Transmission via
Online Learned Nonlinear Transform Source-Channel Coding
- Authors: Jincheng Dai, Sixian Wang, Ke Yang, Kailin Tan, Xiaoqi Qin, Zhongwei
Si, Kai Niu, Ping Zhang
- Abstract summary: We propose an online learned joint source and channel coding approach that leverages the deep learning model's overfitting property.
Specifically, we update the off-the-shelf pre-trained models after deployment in a lightweight online fashion to adapt to the distribution shifts in source data and environment domain.
We take the overfitting concept to the extreme, proposing a series of implementation-friendly methods to adapt the model or representations to an individual data or channel state instance.
- Score: 11.101344530143303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emerging field of semantic communication is driving research on
end-to-end data transmission. By utilizing the powerful representation ability
of deep learning models, learned data transmission schemes have exhibited
superior performance to the established source and channel coding methods.
So far, however, research efforts have mainly concentrated on architecture and
model improvements toward a static target domain. Despite their successes, such
learned models are still suboptimal due to the limitations in model capacity
and imperfect optimization and generalization, particularly when the testing
data distribution or channel response is different from that adopted for model
training, as is likely the case in real-world deployments. To tackle this, we propose
a novel online learned joint source and channel coding approach that leverages
the deep learning model's overfitting property. Specifically, we update the
off-the-shelf pre-trained models after deployment in a lightweight online
fashion to adapt to the distribution shifts in source data and environment
domain. We take the overfitting concept to the extreme, proposing a series of
implementation-friendly methods to adapt the codec model or representations to
an individual data or channel state instance, which can further lead to
substantial gains in terms of the bandwidth ratio-distortion performance. The
proposed methods enable the communication-efficient adaptation for all
parameters in the network without sacrificing decoding speed. Our experiments,
including user study, on continually changing target source data and wireless
channel environments, demonstrate the effectiveness and efficiency of our
approach, which outperforms the state-of-the-art engineered transmission
scheme (VVC combined with 5G LDPC coded transmission).
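The instance-adaptive idea in the abstract (updating the codec or its representations after deployment to overfit a single source sample) can be sketched as follows. This is a minimal illustration, not the paper's NTSCC model: the linear decoder, dimensions, step size, and iteration count are all toy stand-ins, and only the latent representation is refined while the decoder stays frozen.

```python
import random

# Toy instance-adaptive latent refinement: keep the trained decoder frozen
# and take a few gradient steps on the latent of ONE source sample, so that
# this particular instance is reconstructed with lower distortion.
random.seed(0)
N, K = 16, 8
D = [[random.gauss(0, 1) for _ in range(K)] for _ in range(N)]  # frozen "decoder"
x = [random.gauss(0, 1) for _ in range(N)]                      # one source instance
z = [0.0] * K                                                   # initial latent

def decode(z):
    # Linear decoder standing in for the trained nonlinear transform decoder.
    return [sum(D[i][j] * z[j] for j in range(K)) for i in range(N)]

def distortion(z):
    return sum((r - t) ** 2 for r, t in zip(decode(z), x)) / N

lr = 0.05
before = distortion(z)
for _ in range(300):
    res = [r - t for r, t in zip(decode(z), x)]        # reconstruction error
    grad = [2.0 * sum(D[i][j] * res[i] for i in range(N)) / N for j in range(K)]
    z = [zj - lr * gj for zj, gj in zip(z, grad)]      # update the latent only
after = distortion(z)
```

Because only the transmitted representation changes and the decoder weights stay fixed, this kind of adaptation leaves decoding speed untouched, which mirrors the abstract's claim.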
Related papers
- Modeling of Time-varying Wireless Communication Channel with Fading and Shadowing [0.0]
We propose a new approach that combines a deep learning neural network with a mixture density network model to derive the conditional probability density function of the received power.
Experiments on the Nakagami fading and log-normal shadowing channel models with path loss and noise show that the new approach is more statistically accurate, faster, and more robust than previous deep-learning-based channel models.
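At evaluation time, a mixture density network's output layer parameterizes a Gaussian mixture, so the conditional density of the received power reduces to a weighted sum of Gaussian densities. The sketch below shows only that evaluation step; the mixture parameters are illustrative placeholders, not values produced by the paper's network.

```python
import math

def mixture_pdf(p, weights, means, sigmas):
    """Conditional density of received power p under a Gaussian mixture."""
    return sum(
        w * math.exp(-0.5 * ((p - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        for w, m, s in zip(weights, means, sigmas)
    )

# Toy parameters standing in for one network output (one channel condition):
# two equally weighted components centered at 0 dB and 2 dB, unit scale.
density_near = mixture_pdf(0.0, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
density_far = mixture_pdf(10.0, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```

In the full model, the weights, means, and scales would be functions of the conditioning input (e.g. distance or time), which is what lets one network represent a time-varying fading-plus-shadowing channel.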
arXiv Detail & Related papers (2024-05-13T21:30:50Z)
- Diffusion-based Neural Network Weights Generation [85.6725307453325]
We propose an efficient and adaptive transfer learning scheme through dataset-conditioned pretrained weights sampling.
Specifically, we use a latent diffusion model with a variational autoencoder that can reconstruct the neural network weights.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
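An uncertainty-based selection criterion of the kind IDM describes can be approximated by ranking unlabeled samples on the mean entropy of their predicted class distributions and picking the highest-scoring one. The snippet below is a minimal sketch with toy probability maps, not the paper's implementation; the function names are illustrative.

```python
import math

def mean_entropy(prob_maps):
    # Average Shannon entropy (nats) over per-pixel class distributions.
    ent = lambda dist: -sum(p * math.log(p) for p in dist if p > 0)
    return sum(ent(dist) for dist in prob_maps) / len(prob_maps)

def most_informative(predictions):
    # predictions: list of (sample_id, per-pixel class distributions).
    # The sample the model is least certain about is the most informative
    # candidate for one-shot adaptation.
    return max(predictions, key=lambda item: mean_entropy(item[1]))[0]

# Toy predictions: a confidently segmented sample vs. an uncertain one.
picked = most_informative([
    ("confident", [[0.99, 0.01], [0.98, 0.02]]),
    ("uncertain", [[0.50, 0.50], [0.60, 0.40]]),
])
```

Selecting high-entropy samples concentrates the adaptation budget on the inputs the model handles worst, which is what "facilitates quick adaptation and reduces redundant training" in the summary above.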
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective distributed training technique.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Joint Task and Data Oriented Semantic Communications: A Deep Separate Source-channel Coding Scheme [17.4244108919728]
To serve both data transmission and semantic tasks, joint data compression and semantic analysis has become a pivotal issue in semantic communications.
This paper proposes a deep separate source-channel coding framework for the joint task and data oriented semantic communications.
An iterative training algorithm is proposed to tackle the overfitting issue of deep learning models.
arXiv Detail & Related papers (2023-02-27T08:34:37Z)
- Rethinking the Role of Pre-Trained Networks in Source-Free Domain Adaptation [26.481422574715126]
Source-free domain adaptation (SFDA) aims to adapt a source model trained on a fully-labeled source domain to an unlabeled target domain.
Large-data pre-trained networks are used to initialize source models during source training, and subsequently discarded.
We propose to integrate the pre-trained network into the target adaptation process as it has diversified features important for generalization.
arXiv Detail & Related papers (2022-12-15T02:25:22Z)
- Nonlinear Transform Source-Channel Coding for Semantic Communications [7.81628437543759]
We propose a new class of highly efficient deep joint source-channel coding methods that can closely adapt to the source distribution under the nonlinear transform.
Our model incorporates the nonlinear transform as a strong prior to effectively extract the source semantic features.
Notably, the proposed NTSCC method can potentially support future semantic communications due to its strong content-aware capability.
arXiv Detail & Related papers (2021-12-21T03:30:46Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach [3.983055670167878]
A low-end edge device transmits the extracted feature vector of a local data sample to a powerful edge server for processing.
It is critical to encode the data into an informative and compact representation for low-latency inference given the limited bandwidth.
We propose a learning-based communication scheme that jointly optimizes feature extraction, source coding, and channel coding.
arXiv Detail & Related papers (2021-02-08T12:53:32Z)
- Modality Compensation Network: Cross-Modal Adaptation for Action Recognition [77.24983234113957]
We propose a Modality Compensation Network (MCN) to explore the relationships of different modalities.
Our model bridges data from source and auxiliary modalities by a modality adaptation block to achieve adaptive representation learning.
Experimental results reveal that MCN outperforms state-of-the-art approaches on four widely-used action recognition benchmarks.
arXiv Detail & Related papers (2020-01-31T04:51:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.