Semantics Alignment via Split Learning for Resilient Multi-User Semantic
Communication
- URL: http://arxiv.org/abs/2310.09394v1
- Date: Fri, 13 Oct 2023 20:29:55 GMT
- Title: Semantics Alignment via Split Learning for Resilient Multi-User Semantic
Communication
- Authors: Jinhyuk Choi, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis,
Seong-Lyun Kim
- Abstract summary: Recent studies on semantic communication rely on neural network (NN) based transceivers such as deep joint source and channel coding (DeepJSCC).
Unlike traditional transceivers, these neural transceivers are trainable using actual source data and channels, enabling them to extract and communicate semantics.
We propose a distributed learning based solution, which leverages split learning (SL) and partial NN fine-tuning techniques.
- Score: 56.54422521327698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on semantic communication commonly rely on neural network (NN)
based transceivers such as deep joint source and channel coding (DeepJSCC).
Unlike traditional transceivers, these neural transceivers are trainable using
actual source data and channels, enabling them to extract and communicate
semantics. On the flip side, each neural transceiver is inherently biased
towards specific source data and channels, making it difficult for different
transceivers to understand each other's intended semantics, particularly upon
their initial encounter. To align semantics across multiple neural
transceivers, we propose a
distributed learning based solution, which leverages split learning (SL) and
partial NN fine-tuning techniques. In this method, referred to as SL with layer
freezing (SLF), each encoder downloads a misaligned decoder, and locally
fine-tunes a fraction of these encoder-decoder NN layers. By adjusting this
fraction, SLF controls computing and communication costs. Simulation results
confirm the effectiveness of SLF in aligning semantics under different source
data and channel dissimilarities, in terms of classification accuracy,
reconstruction error, and the recovery time needed to restore comprehension of
the intended semantics after misalignment.
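The layer-freezing idea in SLF, where each encoder downloads a misaligned decoder and fine-tunes only a fraction of the combined encoder-decoder layers, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, and the assumption that the unfrozen layers are those nearest the encoder-decoder boundary, are choices made here for illustration.

```python
import math

def slf_trainable_mask(n_encoder_layers, n_decoder_layers, fraction):
    """Return a boolean mask over the concatenated encoder+decoder layers,
    True for layers to fine-tune. Only a `fraction` of all layers is
    unfrozen (assumed here to be those adjacent to the encoder-decoder
    boundary); the rest stay frozen, which is how SLF trades alignment
    quality against computing and communication cost."""
    n_total = n_encoder_layers + n_decoder_layers
    n_train = round(fraction * n_total)
    e = min(n_encoder_layers, math.ceil(n_train / 2))  # encoder tail layers
    d = min(n_decoder_layers, n_train - e)             # decoder head layers
    mask = [False] * n_total
    for i in range(n_encoder_layers - e, n_encoder_layers + d):
        mask[i] = True
    return mask

# With 4 encoder and 4 decoder layers, fraction=0.5 unfreezes the
# 4 layers straddling the split point; fraction=0.0 freezes everything.
print(slf_trainable_mask(4, 4, 0.5))
```

Setting `fraction` to 0 recovers a fully frozen (no fine-tuning) transceiver, while 1.0 fine-tunes the whole encoder-decoder pair, matching the cost-control knob described in the abstract.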
Related papers
- Alternate Learning based Sparse Semantic Communications for Visual
Transmission [13.319988526342527]
Semantic communication (SemCom) demonstrates strong superiority over conventional bit-level accurate transmission.
In this paper, we propose an alternate learning based SemCom system for visual transmission, named SparseSBC.
arXiv Detail & Related papers (2023-07-31T03:34:16Z)
- Transformer-based Joint Source Channel Coding for Textual Semantic
Communication [23.431590618978948]
Space-Air-Ground-Sea integrated network calls for more robust and secure transmission techniques against jamming.
We propose a textual semantic transmission framework for robust transmission, which utilizes the advanced natural language processing techniques to model and encode sentences.
arXiv Detail & Related papers (2023-07-23T08:42:05Z)
- Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
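The distinction between lower-order and higher-order input statistics can be made concrete with a toy example, constructed here for illustration and not taken from the paper: two classes with identical means (first-order statistics) that differ only in variance (a second-order statistic), so a mean-based decision rule fails while a variance-based one succeeds.

```python
import random

random.seed(0)

def sample(label, n):
    # Class 0: N(0, 1); class 1: N(0, 3). The means coincide, so only
    # second-order statistics separate the classes.
    sigma = 1.0 if label == 0 else 3.0
    return [random.gauss(0.0, sigma) for _ in range(n)]

xs = sample(0, 1000) + sample(1, 1000)
ys = [0] * 1000 + [1] * 1000

# "Lower-order" rule: threshold on the raw value (sensitive to the mean only).
mean_acc = sum((x > 0.0) == (y == 1) for x, y in zip(xs, ys)) / len(ys)

# "Higher-order" rule: threshold on x^2 (sensitive to the variance).
var_acc = sum((x * x > 2.0) == (y == 1) for x, y in zip(xs, ys)) / len(ys)

print(f"mean-based accuracy: {mean_acc:.2f}, variance-based accuracy: {var_acc:.2f}")
```

The mean-based rule hovers near chance while the variance-based rule does substantially better, mirroring the claim that inputs can carry information only in statistics beyond the first order.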
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Learning Based Joint Coding-Modulation for Digital Semantic
Communication Systems [45.81474044790071]
In learning-based semantic communications, neural networks have replaced different building blocks in traditional communication systems.
The intrinsic mechanism of neural network based digital modulation is to map the continuous output of the neural network encoder into discrete constellation symbols.
We develop a joint coding-modulation scheme for digital semantic communications with BPSK modulation.
arXiv Detail & Related papers (2022-08-11T08:58:35Z)
- Deep Learning-Enabled Semantic Communication Systems with Task-Unaware
Transmitter and Dynamic Data [43.308832291174106]
This paper proposes a new neural network-based semantic communication system for image transmission.
The proposed method can be adaptive to observable datasets while keeping high performance in terms of both data recovery and task execution.
arXiv Detail & Related papers (2022-04-30T13:45:50Z)
- Volumetric Transformer Networks [88.85542905676712]
We introduce a learnable module, the volumetric transformer network (VTN).
VTN predicts channel-wise warping fields so as to reconfigure intermediate CNN features spatially and channel-wise.
Our experiments show that VTN consistently boosts the features' representation power and consequently the networks' accuracy on fine-grained image recognition and instance-level image retrieval.
arXiv Detail & Related papers (2020-07-18T14:00:12Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip [41.28049430114734]
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF can achieve state-of-the-art performances on both compression and error correction benchmarks and outperform the baselines by a significant margin.
arXiv Detail & Related papers (2020-04-03T10:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.