Transformer-based Joint Source Channel Coding for Textual Semantic
Communication
- URL: http://arxiv.org/abs/2307.12266v1
- Date: Sun, 23 Jul 2023 08:42:05 GMT
- Title: Transformer-based Joint Source Channel Coding for Textual Semantic
Communication
- Authors: Shicong Liu, Zhen Gao, Gaojie Chen, Yu Su, Lu Peng
- Abstract summary: The Space-Air-Ground-Sea integrated network calls for more robust and secure transmission techniques against jamming.
We propose a textual semantic transmission framework for robust transmission, which utilizes advanced natural language processing techniques to model and encode sentences.
- Score: 23.431590618978948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Space-Air-Ground-Sea integrated network calls for more robust and secure
transmission techniques against jamming. In this paper, we propose a textual
semantic transmission framework for robust transmission, which utilizes
advanced natural language processing techniques to model and encode sentences.
Specifically, the textual sentences are first split into tokens using the
WordPiece algorithm and embedded into token vectors for semantic extraction
by a Transformer-based encoder. The encoded data are quantized into a
fixed-length binary sequence for transmission, for which binary erasure,
binary symmetric, and deletion channels are considered. The received binary
sequences are then decoded by the Transformer decoder into tokens used for
sentence reconstruction. Our proposed approach leverages the power of neural
networks and the attention mechanism to provide reliable and efficient
communication of textual data in challenging wireless environments, and
simulation results on semantic similarity and bilingual evaluation understudy
(BLEU) scores demonstrate the superiority of the proposed model in semantic
transmission.
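
The paper does not include reference code; as a loose illustration of the transmission chain described above, the following NumPy sketch quantizes a real-valued latent to a fixed-length bit sequence and passes it through the three binary channel models named in the abstract. The function names and the uniform quantization scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def quantize_to_bits(latent, n_bits_per_dim=4):
    """Uniformly quantize a latent vector in [-1, 1] to a fixed-length
    bit sequence (illustrative scheme, not the paper's quantizer)."""
    levels = 2 ** n_bits_per_dim
    idx = np.clip(np.round((latent + 1) / 2 * (levels - 1)).astype(int),
                  0, levels - 1)
    # Unpack each level index into its fixed-width binary representation.
    bits = (idx[:, None] >> np.arange(n_bits_per_dim)[::-1]) & 1
    return bits.astype(np.uint8).reshape(-1)

def binary_symmetric_channel(bits, p, rng):
    """Flip each bit independently with probability p."""
    return bits ^ (rng.random(bits.shape) < p).astype(np.uint8)

def binary_erasure_channel(bits, p, rng):
    """Erase each bit independently with probability p; erasures are -1."""
    out = bits.astype(np.int8)
    out[rng.random(bits.shape) < p] = -1
    return out

def deletion_channel(bits, p, rng):
    """Delete each bit independently with probability p (length shrinks)."""
    return bits[rng.random(bits.shape) >= p]

rng = np.random.default_rng(0)
latent = np.tanh(rng.normal(size=8))      # stand-in for the encoder output
tx = quantize_to_bits(latent)             # fixed-length binary sequence
rx_bsc = binary_symmetric_channel(tx, 0.1, rng)
rx_bec = binary_erasure_channel(tx, 0.1, rng)
rx_del = deletion_channel(tx, 0.1, rng)   # note: shorter than tx
```

The deletion channel is the hardest of the three for a decoder, since it destroys bit positions rather than values; this is why the received sequences must be re-aligned by the attention-based decoder rather than by position alone.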
Related papers
- Generative Semantic Communication for Text-to-Speech Synthesis [39.8799066368712]
This paper develops a novel generative semantic communication framework for text-to-speech synthesis.
We employ a transformer encoder and a diffusion model to achieve efficient semantic coding without introducing significant communication overhead.
arXiv Detail & Related papers (2024-10-04T14:18:31Z)
- Latency-Aware Generative Semantic Communications with Pre-Trained Diffusion Models [43.27015039765803]
We develop a latency-aware semantic communications framework with pre-trained generative models.
We demonstrate ultra-low-rate, low-latency, and channel-adaptive semantic communications.
arXiv Detail & Related papers (2024-03-25T23:04:09Z)
- Reasoning with the Theory of Mind for Pragmatic Semantic Communication [62.87895431431273]
A pragmatic semantic communication framework is proposed in this paper.
It enables effective goal-oriented information sharing between two intelligent agents.
Numerical evaluations demonstrate the framework's ability to achieve efficient communication with a reduced number of bits.
arXiv Detail & Related papers (2023-11-30T03:36:19Z)
- Semantics Alignment via Split Learning for Resilient Multi-User Semantic Communication [56.54422521327698]
Recent studies on semantic communication rely on neural network (NN)-based transceivers such as deep joint source and channel coding (DeepJSCC).
Unlike traditional transceivers, these neural transceivers are trainable using actual source data and channels, enabling them to extract and communicate semantics.
We propose a distributed learning-based solution, which leverages split learning (SL) and partial NN fine-tuning techniques; a minimal sketch of the split follows this entry.
arXiv Detail & Related papers (2023-10-13T20:29:55Z)
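
As a rough, hypothetical illustration of the split-learning idea referenced above (not the authors' implementation), the PyTorch sketch below splits a toy autoencoder at a cut layer and fine-tunes only the device-side encoder while the shared server-side decoder stays frozen; all dimensions and names are invented.

```python
import torch
import torch.nn as nn

# Toy transceiver split at a cut layer: the encoder runs on the device,
# the decoder on the server.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

# Partial fine-tuning: freeze the shared decoder, adapt only the local encoder.
for param in decoder.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(8, 32)           # stand-in for a batch of source data
for _ in range(100):
    z = encoder(x)               # device computes the cut-layer activation
    x_hat = decoder(z)           # server completes the forward pass
    loss = nn.functional.mse_loss(x_hat, x)
    optimizer.zero_grad()
    loss.backward()              # gradients update only the encoder weights
    optimizer.step()
```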
- Alternate Learning based Sparse Semantic Communications for Visual Transmission [13.319988526342527]
Semantic communication (SemCom) demonstrates strong superiority over conventional bit-level accurate transmission.
In this paper, we propose an alternate learning based SemCom system for visual transmission, named SparseSBC.
arXiv Detail & Related papers (2023-07-31T03:34:16Z)
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low; a generic sketch of such a perturbation follows this entry.
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
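
The attack idea above can be made concrete with a textbook FGSM-style test-time perturbation of the transmitted latent. This is a generic construction under assumed encoder/decoder modules, not the paper's specific multi-domain attack; every name below is illustrative.

```python
import torch
import torch.nn as nn

def fgsm_latent_attack(decoder, z, target, eps=0.05):
    """One FGSM step on the transmitted latent z: a small perturbation that
    pushes the receiver's reconstruction toward an attacker-chosen target."""
    z = z.detach().requires_grad_(True)
    loss = nn.functional.mse_loss(decoder(z), target)
    loss.backward()
    # Step against the gradient to shrink the distance to the target,
    # keeping the perturbation small (and the reconstruction loss low).
    return (z - eps * z.grad.sign()).detach()

# Hypothetical usage with stand-in modules:
decoder = nn.Linear(16, 32)
z = torch.randn(4, 16)        # latent as it leaves the transmitter
target = torch.randn(4, 32)   # semantics the attacker wants decoded
z_adv = fgsm_latent_attack(decoder, z, target)
```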
- Semantic-Native Communication: A Simplicial Complex Perspective [50.099494681671224]
We study semantic communication from a topological space perspective.
A transmitter first maps its data into a $k$-order simplicial complex and then learns its high-order correlations.
The receiver decodes the structure and infers the missing or distorted data.
arXiv Detail & Related papers (2022-10-30T22:33:44Z)
- Error Correction Code Transformer [92.10654749898927]
We propose, for the first time, extending the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
Each dimension of the channel output is embedded into a high-dimensional vector so that the bit information can be better represented and processed separately; a minimal sketch follows this entry.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
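
To ground the embedding step just described, here is a minimal PyTorch sketch in the spirit of that paper: each scalar channel output is lifted to a high-dimensional vector and the sequence of bit positions is processed by a standard Transformer encoder. The layer sizes and the per-bit output head are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ToyCodeDecoder(nn.Module):
    """Soft-decode an n-bit noisy codeword with a Transformer encoder."""
    def __init__(self, n_bits=16, d_model=64):
        super().__init__()
        # Lift each scalar channel output to a d_model-dimensional vector.
        self.lift = nn.Linear(1, d_model)
        self.pos = nn.Parameter(torch.randn(n_bits, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)   # per-bit logit

    def forward(self, y):   # y: (batch, n_bits) soft channel observations
        h = self.lift(y.unsqueeze(-1)) + self.pos
        return self.head(self.backbone(h)).squeeze(-1)

model = ToyCodeDecoder()
logits = model(torch.randn(2, 16))   # noisy codewords in, bit logits out
bits = (logits > 0).int()            # hard decisions on the decoded bits
```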
- Context-Aware Transformer Transducer for Speech Recognition [21.916660252023707]
We present a novel context-aware transformer transducer (CATT) network that improves the state-of-the-art transformer-based ASR system by taking advantage of contextual signals.
We show that CATT, using a BERT-based context encoder, improves the word error rate of the baseline transformer transducer and outperforms an existing deep contextual model by 24.2% and 19.4%, respectively.
arXiv Detail & Related papers (2021-11-05T04:14:35Z)
- Bi-Decoder Augmented Network for Neural Machine Translation [108.3931242633331]
We propose a novel Bi-Decoder Augmented Network (BiDAN) for the neural machine translation task.
Since each decoder transforms the representations of the input text into its corresponding language, jointly training with the two target ends gives the shared encoder the potential to produce a language-independent semantic space; a toy sketch follows this entry.
arXiv Detail & Related papers (2020-01-14T02:05:14Z)
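
To make the shared-encoder intuition above concrete, here is a toy sketch with one encoder and two language-specific output heads trained jointly; all vocabulary sizes, dimensions, and module choices are invented for illustration and do not reproduce BiDAN.

```python
import torch
import torch.nn as nn

d_model, src_vocab, vocab_a, vocab_b = 64, 1000, 1200, 1100

embed = nn.Embedding(src_vocab, d_model)
encoder = nn.GRU(d_model, d_model, batch_first=True)  # shared encoder
head_a = nn.Linear(d_model, vocab_a)   # stand-in decoder for language A
head_b = nn.Linear(d_model, vocab_b)   # stand-in decoder for language B

src = torch.randint(0, src_vocab, (4, 10))   # toy source batch
enc_out, _ = encoder(embed(src))             # (4, 10, d_model)

tgt_a = torch.randint(0, vocab_a, (4, 10))   # toy reference translations
tgt_b = torch.randint(0, vocab_b, (4, 10))
ce = nn.CrossEntropyLoss()
# Jointly minimizing both target-side losses pushes the shared encoding
# toward a language-independent semantic space.
loss = ce(head_a(enc_out).reshape(-1, vocab_a), tgt_a.reshape(-1)) + \
       ce(head_b(enc_out).reshape(-1, vocab_b), tgt_b.reshape(-1))
loss.backward()
```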
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.