Transformers for Green Semantic Communication: Less Energy, More
Semantics
- URL: http://arxiv.org/abs/2310.07592v2
- Date: Mon, 19 Feb 2024 01:52:25 GMT
- Title: Transformers for Green Semantic Communication: Less Energy, More
Semantics
- Authors: Shubhabrata Mukherjee, Cory Beard, and Sejun Song (School of Science
and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA)
- Abstract summary: "Energy-d Semantic Loss" function addresses the challenge of balancing semantic information loss and energy consumption.
It can save up to 90% of energy while achieving a 44% improvement in semantic similarity performance during inference.
This work paves the way for energy-efficient neural network selection and the development of greener semantic communication architectures.
- Score: 0.3226483876828104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic communication aims to transmit meaningful and effective information,
rather than focusing on individual symbols or bits. This results in benefits
like reduced latency, bandwidth usage, and higher throughput compared with
traditional communication. However, semantic communication poses significant
challenges due to the need for universal metrics to benchmark the joint effects
of semantic information loss and practical energy consumption. This research
presents a novel multi-objective loss function named "Energy-Optimized Semantic
Loss" (EOSL), addressing the challenge of balancing semantic information loss
and energy consumption. Through comprehensive experiments on transformer
models, including CPU and GPU energy usage, it is demonstrated that EOSL-based
encoder model selection can save up to 90% of energy while achieving a 44%
improvement in semantic similarity performance during inference in this
experiment. This work paves the way for energy-efficient neural network
selection and the development of greener semantic communication architectures.
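
As a rough illustration of the multi-objective idea behind EOSL, the Python sketch below scores each candidate encoder by a weighted combination of semantic loss and min-max normalized measured energy and selects the lowest-scoring model. The linear weighting, the normalization, and every name and number are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of an EOSL-style selection criterion (illustrative only):
# each candidate encoder gets a score combining normalized semantic loss and
# normalized measured energy; the lowest combined score wins.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    semantic_similarity: float  # e.g., cosine similarity of embeddings, in [0, 1]
    energy_joules: float        # measured CPU+GPU energy for one inference pass

def eosl_style_score(c, pool, lam=0.5):
    """Weighted sum of semantic loss and min-max normalized energy; lam trades
    semantics against energy. The linear form is an assumption."""
    e_min = min(x.energy_joules for x in pool)
    e_max = max(x.energy_joules for x in pool)
    energy_norm = (c.energy_joules - e_min) / (e_max - e_min + 1e-12)
    semantic_loss = 1.0 - c.semantic_similarity
    return (1.0 - lam) * semantic_loss + lam * energy_norm

def select_encoder(pool, lam=0.5):
    return min(pool, key=lambda c: eosl_style_score(c, pool, lam))

pool = [
    Candidate("small-transformer", semantic_similarity=0.82, energy_joules=3.1),
    Candidate("base-transformer", semantic_similarity=0.88, energy_joules=9.4),
    Candidate("large-transformer", semantic_similarity=0.90, energy_joules=41.0),
]
print(select_encoder(pool, lam=0.5).name)  # trades a little similarity for far less energy

Shifting lam toward 0 recovers a purely semantics-driven choice, while lam near 1 prioritizes energy alone.
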
Related papers
- MetaGreen: Meta-Learning Inspired Transformer Selection for Green Semantic Communication [0.3850666668546735]
The "Energy-Optimized Semantic Loss" (EOSL) function balances semantic information loss and energy consumption.
We show that EOSL-based transformer model selection achieves up to 83% better similarity-to-power ratio (SPR) compared to BLEU score-based selection.
We extend the applicability of EOSL to diverse and varying contexts, inspired by the principles of Meta-Learning.
arXiv Detail & Related papers (2024-06-22T00:49:40Z) - Agent-driven Generative Semantic Communication with Cross-Modality and Prediction [57.335922373309074]
- Agent-driven Generative Semantic Communication with Cross-Modality and Prediction [57.335922373309074]
We propose a novel agent-driven generative semantic communication framework based on reinforcement learning.
In this work, we develop an agent-assisted semantic encoder with cross-modality capability, which can track semantic changes and channel conditions to perform adaptive semantic extraction and sampling.
The effectiveness of the designed models has been verified using the UA-DETRAC dataset, demonstrating the performance gains of the overall A-GSC framework.
arXiv Detail & Related papers (2024-04-10T13:24:27Z) - Dynamic Relative Representations for Goal-Oriented Semantic Communications [13.994922919058922]
The semantic and effectiveness aspects of communication will play a fundamental role in 6G wireless networks.
In latent space communication, a key challenge is misalignment within the high-dimensional representations in which deep neural networks encode data.
This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches.
arXiv Detail & Related papers (2024-03-25T17:48:06Z) - An Energy-Efficient Ensemble Approach for Mitigating Data Incompleteness in IoT Applications [0.0]
- An Energy-Efficient Ensemble Approach for Mitigating Data Incompleteness in IoT Applications [0.0]
It is important to build IoT-based Machine Learning systems that are robust against data incompleteness while simultaneously being energy efficient.
ENAMLE is a proactive, energy-aware technique for mitigating the impact of concurrent missing data.
We present extensive experimental studies on two distinct datasets that demonstrate the energy efficiency of ENAMLE.
arXiv Detail & Related papers (2024-03-15T15:01:48Z) - Measuring the Energy Consumption and Efficiency of Deep Neural Networks:
An Empirical Analysis and Design Recommendations [0.49478969093606673]
The BUTTER-E dataset is an augmentation of the BUTTER Empirical Deep Learning dataset.
This dataset reveals the complex relationship between dataset size, network structure, and energy use.
We propose a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy.
arXiv Detail & Related papers (2024-03-13T00:27:19Z) - Reasoning with the Theory of Mind for Pragmatic Semantic Communication [62.87895431431273]
- Reasoning with the Theory of Mind for Pragmatic Semantic Communication [62.87895431431273]
A pragmatic semantic communication framework is proposed in this paper.
It enables effective goal-oriented information sharing between two intelligent agents.
Numerical evaluations demonstrate the framework's ability to achieve efficient communication with a reduced amount of bits.
arXiv Detail & Related papers (2023-11-30T03:36:19Z) - Communication-Efficient Framework for Distributed Image Semantic
Wireless Transmission [68.69108124451263]
A federated learning-based semantic communication (FLSC) framework is proposed for multi-task distributed image transmission with IoT devices.
Each link is composed of a hierarchical vision transformer (HVT)-based extractor and a task-adaptive translator.
A channel state information-based multiple-input multiple-output transmission module is designed to combat channel fading and noise.
arXiv Detail & Related papers (2023-08-07T16:32:14Z) - Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic
Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (at the end user) of data transmission.
Neuro-symbolic (NeSy) AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure which generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z) - To Talk or to Work: Flexible Communication Compression for Energy
Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodic global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z) - Resource-Constrained On-Device Learning by Dynamic Averaging [7.720999661966942]
Communication between data-generating devices is partially responsible for a growing portion of the world's power consumption.
For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially.
This paper investigates an approach to communication-efficient on-device learning of integer exponential families executed on low-power processors.
arXiv Detail & Related papers (2020-09-25T09:29:10Z) - Communication Efficient Federated Learning with Energy Awareness over
- Communication Efficient Federated Learning with Energy Awareness over Wireless Networks [51.645564534597625]
In federated learning (FL), the parameter server and the mobile devices share the training parameters over wireless links.
We adopt the idea of SignSGD in which only the signs of the gradients are exchanged.
Two optimization problems are formulated and solved to optimize the learning performance.
Considering that the data may be distributed across the mobile devices in a highly uneven fashion in FL, a sign-based algorithm is proposed.
arXiv Detail & Related papers (2020-04-15T21:25:13Z)