Learning to Communicate with Intent: An Introduction
- URL: http://arxiv.org/abs/2211.09613v2
- Date: Fri, 18 Nov 2022 11:33:05 GMT
- Title: Learning to Communicate with Intent: An Introduction
- Authors: Miguel Angel Gutierrez-Estevez, Yiqun Wu, Chan Zhou
- Abstract summary: We propose a novel framework to learn how to transmit messages over a wireless communication channel based on the end-goal of the communication.
This stands in stark contrast to classical communication systems, where the objective is to reproduce at the receiver side, either exactly or approximately, the message sent by the transmitter.
- Score: 2.007345596217044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel framework to learn how to communicate with intent, i.e.,
to transmit messages over a wireless communication channel based on the
end-goal of the communication. This stands in stark contrast to classical
communication systems, where the objective is to reproduce at the receiver side,
either exactly or approximately, the message sent by the transmitter, regardless
of the end-goal. Our procedure is general enough that it can be adapted to any
type of goal or task, so long as said task is an (almost-everywhere)
differentiable function over which gradients can be propagated. We focus on
supervised learning and reinforcement learning (RL) tasks, and propose
algorithms to learn the communication system and the task jointly in an
end-to-end manner. We then delve deeper into the transmission of images and
propose two systems, one for the classification of images and a second one to
play an Atari game based on RL. The performance is compared with a joint source
and channel coding (JSCC) communication system designed to minimize the
reconstruction error, and results show a substantial overall improvement. Further, for
the RL task, we show that while a JSCC strategy is not better than a random
action selection strategy, with our approach we get close to the upper bound
even for low SNRs.
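The end-to-end idea in the abstract can be illustrated with a minimal sketch: a learned encoder transmits symbols over an additive-noise channel, and the training loss is the downstream task loss (here, classification) rather than reconstruction error. Because the channel noise is additive, gradients flow straight through it to the transmitter. All of the code below (the toy data, the linear encoder/classifier, and the hyperparameters) is an illustrative assumption, not the paper's actual architecture; real systems would also constrain transmit power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2D.
n, d, k, n_classes = 200, 2, 2, 2
X = np.vstack([rng.normal(-1.0, 0.3, (n // 2, d)),
               rng.normal(1.0, 0.3, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

W_enc = rng.normal(0, 0.1, (d, k))        # transmitter: linear encoder
W_dec = rng.normal(0, 0.1, (k, n_classes))  # receiver: linear classifier
lr, sigma = 0.1, 0.1                       # step size, channel noise std (SNR knob)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):
    s = X @ W_enc                              # encode into channel symbols
    r = s + sigma * rng.normal(size=s.shape)   # AWGN channel (additive, so dL/ds = dL/dr)
    p = softmax(r @ W_dec)                     # task head: class probabilities
    g = p.copy()
    g[np.arange(n), y] -= 1.0                  # d(cross-entropy)/d(logits)
    g /= n
    grad_dec = r.T @ g                         # gradient at the receiver
    grad_enc = X.T @ (g @ W_dec.T)             # gradient propagated through the channel
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Evaluate the task (not reconstruction) over a fresh noisy transmission.
r_test = X @ W_enc + sigma * rng.normal(size=(n, k))
acc = (softmax(r_test @ W_dec).argmax(axis=1) == y).mean()
print(f"task accuracy over the noisy channel: {acc:.2f}")
```

Note that nothing in the loss asks the receiver to recover X; the system is free to learn whatever channel representation best serves the classification goal, which is exactly the contrast with JSCC that the abstract draws.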
Related papers
- Context-aware Communication for Multi-agent Reinforcement Learning [6.109127175562235]
We develop CACOM, a context-aware communication scheme for multi-agent reinforcement learning (MARL).
In the first stage, agents exchange coarse representations in a broadcast fashion, providing context for the second stage.
Following this, agents utilize attention mechanisms in the second stage to selectively generate messages personalized for the receivers.
To evaluate the effectiveness of CACOM, we integrate it with both actor-critic and value-based MARL algorithms.
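The two-stage pattern described above (broadcast coarse context, then attention-personalized messages) can be sketched as follows. The projections, dimensions, and the exact way attention weights scale each message are illustrative assumptions for this listing, not CACOM's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

n_agents, d_obs, d_ctx, d_msg = 4, 8, 4, 4
obs = rng.normal(size=(n_agents, d_obs))   # each agent's local observation

W_ctx = rng.normal(0, 0.5, (d_obs, d_ctx))  # stage 1: coarse context encoder
W_q = rng.normal(0, 0.5, (d_ctx, d_msg))    # stage 2: attention projections
W_k = rng.normal(0, 0.5, (d_ctx, d_msg))
W_v = rng.normal(0, 0.5, (d_ctx, d_msg))

# Stage 1: coarse representations, broadcast so every agent sees all of them.
context = obs @ W_ctx

def personalized_messages(context):
    """Return messages[i, j]: the message sender i generates for receiver j."""
    Q, K, V = context @ W_q, context @ W_k, context @ W_v
    scores = Q @ K.T / np.sqrt(d_msg)           # sender-receiver relevance
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)           # softmax over receivers
    # Sender i's value vector, scaled by how relevant receiver j is to i.
    return w[:, :, None] * V[:, None, :]

messages = personalized_messages(context)
print(messages.shape)  # (4, 4, 4): one d_msg-dim message per sender-receiver pair
```

The point of the second stage is selectivity: instead of one broadcast message, each sender emits a differently weighted message per receiver, informed by the context gathered in the first stage.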
arXiv Detail & Related papers (2023-12-25T03:33:08Z) - Batch Selection and Communication for Active Learning with Edge Labeling [54.64985724916654]
Communication-Constrained Bayesian Active Knowledge Distillation (CC-BAKD)
This work introduces Communication-Constrained Bayesian Active Knowledge Distillation (CC-BAKD)
arXiv Detail & Related papers (2023-11-14T10:23:00Z) - Generative AI-aided Joint Training-free Secure Semantic Communications
via Multi-modal Prompts [89.04751776308656]
This paper proposes a GAI-aided SemCom system with multi-modal prompts for accurate content decoding.
In response to security concerns, we introduce the application of covert communications aided by a friendly jammer.
arXiv Detail & Related papers (2023-09-05T23:24:56Z) - Multi-Receiver Task-Oriented Communications via Multi-Task Deep Learning [49.83882366499547]
This paper studies task-oriented, otherwise known as goal-oriented, communications in a setting where a transmitter communicates with multiple receivers.
A multi-task deep learning approach is presented for joint optimization of completing multiple tasks and communicating with multiple receivers.
arXiv Detail & Related papers (2023-08-14T01:34:34Z) - Model-free Reinforcement Learning of Semantic Communication by Stochastic Policy Gradient [9.6403215177092]
The idea of semantic communication, proposed by Weaver in 1949, has gained renewed attention.
We apply the Stochastic Policy Gradient (SPG) to design a semantic communication system.
We derive the use of both classic and semantic communication from the mutual information between received and target variables.
arXiv Detail & Related papers (2023-05-05T14:27:58Z) - Curriculum Learning for Goal-Oriented Semantic Communications with a
Common Language [60.85719227557608]
A holistic goal-oriented semantic communication framework is proposed to enable a speaker and a listener to cooperatively execute a set of sequential tasks.
A common language based on a hierarchical belief set is proposed to enable semantic communications between speaker and listener.
An optimization problem is defined to determine the perfect and abstract description of the events.
arXiv Detail & Related papers (2022-04-21T22:36:06Z) - FCMNet: Full Communication Memory Net for Team-Level Cooperation in
Multi-Agent Systems [15.631744703803806]
We introduce FCMNet, a reinforcement learning based approach that allows agents to simultaneously learn an effective multi-hop communications protocol.
Using a simple multi-hop topology, we endow each agent with the ability to receive information sequentially encoded by every other agent at each time step.
FCMNet outperforms state-of-the-art communication-based reinforcement learning methods in all StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2022-01-28T09:12:01Z) - Common Language for Goal-Oriented Semantic Communications: A Curriculum
Learning Framework [66.81698651016444]
A comprehensive semantic communications framework is proposed for enabling goal-oriented task execution.
A novel top-down framework that combines curriculum learning (CL) and reinforcement learning (RL) is proposed to solve this problem.
Simulation results show that the proposed CL method outperforms traditional RL in terms of convergence time, task execution time, and transmission cost during training.
arXiv Detail & Related papers (2021-11-15T19:13:55Z) - Minimizing Communication while Maximizing Performance in Multi-Agent
Reinforcement Learning [5.612141846711729]
Inter-agent communication can significantly increase performance in multi-agent tasks that require coordination.
In real-world applications, where communication may be limited by system constraints like bandwidth, power and network capacity, one might need to reduce the number of messages that are sent.
We show that we can reduce communication by 75% with no loss of performance.
arXiv Detail & Related papers (2021-06-15T23:13:51Z) - Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.