Distributed Inference with Sparse and Quantized Communication
- URL: http://arxiv.org/abs/2004.01302v4
- Date: Mon, 7 Jun 2021 17:45:35 GMT
- Title: Distributed Inference with Sparse and Quantized Communication
- Authors: Aritra Mitra, John A. Richards, Saurabh Bagchi and Shreyas Sundaram
- Abstract summary: We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state.
We develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis.
We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis.
- Score: 7.155594644943642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of distributed inference where agents in a network
observe a stream of private signals generated by an unknown state, and aim to
uniquely identify this state from a finite set of hypotheses. We focus on
scenarios where communication between agents is costly, and takes place over
channels with finite bandwidth. To reduce the frequency of communication, we
develop a novel event-triggered distributed learning rule that is based on the
principle of diffusing low beliefs on each false hypothesis. Building on this
principle, we design a trigger condition under which an agent broadcasts only
those components of its belief vector that have adequate innovation, to only
those neighbors that require such information. We prove that our rule
guarantees convergence to the true state exponentially fast almost surely
despite sparse communication, and that it has the potential to significantly
reduce information flow from uninformative agents to informative agents. Next,
to deal with finite-precision communication channels, we propose a distributed
learning rule that leverages the idea of adaptive quantization. We show that by
sequentially refining the range of the quantizers, every agent can learn the
truth exponentially fast almost surely, while using just $1$ bit to encode its
belief on each hypothesis. For both our proposed algorithms, we rigorously
characterize the trade-offs between communication-efficiency and the learning
rate.
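The event-triggered rule is only described at a high level above. As a rough illustration of the principle of diffusing low beliefs on each false hypothesis, the Python sketch below combines a local Bayesian update, a min-rule fusion over the most recently received neighbor beliefs, and a per-component trigger that broadcasts a belief entry only when it has drifted sufficiently from the last transmitted value. The min-rule fusion, the geometrically decaying threshold `c * rho**t`, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bayes_update(belief, likelihoods):
    """Local Bayesian update of agent i's belief vector over the
    hypotheses, given the likelihoods l_i(s_t | theta) of the newly
    observed private signal s_t."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

def has_innovation(current, last_sent, t, c=1.0, rho=0.9):
    """Illustrative trigger: a belief component is re-broadcast only
    when it has moved away from the last transmitted value by more
    than a geometrically decaying threshold ('adequate innovation')."""
    return np.abs(np.log(current) - np.log(last_sent)) > c * rho ** t

def fuse(own_belief, neighbor_beliefs):
    """Diffuse *low* beliefs on false hypotheses: take the entrywise
    minimum over one's own belief and the latest beliefs received
    from neighbors, then renormalize."""
    stacked = np.vstack([own_belief] + neighbor_beliefs)
    fused = stacked.min(axis=0)
    return fused / fused.sum()
```

In the paper, the trigger condition and the fusion step are designed jointly so that exponential almost-sure convergence to the true state survives the sparsified communication; the sketch only conveys the overall shape of such a rule.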
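The adaptive-quantization scheme is likewise only summarized above. The sketch below illustrates the generic mechanism behind sequentially refining the range of a quantizer: encoder and decoder maintain identical copies of an interval, each transmitted bit reports which half of the interval the value falls in, and both sides shrink their interval to that half, so a single bit per hypothesis per round suffices. The class, its halving schedule, and the choice to quantize the raw belief value are assumptions for illustration, not the paper's construction.

```python
class OneBitQuantizer:
    """Illustrative adaptive 1-bit quantizer. Encoder and decoder each
    hold an identical copy and stay synchronized, since both refine
    the interval [lo, hi] from the same transmitted bit."""

    def __init__(self, lo=0.0, hi=1.0):
        self.lo, self.hi = lo, hi

    def encode(self, x):
        """Emit 1 if x lies in the upper half of the current range,
        else 0, then refine the range to that half."""
        bit = int(x >= 0.5 * (self.lo + self.hi))
        self._refine(bit)
        return bit

    def decode(self, bit):
        """Refine the range from the received bit and reconstruct the
        value as the midpoint of the refined interval."""
        self._refine(bit)
        return 0.5 * (self.lo + self.hi)

    def _refine(self, bit):
        mid = 0.5 * (self.lo + self.hi)
        if bit:
            self.lo = mid
        else:
            self.hi = mid
```

For a fixed value x in [0, 1], repeatedly feeding enc.encode(x) into dec.decode(...) halves the reconstruction error every round, which mirrors how a sequentially refined quantizer can keep pace with beliefs that themselves converge exponentially.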
Related papers
- Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z)
- Distributed Event-Based Learning via ADMM [11.461617927469316]
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network.
Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents.
arXiv Detail & Related papers (2024-05-17T08:30:28Z)
- Collaborative Optimization of the Age of Information under Partial Observability [34.43476648472727]
Age of Information (AoI) measures the freshness of sensor and control data at the receiver side.
We devise a decentralized AoI-minimizing transmission policy for a number of sensor agents sharing capacity-limited, non-FIFO duplex channels.
We also leverage mean-field control approximations and reinforcement learning to derive scalable and optimal solutions.
arXiv Detail & Related papers (2023-12-20T12:34:54Z)
- Rate-Distortion-Perception Theory for Semantic Communication [73.04341519955223]
We study the achievable data rate of semantic communication under the symbol distortion and semantic perception constraints.
We observe that there exist cases in which the receiver can directly infer the semantic information source while satisfying certain distortion and perception constraints.
arXiv Detail & Related papers (2023-12-09T02:04:32Z)
- Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z)
- Semantic Communication of Learnable Concepts [16.373044313375782]
We consider the problem of communicating a sequence of concepts, i.e., unknown and potentially stochastic maps, which can be observed only through examples.
The transmitter applies a learning algorithm to the available examples, and extracts knowledge from the data.
The transmitter then needs to communicate the learned models to a remote receiver through a rate-limited channel.
arXiv Detail & Related papers (2023-05-14T11:16:17Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- Robust Asynchronous and Network-Independent Cooperative Learning [1.712689361909955]
We consider the model of cooperative learning via distributed non-Bayesian learning, where a network of agents tries to jointly agree on a hypothesis.
We show that our proposed learning dynamics guarantee that all agents in the network will have an exponential decay of their beliefs on the wrong hypothesis.
arXiv Detail & Related papers (2020-10-20T03:54:20Z)
- Distributed Hypothesis Testing and Social Learning in Finite Time with a Finite Amount of Communication [1.9199742103141069]
We consider the problem of distributed hypothesis testing (or social learning), in which a network of agents seeks to identify the true state of the world from a finite set of hypotheses.
We show that if the agents know the diameter of the network, our algorithm can be further modified to allow all agents to learn the true state.
arXiv Detail & Related papers (2020-04-02T23:38:13Z)