Distributed Hypothesis Testing and Social Learning in Finite Time with a Finite Amount of Communication
- URL: http://arxiv.org/abs/2004.01306v1
- Date: Thu, 2 Apr 2020 23:38:13 GMT
- Title: Distributed Hypothesis Testing and Social Learning in Finite Time with a Finite Amount of Communication
- Authors: Shreyas Sundaram and Aritra Mitra
- Abstract summary: We consider the problem of distributed hypothesis testing (or social learning), in which a network of agents seeks to identify the true state of the world from a finite set of hypotheses.
We show that if the agents know the diameter of the network, our algorithm can be further modified to allow all agents to learn the true state.
- Score: 1.9199742103141069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of distributed hypothesis testing (or social
learning) where a network of agents seeks to identify the true state of the
world from a finite set of hypotheses, based on a series of stochastic signals
that each agent receives. Prior work on this problem has provided distributed
algorithms that guarantee asymptotic learning of the true state, with
corresponding efforts to improve the rate of learning. In this paper, we first
argue that one can readily modify existing asymptotic learning algorithms to
enable learning in finite time, effectively yielding arbitrarily large
(asymptotic) rates. We then provide a simple algorithm for finite-time learning
which only requires the agents to exchange a binary vector (of length equal to
the number of possible hypotheses) with their neighbors at each time-step.
Finally, we show that if the agents know the diameter of the network, our
algorithm can be further modified to allow all agents to learn the true state
and stop transmitting to their neighbors after a finite number of time-steps.
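The abstract fixes the communication interface (a binary vector of length equal to the number of hypotheses, exchanged once per time-step, plus a diameter-based stopping condition) but not the internals. The toy sketch below is one plausible instantiation, not the paper's algorithm: the Bernoulli signal model, the likelihood-margin elimination test, and the quiescence-based stopping check are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_hyp, true_hyp = 4, 3, 0

# Line graph: agent i exchanges vectors with i-1 and i+1; diameter = n_agents-1.
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_agents]
             for i in range(n_agents)}
diameter = n_agents - 1

# Assumed signal model: under hypothesis h, agent i observes Bernoulli(p[i, h]).
p = rng.uniform(0.2, 0.8, size=(n_agents, n_hyp))

plausible = np.ones((n_agents, n_hyp), dtype=bool)  # the binary vector
loglik = np.zeros((n_agents, n_hyp))
quiet = np.zeros(n_agents, dtype=int)               # steps since last change

for t in range(1, 20_000):
    for i in range(n_agents):
        s = rng.random() < p[i, true_hyp]           # private stochastic signal
        loglik[i] += np.log(np.where(s, p[i], 1.0 - p[i]))
    # Local elimination (an assumption, not the paper's exact test): drop h
    # once its log-likelihood trails the local best by a growing margin.
    local = plausible & (loglik >= loglik.max(axis=1, keepdims=True)
                         - 0.05 * t - 5.0)
    # One communication round: AND your binary vector with your neighbors'.
    new = local.copy()
    for i in range(n_agents):
        for j in neighbors[i]:
            new[i] &= local[j]
    quiet = np.where((new == plausible).all(axis=1), quiet + 1, 0)
    plausible = new
    # Toy stopping rule: once every agent holds a single hypothesis and no
    # vector has changed for `diameter` steps, no elimination can still be
    # in flight, so transmissions can safely cease.
    if (plausible.sum(axis=1) == 1).all() and (quiet >= diameter).all():
        break

print(f"stopped at t={t}; surviving hypotheses per agent:",
      [np.flatnonzero(v).tolist() for v in plausible])
```

In this toy, an agent's bit for hypothesis h flips to 0 once h is ruled out somewhere and the AND diffusion has had time to reach it; knowing the diameter tells each agent how long "no change" must last before it can be sure no flip is still propagating.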
Related papers
- Probabilistic Verification of Neural Networks using Branch and Bound [3.0567348883549816]
Probabilistic verification of neural networks is concerned with formally analysing the output of a neural network under a probability distribution of the inputs.
We present a new branch-and-bound algorithm for the probabilistic verification of neural networks.
arXiv Detail & Related papers (2024-05-27T18:00:03Z)
- Scalable Decentralized Algorithms for Online Personalized Mean Estimation [12.002609934938224]
This study focuses on a simplified version of the overarching problem, where each agent collects samples from a real-valued distribution over time to estimate its mean.
We introduce two collaborative mean estimation algorithms: one draws inspiration from belief propagation, while the other employs a consensus-based approach.
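The consensus-based variant lends itself to a very small sketch. The version below assumes all agents share one common mean (the paper's personalized setting, where agents must also discover which peers share their distribution, is omitted) and simply alternates a mixing step with a decaying step toward each new sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, mu = 5, 2000, 3.0

# Ring graph with a doubly stochastic mixing matrix (lazy Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

est = np.zeros(n)  # each agent's running estimate of the common mean
for t in range(1, steps + 1):
    x = rng.normal(mu, 1.0, size=n)  # one private sample per agent per round
    est = W @ est                    # consensus: average with neighbors
    est += (x - est) / t             # innovation: decaying step toward sample

print("estimates:", np.round(est, 3), "true mean:", mu)
```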
arXiv Detail & Related papers (2024-02-20T08:30:46Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Doubly Inhomogeneous Reinforcement Learning [4.334006170547247]
We propose an original algorithm to determine the "best data chunks" that display similar dynamics over time and across individuals for policy learning.
Our method is general, and works with a wide range of clustering and change point detection algorithms.
arXiv Detail & Related papers (2022-11-08T03:41:14Z)
- Federated Best Arm Identification with Heterogeneous Clients [62.36929749450298]
We study best arm identification in a federated multi-armed bandit setting with a central server and multiple clients.
We call an algorithm almost-optimal if its upper bound on the expected stopping time matches the lower bound up to a multiplicative constant.
We propose a novel algorithm that communicates at exponential time instants, and demonstrate that it is almost-optimal.
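Communicating "at exponential time instants" means syncs happen at geometrically spaced rounds, so only O(log T) communication rounds occur over a horizon of T time steps. A sketch of just this schedule (the bandit sampling and stopping logic is not reproduced here):

```python
def exponential_schedule(horizon: int, base: float = 2.0):
    """Yield the rounds at which clients sync with the server: t = 1, 2, 4,
    8, ..., so only O(log T) communication rounds occur over a horizon T."""
    t = 1
    while t <= horizon:
        yield t
        t = max(t + 1, int(t * base))

rounds = list(exponential_schedule(10**6))
print(len(rounds), rounds[:8])  # 20 rounds over a million time steps
```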
arXiv Detail & Related papers (2022-10-14T13:09:11Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Distributed Inference with Sparse and Quantized Communication [7.155594644943642]
We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state.
We develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis.
We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis.
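The enabling idea is that sender and receiver can refine a quantizer's range in lockstep, so one bit per round eventually conveys a real-valued belief to arbitrary precision. The toy below shows only that range-refinement mechanism for a static scalar; the paper's actual rule is event-triggered and tracks time-varying beliefs:

```python
def encode_bits(value, rounds, lo=0.0, hi=1.0):
    """Send a scalar in [lo, hi] one bit at a time; sender and receiver both
    halve the quantizer's range each round (toy sequential refinement)."""
    bits = []
    for _ in range(rounds):
        mid = 0.5 * (lo + hi)
        bit = value >= mid
        bits.append(int(bit))
        lo, hi = (mid, hi) if bit else (lo, mid)
    return bits

def decode_bits(bits, lo=0.0, hi=1.0):
    """Receiver-side mirror: replay the same range refinement."""
    for bit in bits:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bit else (lo, mid)
    return 0.5 * (lo + hi)

bits = encode_bits(0.8137, rounds=16)  # 16 one-bit messages
print(decode_bits(bits))               # ~0.8137 recovered
```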
arXiv Detail & Related papers (2020-04-02T23:08:51Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
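For reference, the update rule in question is the standard tabular one with a constant step-size; a textbook Q-learning step looks like this (a generic form, not code from the paper):

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update with a constant step-size alpha.  With
    constant alpha the iterates settle into a stationary *distribution*
    rather than a point, which is what a distributional analysis studies."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((4, 2))  # 4 states, 2 actions
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=2)
```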
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games [116.0771177871705]
We characterize the finite-time last-iterate convergence rate for joint OGD learning on $\lambda$-cocoercive games.
We show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart.
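For context, the underlying update is plain online gradient descent (OGD) run simultaneously by every player. A toy two-player quadratic game (chosen so the joint gradient field is cocoercive, as the paper assumes) shows the last iterate converging:

```python
import numpy as np

# Toy two-player game with quadratic costs: the joint gradient field F is
# cocoercive, so simultaneous OGD's *last iterate* converges.
def F(x):
    return np.array([2.0 * (x[0] - 1.0),   # player 1's partial gradient
                     2.0 * (x[1] + 1.0)])  # player 2's partial gradient

x, eta = np.array([5.0, -5.0]), 0.1
for _ in range(200):
    x = x - eta * F(x)                     # one simultaneous OGD step

print(x)  # close to the equilibrium (1, -1)
```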
arXiv Detail & Related papers (2020-02-23T01:46:34Z)
- Statistical Limits of Supervised Quantum Learning [90.0289160657379]
We show that if the bound on the accuracy is taken into account, quantum machine learning algorithms for supervised learning cannot achieve polylogarithmic runtimes in the input dimension.
We conclude that, when no further assumptions on the problem are made, quantum machine learning algorithms for supervised learning can have at most polynomial speedups over efficient classical algorithms.
arXiv Detail & Related papers (2020-01-28T17:35:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.