Learning Broadcast Protocols
- URL: http://arxiv.org/abs/2306.14284v2
- Date: Mon, 11 Dec 2023 21:26:29 GMT
- Title: Learning Broadcast Protocols
- Authors: Dana Fisman, Noa Izsak, Swen Jacobs
- Abstract summary: We look for the first time at the problem of learning a distributed system with an arbitrary number of processes.
We consider fine broadcast protocols: broadcast protocols with a finite cutoff and no hidden states.
We provide a learning algorithm that can infer a correct BP from a sample that is consistent with a fine BP, and a minimal equivalent BP if the sample is sufficiently complete.
- Score: 1.9336815376402723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of learning a computational model from examples has been
receiving growing attention. For the particularly challenging problem of
learning models of distributed systems, existing results are restricted to
models with a fixed number of interacting processes. In this work we look for
the first time (to the best of our knowledge) at the problem of learning a
distributed system with an arbitrary number of processes, assuming only that
there exists a cutoff, i.e., a number of processes that is sufficient to
produce all observable behaviors. Specifically, we consider fine broadcast
protocols, i.e., broadcast protocols (BPs) with a finite cutoff and no
hidden states. We provide a learning algorithm that can infer a correct BP from
a sample that is consistent with a fine BP, and a minimal equivalent BP if the
sample is sufficiently complete. On the negative side, we show that (a)
characteristic sets of exponential size are unavoidable, (b) the consistency
problem for fine BPs is NP-hard, and (c) fine BPs are not polynomially
predictable.
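To make the setting concrete, here is a minimal Python sketch of broadcast-protocol semantics and of a cutoff. It is a toy illustration, not the paper's learning algorithm: the concrete states and actions are invented, and the convention that processes without a matching receive stay put is an assumption.

```python
from collections import Counter

# Toy broadcast protocol (BP): all processes run the same finite-state
# machine. One process fires a send transition "a!!"; at the same time,
# every other process whose state has a matching receive "a??" takes it.
# Processes without a matching receive stay put (an assumed convention).
SEND = {("init", "a"): "work", ("done", "b"): "done"}   # via a!!
RECV = {("init", "a"): "done"}                          # via a??

def step(config):
    """Yield (action, successor configuration) for each enabled broadcast."""
    for (state, action), target in SEND.items():
        if config[state] == 0:
            continue
        nxt = config.copy()
        nxt[state] -= 1                       # sender leaves its state first
        moved = Counter()
        for s in list(nxt):                   # all receivers react at once
            if (s, action) in RECV:
                moved[RECV[(s, action)]] += nxt[s]
                nxt[s] = 0
        nxt += moved                          # Counter '+' drops zero counts
        nxt[target] += 1                      # sender lands in its target
        yield action, nxt

def traces(n_processes, depth=3):
    """All action sequences of length <= depth observable with n processes."""
    frontier = [((), Counter({"init": n_processes}))]
    words = set()
    for _ in range(depth):
        frontier = [(w + (a,), c) for w, cfg in frontier
                    for a, c in step(cfg)]
        words |= {w for w, _ in frontier}
    return words

for n in (1, 2, 3):
    print(n, sorted(traces(n)))
```

Running this prints identical trace sets for n = 2 and n = 3: this toy protocol has cutoff 2, the kind of property the paper's learning algorithm assumes and exploits.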
Related papers
- Circular Belief Propagation for Approximate Probabilistic Inference [0.07282584715927627]
Belief Propagation (BP) is a simple probabilistic inference algorithm, consisting of passing messages between nodes of a graph representing a probability distribution (a minimal sketch of this message passing follows this entry).
We propose Circular Belief Propagation (CBP), an extension of BP which limits the detrimental effects of message reverberation caused by cycles by learning to detect and cancel spurious correlations and belief amplifications.
We show in numerical experiments involving binary probabilistic graphs that CBP far outperforms BP and reaches good performance compared to that of previously proposed algorithms.
arXiv Detail & Related papers (2024-03-17T15:59:39Z)
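Several entries in this list build on plain belief propagation. As context, here is a minimal sum-product sketch on a chain of binary variables, where BP is exact; the chain structure and random potentials are illustrative assumptions, and this is standard BP, not the CBP variant above.

```python
import numpy as np

# Sum-product belief propagation on a chain x1 - x2 - ... - xn of binary
# variables: unary potentials phi[i], and pairwise potentials psi[i]
# between x_i and x_{i+1}. A chain has no cycles, so plain BP is exact.
n = 4
rng = np.random.default_rng(0)
phi = rng.uniform(0.5, 1.5, size=(n, 2))         # unary potentials
psi = rng.uniform(0.5, 1.5, size=(n - 1, 2, 2))  # pairwise potentials

# Forward pass: m_f[i] is the message sent into x_i from its left neighbor.
m_f = np.ones((n, 2))
for i in range(n - 1):
    m_f[i + 1] = (phi[i] * m_f[i]) @ psi[i]      # sum over x_i
    m_f[i + 1] /= m_f[i + 1].sum()               # normalize for stability

# Backward pass: m_b[i] is the message sent into x_i from its right neighbor.
m_b = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    m_b[i] = psi[i] @ (phi[i + 1] * m_b[i + 1])  # sum over x_{i+1}
    m_b[i] /= m_b[i].sum()

# Node beliefs: local potential times all incoming messages, normalized.
beliefs = phi * m_f * m_b
beliefs /= beliefs.sum(axis=1, keepdims=True)
print(beliefs)  # exact marginals, since the graph is a tree
```

On graphs with cycles the same message updates are iterated to a fixed point ("loopy" BP), which is where reverberation effects like those CBP targets arise.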
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems [24.63675651321079]
Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models.
We propose a novel self-supervised learning algorithm for deep attentive belief propagation (DABP) with a smoothed solution cost.
Our model significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2022-09-24T13:03:46Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Predictive Coding Can Do Exact Backpropagation on Convolutional and Recurrent Neural Networks [40.51949948934705]
Predictive coding networks (PCNs) are an influential model for information processing in the brain.
Backpropagation (BP) is commonly regarded as the most successful learning method in modern machine learning.
We show that a biologically plausible algorithm is able to exactly replicate the accuracy of BP on complex architectures.
arXiv Detail & Related papers (2021-03-05T14:57:01Z)
- Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs).
BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods.
arXiv Detail & Related papers (2020-07-01T07:39:51Z)
- $\alpha$ Belief Propagation for Approximate Inference [16.258304175469917]
The belief propagation (BP) algorithm is a widely used message-passing method for inference in graphical models.
We derive an interpretable belief propagation algorithm that is motivated by minimization of a localized $\alpha$-divergence (one common form of this divergence is recalled after this entry).
It turns out that $\alpha$-BP generalizes standard BP.
arXiv Detail & Related papers (2020-06-27T13:32:06Z)
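For reference, here is one common parametrization of the $\alpha$-divergence; the normalization convention is an assumption, and the paper may use a different variant:

$$D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right)$$

In this form, $D_\alpha$ recovers $\mathrm{KL}(p\,\|\,q)$ as $\alpha \to 1$ and $\mathrm{KL}(q\,\|\,p)$ as $\alpha \to 0$, so a single parameter interpolates between the two classical projections.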
- Sample-Efficient Reinforcement Learning of Undercomplete POMDPs [91.40308354344505]
This work shows that known hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of Partially Observable Markov Decision Processes (POMDPs).
We present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.
arXiv Detail & Related papers (2020-06-22T17:58:54Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level (the known minimax rate is recalled after this entry).
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
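For context, the minimax rate in this setting is well known: with a generative model, learning an $\varepsilon$-optimal policy in a $\gamma$-discounted MDP requires on the order of

$$\widetilde{\Theta}\!\left(\frac{|\mathcal{S}|\,|\mathcal{A}|}{(1-\gamma)^{3}\,\varepsilon^{2}}\right)$$

samples, up to logarithmic factors. The paper's contribution concerns attaining this rate with a plain model-based algorithm; its exact statement (e.g., the admissible range of $\varepsilon$) may differ from this summary.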
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable, and provides parameter-efficient and robust solutions in stereo, optical flow, and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.