Communication-Efficient Distributed Cooperative Learning with Compressed
Beliefs
- URL: http://arxiv.org/abs/2102.07767v1
- Date: Sun, 14 Feb 2021 06:19:36 GMT
- Title: Communication-Efficient Distributed Cooperative Learning with Compressed
Beliefs
- Authors: Mohammad Taha Toghani, Cesar A. Uribe
- Abstract summary: We study the problem of distributed cooperative learning, where a group of agents seek to agree on a set of hypotheses.
In the scenario where the set of hypotheses is large, we propose a belief update rule where agents share compressed (either sparse or quantized) beliefs with an arbitrary positive compression rate.
- Score: 0.8702432681310401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of distributed cooperative learning, where a group of
agents seek to agree on a set of hypotheses that best describes a sequence of
private observations. In the scenario where the set of hypotheses is large, we
propose a belief update rule where agents share compressed (either sparse or
quantized) beliefs with an arbitrary positive compression rate. Our algorithm
leverages a unified and straightforward communication rule that enables agents
to access wide-ranging compression operators as black-box modules. We prove the
almost sure asymptotic exponential convergence of beliefs around the set of
optimal hypotheses. Additionally, we show a non-asymptotic, explicit, and
linear concentration rate in probability of the beliefs on the optimal
hypothesis set. We provide numerical experiments to illustrate the
communication benefits of our method. The simulation results show that the
number of transmitted bits can be reduced to 5-10% of that required by the non-compressed method
in the studied scenarios.
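To make the scheme concrete, here is a minimal sketch of a compressed belief update in Python. It is an illustration under simplifying assumptions, not the authors' exact rule: `top_k` stands in for an arbitrary black-box compression operator, and the update mixes neighbors' compressed log-beliefs before a local Bayesian step (the function and variable names are ours).

```python
import numpy as np

def top_k(x, k):
    """Black-box sparsifying compressor: keep the k largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def belief_update(beliefs, likelihoods, weights, k):
    """One round of a compressed cooperative-learning update (sketch only).

    beliefs:     (n_agents, n_hypotheses), rows sum to 1
    likelihoods: (n_agents, n_hypotheses), P(obs_i | hypothesis)
    weights:     (n_agents, n_agents) doubly stochastic mixing matrix
    k:           number of entries each agent actually transmits
    """
    log_b = np.log(beliefs + 1e-12)
    # Each agent shares only a compressed version of its log-belief.
    shared = np.stack([top_k(row, k) for row in log_b])
    # Mix neighbors' compressed beliefs, then take a local Bayesian step.
    new_log = weights @ shared + np.log(likelihoods + 1e-12)
    new_b = np.exp(new_log - new_log.max(axis=1, keepdims=True))
    return new_b / new_b.sum(axis=1, keepdims=True)
```

With k set to 5-10% of the hypothesis count, the per-round payload shrinks proportionally, which matches the regime reported above; the paper's actual rule additionally controls the error introduced by compression.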
Related papers
- Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We leverage an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
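For orientation, the following is a standard split conformal prediction baseline (a generic textbook construction, not this paper's conditionally valid method); the paper's contribution is to improve the conditional coverage of sets built along these lines.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals (generic sketch).

    model must expose .predict; the returned intervals cover y_test with
    probability >= 1 - alpha marginally, under exchangeability.
    """
    # Nonconformity scores on a held-out calibration split.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    preds = model.predict(X_test)
    return preds - q, preds + q
```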
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
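In standard notation (our rendering of the usual formulation, with $d$ the VC dimension of the hypothesis class), the objective and the stated sample complexity read:

```latex
\min_{h \in \mathcal{H}} \max_{i \in [k]}
  \mathbb{E}_{z \sim \mathcal{D}_i}\big[\ell(h, z)\big],
\qquad
n = \widetilde{O}\!\left(\frac{d + k}{\varepsilon^2}\right).
```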
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- Intervention Generalization: A View from Factor Graph Models [7.117681268784223]
We take a close look at how to warrant a leap from past experiments to novel conditions based on minimal assumptions about the factorization of the distribution of the manipulated system.
A postulated $\textit{interventional factor model}$ (IFM) may not always be informative, but it conveniently abstracts away a need for explicitly modeling unmeasured confounding and feedback mechanisms.
arXiv Detail & Related papers (2023-06-06T21:44:23Z)
- Data Association Aware POMDP Planning with Hypothesis Pruning Performance Guarantees [7.928094304325113]
We introduce a pruning-based approach for planning with ambiguous data associations.
Our key contribution is to derive bounds between the value function based on the complete set of hypotheses and the value function based on a pruned subset of the hypotheses.
We demonstrate how these bounds can be used to certify any pruning in retrospect, and we propose a novel approach for deciding which hypotheses to prune so as to guarantee a predefined limit on the loss.
arXiv Detail & Related papers (2023-03-03T18:35:01Z)
- Communication-Efficient Distributed Estimation and Inference for Cox's Model [4.731404257629232]
We develop communication-efficient iterative distributed algorithms for estimation and inference in the high-dimensional sparse Cox proportional hazards model.
To construct confidence intervals for linear combinations of high-dimensional hazard regression coefficients, we introduce a novel debiased method.
We provide valid and powerful distributed hypothesis tests for any coordinate element based on a decorrelated score test.
arXiv Detail & Related papers (2023-02-23T15:50:17Z)
- Compression, Generalization and Learning [3.045851438458641]
A compression function is a map that slims down an observational set into a subset of reduced size.
In multiple applications, the event that one new observation changes the compressed set is interpreted as that observation carrying extra information.
In this paper, we lay the foundations of a new theory that allows one to keep control over the probability of change of compression.
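A toy instance of a compression function (our illustration, not from the paper): compress a sample of reals to its minimum and maximum. A new observation changes the compressed set exactly when it falls outside the current range, and for i.i.d. continuous data that happens with probability 2/(n+1), which the Monte Carlo check below reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(sample):
    """Toy compression function: slim the sample down to its extremes."""
    return min(sample), max(sample)

def prob_of_change(n, trials=100_000):
    """Estimate the probability that one new observation changes the
    compressed set; for exchangeable continuous data it equals 2/(n+1)."""
    changes = 0
    for _ in range(trials):
        lo, hi = compress(rng.uniform(size=n))
        x = rng.uniform()
        changes += (x < lo) or (x > hi)
    return changes / trials

print(prob_of_change(19))  # approximately 0.1 = 2/(19+1)
```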
arXiv Detail & Related papers (2023-01-30T10:27:45Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman-Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Distributed Inference with Sparse and Quantized Communication [7.155594644943642]
We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state.
We develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis.
We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis.
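As a rough sketch of the 1-bit-per-hypothesis idea (hypothetical names and a simplified rule; the paper's event-triggered scheme also refines the quantizer range over time, which is omitted here): each agent broadcasts, for every hypothesis, one bit flagging whether its belief has fallen below a threshold, and receivers diffuse those low beliefs.

```python
import numpy as np

def encode_beliefs(beliefs, threshold):
    """1 bit per hypothesis: flag beliefs that dropped below threshold."""
    return beliefs < threshold  # boolean array, one bit per hypothesis

def diffuse_low_beliefs(local_beliefs, neighbor_bits, factor=0.5):
    """If any neighbor flags a hypothesis as low, shrink the local belief
    on it and renormalize (simplified stand-in for the actual update)."""
    flagged = np.any(neighbor_bits, axis=0)   # (n_hypotheses,)
    b = np.where(flagged, local_beliefs * factor, local_beliefs)
    return b / b.sum()
```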
arXiv Detail & Related papers (2020-04-02T23:08:51Z)