A Resilient Distributed Boosting Algorithm
- URL: http://arxiv.org/abs/2206.04713v2
- Date: Mon, 13 Jun 2022 10:18:18 GMT
- Title: A Resilient Distributed Boosting Algorithm
- Authors: Yuval Filmus, Idan Mehalel and Shay Moran
- Abstract summary: We present a distributed boosting algorithm which is resilient to a limited amount of noise.
Our algorithm is similar to classical boosting algorithms, although it is equipped with a new component, inspired by Impagliazzo's hard-core lemma [Impagliazzo95], which adds robustness to the algorithm.
- Score: 15.658216451225153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a learning task where the data is distributed among several parties,
communication is one of the fundamental resources which the parties would like
to minimize. We present a distributed boosting algorithm which is resilient to
a limited amount of noise. Our algorithm is similar to classical boosting
algorithms, although it is equipped with a new component, inspired by
Impagliazzo's hard-core lemma [Impagliazzo95], adding a robustness quality to
the algorithm. We also complement this result by showing that resilience to any
asymptotically larger noise is not achievable by a communication-efficient
algorithm.
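For intuition, here is a minimal Python sketch of coordinator-style distributed boosting under stated assumptions: each party holds a labeled shard with labels in {-1,+1}, decision stumps act as weak learners, and only hypotheses and weighted-error scalars are exchanged. The function names are illustrative; this is plain AdaBoost over distributed data and omits the paper's hard-core-lemma-inspired noise-resilience component.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the decision stump (feature, threshold, sign) with the lowest
    weighted error on (X, y) under example weights w."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1.0, -1.0)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best[1:]

def stump_predict(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] > thr, 1.0, -1.0)

def distributed_boost(parties, rounds=20):
    """Vanilla AdaBoost run over data split among parties.

    Each party keeps the boosting weights of its own examples. Per round,
    every party proposes a stump trained locally, all parties report each
    candidate's weighted error on their shards, and the coordinator
    broadcasts the winning stump plus the step size alpha. No raw examples
    are communicated. (This omits the noise-resilient filtering of the paper.)
    """
    n_total = sum(len(y) for _, y in parties)
    weights = [np.ones(len(y)) / n_total for _, y in parties]
    ensemble = []
    for _ in range(rounds):
        candidates = [train_stump(X, y, w) for (X, y), w in zip(parties, weights)]
        # Coordinator: global weighted error of a candidate = sum of local reports.
        errs = [sum(w[stump_predict(c, X) != y].sum()
                    for (X, y), w in zip(parties, weights)) for c in candidates]
        best = candidates[int(np.argmin(errs))]
        err = float(np.clip(min(errs), 1e-12, 1 - 1e-12))
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, best))
        for (X, y), w in zip(parties, weights):   # local reweighting
            w *= np.exp(-alpha * y * stump_predict(best, X))
        Z = sum(w.sum() for w in weights)         # one scalar per party
        for w in weights:
            w /= Z
    return lambda X: np.sign(sum(a * stump_predict(h, X) for a, h in ensemble))
```

In this sketch each round costs every party one candidate stump and a few scalars, which illustrates why boosting-style protocols are attractive when communication is the scarce resource the abstract refers to.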
Related papers
- Resilience-Runtime Tradeoff Relations for Quantum Algorithms [0.7371081631199642]
A leading approach to algorithm design aims to minimize the number of operations in an algorithm's compilation.
We develop a framework to characterize the resilience of an algorithm to perturbative noises.
We show how this framework can be leveraged to identify compilations of an algorithm that are better suited to withstand certain noises.
arXiv Detail & Related papers (2024-08-05T18:31:14Z)
- Nonparametric Evaluation of Noisy ICA Solutions [5.749787074942513]
Independent Component Analysis (ICA) was introduced in the 1980s as a model for Blind Source Separation (BSS).
We develop a nonparametric score to adaptively pick the right algorithm for ICA with arbitrary Gaussian noise.
arXiv Detail & Related papers (2024-01-16T16:18:17Z)
- Federated Learning with a Sampling Algorithm under Isoperimetry [9.990687944474738]
Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices.
We propose a communication-efficient variant of Langevin's sample a posteriori.
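As background on what sampling a posteriori via Langevin dynamics looks like, here is a hedged sketch of the plain unadjusted Langevin algorithm; it is not the paper's communication-efficient federated variant, and the function names and step size are illustrative.

```python
import numpy as np

def langevin_sample(grad_log_post, theta0, step=1e-3, n_steps=5000, rng=None):
    """Unadjusted Langevin algorithm: repeat
        theta <- theta + step * grad_log_post(theta) + sqrt(2 * step) * N(0, I).
    After burn-in, the iterates are approximate samples from the posterior
    whose log-density gradient is grad_log_post."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
    return theta

# Toy check: for a standard Gaussian posterior, grad log p(theta) = -theta.
draw = langevin_sample(lambda t: -t, theta0=np.zeros(2))
```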
arXiv Detail & Related papers (2022-06-02T08:19:03Z)
- Adversarial Robustness of Streaming Algorithms through Importance Sampling [29.957317489789745]
We introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks.
Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness.
We empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust.
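To illustrate the importance-sampling idea in its simplest form, here is a generic Poisson-sampling/Horvitz-Thompson sketch (not the paper's streaming algorithms; the names and the constant c are illustrative): each item is kept independently with probability proportional to its own magnitude, and kept items are reweighted by the inverse inclusion probability.

```python
import random

def importance_sample(stream, c=0.1, rng=None):
    """Keep item v independently with probability p = min(1, c * |v|) and
    remember p; the inclusion decision depends only on v itself."""
    rng = rng or random.Random(0)
    kept = []
    for v in stream:
        p = min(1.0, c * abs(v))
        if p > 0.0 and rng.random() < p:
            kept.append((v, p))
    return kept

def estimate_sum(kept):
    """Horvitz-Thompson estimate: unbiased for the total sum of the stream."""
    return sum(v / p for v, p in kept)
```

Because each inclusion probability depends on the item alone, the inverse-probability estimate stays unbiased no matter how the rest of the stream is chosen, which is loosely the kind of robustness property the summary above alludes to.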
arXiv Detail & Related papers (2021-06-28T19:24:11Z)
- Community detection using fast low-cardinality semidefinite programming [94.4878715085334]
We propose a new low-cardinality algorithm that generalizes the local update to maximize a semidefinite relaxation derived from Leiden-k-cut.
The proposed algorithm is scalable, outperforms state-of-the-art algorithms, and does so on real-world data with little additional running time.
arXiv Detail & Related papers (2020-12-04T15:46:30Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Qubit-efficient entanglement spectroscopy using qubit resets [0.0]
We develop qubit-efficient quantum algorithms for entanglement spectroscopy on NISQ devices.
Our algorithms use fewer qubits than any previous efficient algorithm while achieving similar performance in the presence of noise.
We also introduce the notion of effective circuit depth as a generalization of standard circuit depth suitable for circuits with qubit resets.
arXiv Detail & Related papers (2020-10-06T23:22:57Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Differentially Private Clustering: Tight Approximation Ratios [57.89473217052714]
We give efficient differentially private algorithms for basic clustering problems.
Our results imply an improved algorithm for the Sample and Aggregate privacy framework.
One of the tools used in our 1-Cluster algorithm can be employed to get a faster quantum algorithm for ClosestPair in a moderate number of dimensions.
arXiv Detail & Related papers (2020-08-18T16:22:06Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)