Communication-efficient Byzantine-robust distributed learning with
statistical guarantee
- URL: http://arxiv.org/abs/2103.00373v1
- Date: Sun, 28 Feb 2021 01:38:37 GMT
- Title: Communication-efficient Byzantine-robust distributed learning with
statistical guarantee
- Authors: Xingcai Zhou, Le Chang, Pengfei Xu and Shaogao Lv
- Abstract summary: Communication efficiency and robustness are two major issues in modern distributed learning frameworks.
This paper develops two communication-efficient and robust distributed learning algorithms for convex problems.
- Score: 2.8407238889624877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication efficiency and robustness are two major issues in modern
distributed learning frameworks, arising in practical situations where
some computing nodes may have limited communication power or may behave
adversarially. To address the two issues simultaneously, this paper
develops two communication-efficient and robust distributed learning algorithms
for convex problems. Our approach is based on the surrogate likelihood framework
combined with median and trimmed-mean operations. In particular, the proposed
algorithms are provably robust against Byzantine failures, and also achieve
optimal statistical rates for strongly convex losses and convex (non-smooth)
penalties. For typical statistical models such as generalized linear models,
our results show that statistical errors dominate optimization errors after a
finite number of iterations. Simulated and real-data experiments demonstrate
the numerical performance of our algorithms.
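To make the robust aggregation concrete, here is a minimal NumPy sketch of the coordinate-wise median and trimmed-mean aggregators that this line of work builds on. The function names, the trimming fraction `beta`, and the toy data are illustrative assumptions, not the authors' exact implementation.
```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of the workers' gradients.

    grads: (m, d) array with one d-dimensional gradient per worker.
    Robust as long as strictly fewer than half of the workers are Byzantine.
    """
    return np.median(grads, axis=0)

def coordinate_trimmed_mean(grads, beta=0.1):
    """Coordinate-wise beta-trimmed mean.

    In each coordinate, discard the beta-fraction of smallest and largest
    values, then average the remainder. Tolerates up to a beta-fraction
    of Byzantine workers.
    """
    m = grads.shape[0]
    k = int(np.floor(beta * m))            # values trimmed from each tail
    sorted_grads = np.sort(grads, axis=0)  # sort each coordinate separately
    return sorted_grads[k:m - k].mean(axis=0)

# Toy usage: 8 honest workers, 2 Byzantine workers sending huge values.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
byzantine = np.full((2, 5), 1e3)
grads = np.vstack([honest, byzantine])

print(coordinate_median(grads))                  # close to the honest mean
print(coordinate_trimmed_mean(grads, beta=0.2))  # outliers trimmed away
```
In a distributed update of the kind described in the abstract, such an aggregator would replace the plain average of worker gradients; as a rule of thumb, the trimmed mean needs `beta` to be at least the suspected Byzantine fraction, while the median needs an honest majority.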
Related papers
- Probabilistic Iterative Hard Thresholding for Sparse Learning [2.5782973781085383]
We present an approach towards solving expectation objective optimization problems with cardinality constraints.
We prove convergence of the underlying process, and demonstrate the performance on two Machine Learning problems.
arXiv Detail & Related papers (2024-09-02T18:14:45Z) - Asymptotically Efficient Online Learning for Censored Regression Models
Under Non-I.I.D Data [2.2446129622980227]
An efficient online learning problem is investigated for censored regression models.
A numerical example is provided to illustrate the superiority of the proposed online algorithm over the existing related ones in the literature.
arXiv Detail & Related papers (2023-09-18T03:28:48Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - Byzantine-Resilient Federated Learning at Edge [20.742023657098525]
We present a Byzantine-resilient descent algorithm that can handle heavy-tailed data.
We also propose an algorithm that incorporates costs during the learning process.
arXiv Detail & Related papers (2023-03-18T15:14:16Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - A Constraint-Based Algorithm for the Structural Learning of
Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z) - Statistically Guided Divide-and-Conquer for Sparse Factorization of
Large Matrix [2.345015036605934]
We formulate the statistical problem as a sparse factor regression and tackle it with a divide-and-conquer approach.
In the first stage of division, we consider both sequential and parallel approaches for simplifying the task into a set of co-sparse unit-rank estimation (CURE) problems.
In the second stage of division, we innovate a stagewise learning technique, consisting of a sequence of simple incremental updates, to efficiently trace out the whole solution paths of CURE.
arXiv Detail & Related papers (2020-03-17T19:12:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.