Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The
Case of Distributed Linear Regression Problem
- URL: http://arxiv.org/abs/2101.10967v1
- Date: Tue, 26 Jan 2021 17:51:49 GMT
- Authors: Kushal Chakrabarti, Nirupam Gupta and Nikhil Chopra
- Abstract summary: In noise-free systems, the recently proposed distributed linear regression algorithm, named the Iteratively Pre-conditioned Gradient-descent (IPG) method, has been claimed to converge faster than related methods.
We empirically show that the robustness of the IPG method compares favorably to the state-of-the-art algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the problem of multi-agent distributed linear regression
in the presence of system noises. In this problem, the system comprises
multiple agents wherein each agent locally observes a set of data points, and
the agents' goal is to compute a linear model that best fits the collective
data points observed by all the agents. We consider a server-based distributed
architecture where the agents interact with a common server to solve the
problem; however, the server cannot access the agents' data points. We consider
a practical scenario wherein the system either has observation noise, i.e., the
data points observed by the agents are corrupted, or has process noise, i.e.,
the computations performed by the server and the agents are corrupted. In
noise-free systems, the recently proposed distributed linear regression
algorithm, named the Iteratively Pre-conditioned Gradient-descent (IPG) method,
has been claimed to converge faster than related methods. In this paper, we
study the robustness of the IPG method, against both the observation noise and
the process noise. We empirically show that the robustness of the IPG method
compares favorably to the state-of-the-art algorithms.
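The server-based setup described in the abstract can be illustrated with a minimal single-process sketch of an iteratively pre-conditioned gradient step in the noise-free case. All dimensions, step sizes, and variable names below are hypothetical choices for illustration, not the authors' implementation or tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 3 agents, each holding a local block (A_i, b_i) of a
# collectively observed linear system; the server never sees the raw data.
n_agents, rows_per_agent, dim = 3, 20, 5
A_parts = [rng.standard_normal((rows_per_agent, dim)) for _ in range(n_agents)]
x_true = rng.standard_normal(dim)
b_parts = [A @ x_true for A in A_parts]   # noise-free observations

alpha, delta = 0.01, 0.015   # hypothetical step sizes, not the paper's tuning
x = np.zeros(dim)            # server's current estimate
K = np.eye(dim)              # pre-conditioner, refined iteratively

for _ in range(2000):
    # Agents report local gradients A_i^T (A_i x - b_i); the server sums them.
    grad = sum(A.T @ (A @ x - b) for A, b in zip(A_parts, b_parts))
    # Agents also report A_i^T A_i K; the server nudges K toward the
    # inverse of the aggregate Hessian sum_i A_i^T A_i.
    HK = sum(A.T @ (A @ K) for A in A_parts)
    K = K - alpha * (HK - np.eye(dim))
    # Pre-conditioned gradient step.
    x = x - delta * (K @ grad)
```

In this sketch the pre-conditioner update and the estimate update run simultaneously, which is the distinctive feature of the iteratively pre-conditioned scheme; the noise-corrupted variants studied in the paper add perturbations to the observations or to these computations.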
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Interacting Particle Systems on Networks: joint inference of the network
and the interaction kernel [8.535430501710712]
We infer both the weight matrix of the network and the interaction kernel that determines the rules of interaction between agents.
We use two algorithms, one of which is a new method named operator regression with alternating least squares.
Both algorithms are scalable, under conditions guaranteeing identifiability and well-posedness.
arXiv Detail & Related papers (2024-02-13T12:29:38Z) - Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
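The rotate-then-subsample idea can be sketched in a few lines for a noise-free least-squares system. The sizes, the seed, and the use of a dense QR-based rotation are illustrative assumptions; the paper's actual secure, structured sketches differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined noise-free system A x = b (sizes are illustrative).
n, d, m = 256, 8, 64   # m: number of rows kept after subsampling
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

# Step 1: randomly rotate the basis of the system of equations.
# A dense QR-based rotation stands in for the paper's structured sketch.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A_rot, b_rot = Q @ A, Q @ b

# Step 2: subsample a block of rows and solve the reduced regression.
idx = rng.choice(n, size=m, replace=False)
x_hat, *_ = np.linalg.lstsq(A_rot[idx], b_rot[idx], rcond=None)
```

The rotation spreads the information of every original equation across all rows, so a uniform block subsample remains representative while individual rows no longer reveal individual data points.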
arXiv Detail & Related papers (2023-08-08T11:10:42Z) - Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-armed contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove the regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z) - Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound asymptotically, and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
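As a toy illustration of the local-vs-global distinction (signal, noise level, window width, and basis order are all hypothetical choices, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy state measurements of a smooth signal (all choices hypothetical).
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)

# Local smoother: moving average over a 9-sample neighborhood
# (note the zero-padded edges of mode="same").
w = 9
local = np.convolve(noisy, np.ones(w) / w, mode="same")

# Global smoother: least-squares fit of a low-order Chebyshev basis
# to the entire measurement record.
V = np.polynomial.chebyshev.chebvander(2 * t - 1, 15)
coef, *_ = np.linalg.lstsq(V, noisy, rcond=None)
fit_global = V @ coef

err_local = np.sqrt(np.mean((local - clean) ** 2))
err_global = np.sqrt(np.mean((fit_global - clean) ** 2))
```

On this example the global fit incurs less bias at points of high curvature and avoids the window's edge artifacts, in line with the summary's conclusion.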
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - On Accelerating Distributed Convex Optimizations [0.0]
This paper studies a distributed multi-agent convex optimization problem.
We show that the proposed algorithm converges linearly, with a faster rate of convergence than the traditional and adaptive gradient-descent methods.
We demonstrate our algorithm's superior performance compared to prominent distributed algorithms for solving real-world logistic regression problems.
arXiv Detail & Related papers (2021-08-19T13:19:54Z) - Distributed Algorithms for Linearly-Solvable Optimal Control in
Networked Multi-Agent Systems [15.782670973813774]
A distributed framework is proposed to partition the optimal control problem of a networked MAS into several local optimal control problems.
For discrete-time systems, the joint Bellman equation of each subsystem is transformed into a system of linear equations.
For continuous-time systems, the joint optimality equation of each subsystem is converted into a linear partial differential equation.
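For intuition about why these optimality conditions become linear, here is a single-agent toy, not the paper's networked multi-agent construction: in a first-exit linearly-solvable MDP, the desirability z = exp(-v) satisfies a linear equation at interior states. All states, costs, and dynamics below are hypothetical:

```python
import numpy as np

# Single-agent toy: first-exit linearly-solvable MDP on a 6-state chain.
# States 0 and 5 are absorbing boundaries; the passive dynamics is an
# unbiased random walk; q is the running state cost (values hypothetical).
n = 6
P = np.zeros((n, n))
for s in range(1, n - 1):
    P[s, s - 1] = P[s, s + 1] = 0.5
q = np.full(n, 0.1)

# The desirability z(s) = exp(-v(s)) satisfies z = exp(-q) * (P z) at
# interior states, with z fixed on the boundary: a linear system in z.
z = np.ones(n)
z[0], z[-1] = np.exp(-0.0), np.exp(-2.0)   # boundary exit costs 0 and 2
interior = np.arange(1, n - 1)

G = np.exp(-q[interior])[:, None] * P[interior, :]
lhs = np.eye(len(interior)) - G[:, interior]
rhs = G[:, [0, n - 1]] @ z[[0, -1]]
z[interior] = np.linalg.solve(lhs, rhs)
v = -np.log(z)   # optimal value function
```

The nonlinear Bellman minimization disappears because, in the linearly-solvable formulation, the exponential transform turns it into the fixed-point equation solved above.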
arXiv Detail & Related papers (2021-02-18T01:31:17Z) - Accelerating Distributed SGD for Linear Regression using Iterative
Pre-Conditioning [0.966840768820136]
The Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method is shown to converge faster than other existing distributed algorithms.
The IPSG method's convergence rate compares favorably to prominent algorithms for solving the linear least-squares problem in server-based networks.
arXiv Detail & Related papers (2020-11-15T18:09:13Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
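A generic consensus-ADMM sketch for a distributed least-squares instance conveys the basic update pattern; the paper's coded, stochastic, edge-computing variant is not modeled here, and all sizes and the penalty rho are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Four edge nodes, each with a local block (A_i, b_i) of a shared
# least-squares model (sizes and penalty rho are hypothetical).
n_nodes, rows, dim, rho = 4, 15, 3, 10.0
A = [rng.standard_normal((rows, dim)) for _ in range(n_nodes)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true for Ai in A]

x = [np.zeros(dim) for _ in range(n_nodes)]   # local primal variables
u = [np.zeros(dim) for _ in range(n_nodes)]   # scaled dual variables
z = np.zeros(dim)                             # consensus variable

for _ in range(300):
    # Local step: each node solves its own regularized least squares,
    # coupling to the consensus variable z through the penalty.
    for i in range(n_nodes):
        lhs = A[i].T @ A[i] + rho * np.eye(dim)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # Aggregation step: average the local variables (plus duals).
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    # Dual step: each node updates its disagreement with the consensus.
    for i in range(n_nodes):
        u[i] += x[i] - z
```

Only the local variables and duals are exchanged, never the raw data blocks, which is what makes the ADMM pattern attractive for the decentralized settings the abstract describes.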
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.