Iterative Pre-Conditioning for Expediting the Gradient-Descent Method:
The Distributed Linear Least-Squares Problem
- URL: http://arxiv.org/abs/2008.02856v2
- Date: Fri, 6 Aug 2021 21:42:01 GMT
- Title: Iterative Pre-Conditioning for Expediting the Gradient-Descent Method:
The Distributed Linear Least-Squares Problem
- Authors: Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra
- Abstract summary: This paper considers the multi-agent linear least-squares problem in a server-agent network.
The goal for the agents is to compute a linear model that optimally fits the collective data points held by all the agents, without sharing their individual local data points.
We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method.
- Score: 0.966840768820136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the multi-agent linear least-squares problem in a
server-agent network. In this problem, the system comprises multiple agents,
each having a set of local data points, that are connected to a server. The
goal for the agents is to compute a linear mathematical model that optimally
fits the collective data points held by all the agents, without sharing their
individual local data points. This goal can be achieved, in principle, using
the server-agent variant of the traditional iterative gradient-descent method.
The gradient-descent method converges linearly to a solution, and its rate of
convergence is lower bounded by the conditioning of the agents' collective data
points. If the data points are ill-conditioned, the gradient-descent method may
require a large number of iterations to converge.
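For reference, the problem described above can be written compactly. The notation below (A_i, b_i for agent i's local data points, x for the shared model parameters, and m agents) is assumed for illustration and is not taken verbatim from the paper.

```latex
% Distributed least-squares objective: each agent i holds local data (A_i, b_i);
% the server seeks x that fits the collective data without accessing the raw
% local data points.
\[
  x^{\ast} \in \arg\min_{x \in \mathbb{R}^{d}} \;
  \frac{1}{2} \sum_{i=1}^{m} \lVert A_i x - b_i \rVert^{2}
\]
% The linear convergence rate of plain gradient descent on this objective is
% governed by the condition number of \(\sum_i A_i^{\top} A_i\); a large
% condition number (ill-conditioned data) implies slow convergence.
```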
We propose an iterative pre-conditioning technique that mitigates the
deleterious effect of the conditioning of data points on the rate of
convergence of the gradient-descent method. We rigorously show that the
resulting pre-conditioned gradient-descent method, with the proposed iterative
pre-conditioning, achieves superlinear convergence when the least-squares
problem has a unique solution. In general, the convergence is linear with an
improved rate of convergence in comparison to the traditional gradient-descent
method and the state-of-the-art accelerated gradient-descent methods. We
further illustrate the improved rate of convergence of our proposed algorithm
through experiments on different real-world least-squares problems in both
noise-free and noisy computation environments.
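The sketch below illustrates the general idea of iterative pre-conditioning in a simplified, centralized form: a pre-conditioner matrix K is refined alongside the parameter estimate and applied to the gradient at each step. The specific update rules, step sizes (alpha, delta), and function name here are assumptions chosen for clarity; they are not the authors' exact server-agent protocol.

```python
import numpy as np

def ipg_least_squares(A, b, alpha, delta=1.0, iters=2000):
    """Illustrative iteratively pre-conditioned gradient descent for
    minimizing 0.5 * ||A x - b||^2 (centralized simplification)."""
    n = A.shape[1]
    x = np.zeros(n)
    K = np.zeros((n, n))      # pre-conditioner, refined every iteration
    H = A.T @ A               # Gram matrix of the collective data
    for _ in range(iters):
        # Refine K toward an approximate inverse of H (Richardson-type step),
        # rather than inverting H directly.
        K = K - alpha * (H @ K - np.eye(n))
        # Pre-conditioned gradient step on the least-squares objective.
        grad = A.T @ (A @ x - b)
        x = x - delta * (K @ grad)
    return x

if __name__ == "__main__":
    # Synthetic, deliberately ill-conditioned example.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5)) @ np.diag([1.0, 1.0, 1.0, 0.1, 0.01])
    b = A @ np.array([1.0, -2.0, 0.5, 3.0, -1.0])
    L = np.linalg.eigvalsh(A.T @ A).max()   # largest eigenvalue of the Gram matrix
    x = ipg_least_squares(A, b, alpha=1.0 / L)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

In this sketch, as K approaches the inverse of the Gram matrix, the pre-conditioned direction approaches the exact Newton step for the quadratic objective, so the effective contraction factor improves over iterations; this is a loose, centralized analogue of the accelerated behavior the abstract attributes to the proposed method.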