DeEPCA: Decentralized Exact PCA with Linear Convergence Rate
- URL: http://arxiv.org/abs/2102.03990v1
- Date: Mon, 8 Feb 2021 03:53:09 GMT
- Title: DeEPCA: Decentralized Exact PCA with Linear Convergence Rate
- Authors: Haishan Ye, Tong Zhang
- Abstract summary: \texttt{DeEPCA} is the first decentralized PCA algorithm with the number of communication rounds per power iteration independent of the target precision.
Compared to existing algorithms, the proposed method is easier to tune in practice, with an improved overall communication cost.
- Score: 26.819982387028595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the rapid growth of smart agents such as weakly connected
computational nodes and sensors, developing decentralized algorithms that can
perform computations on local agents has become a major research direction. This
paper considers the problem of decentralized Principal Component Analysis
(PCA), a statistical method widely used for data analysis. We
introduce a technique called subspace tracking to reduce the communication
cost, and apply it to power iterations. This leads to a decentralized PCA
algorithm called \texttt{DeEPCA}, which has a convergence rate similar to that
of the centralized PCA, while achieving the best communication complexity among
existing decentralized PCA algorithms. \texttt{DeEPCA} is the first
decentralized PCA algorithm with the number of communication rounds for each
power iteration independent of target precision. Compared to existing
algorithms, the proposed method is easier to tune in practice, with an improved
overall communication cost. Our experiments validate the advantages of
\texttt{DeEPCA} empirically.
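The abstract's key mechanism is a tracking correction applied to the power method, so that each power iteration needs only a fixed number of communication (gossip) rounds, independent of the target precision. Below is a minimal, hedged Python sketch of that idea: decentralized power iteration with a gradient-tracking-style "subspace tracking" correction over a doubly stochastic mixing matrix. The function name, the local covariance matrices `A_list`, the fixed `gossip_rounds` per iteration, and the plain local QR step are illustrative assumptions, not the exact updates of \texttt{DeEPCA}.

```python
import numpy as np

def decentralized_pca_sketch(A_list, W, k, n_power_iters=50, gossip_rounds=3, seed=0):
    """Hedged sketch of decentralized power iteration with subspace tracking.

    A_list[i] : node i's local (d x d) covariance matrix.
    W         : (n x n) doubly stochastic mixing matrix of the network.
    k         : number of principal components to estimate.
    All names and constants are illustrative, not taken from the paper.
    """
    n, d = len(A_list), A_list[0].shape[0]
    rng = np.random.default_rng(seed)
    X0, _ = np.linalg.qr(rng.standard_normal((d, k)))   # shared random start
    X = [X0.copy() for _ in range(n)]                    # local subspace estimates
    prev = [A_list[i] @ X[i] for i in range(n)]          # previous local products
    S = [p.copy() for p in prev]                         # tracking variables

    for _ in range(n_power_iters):
        # Local power-iteration step plus tracking correction: S_i accumulates
        # the change in the local product, so after mixing it approximates the
        # network-wide average of A_j @ X_j (the "subspace tracking" idea).
        new = [A_list[i] @ X[i] for i in range(n)]
        S = [S[i] + new[i] - prev[i] for i in range(n)]
        prev = new
        # A fixed, precision-independent number of gossip rounds per iteration.
        for _ in range(gossip_rounds):
            S = [sum(W[i, j] * S[j] for j in range(n)) for i in range(n)]
        # Local orthonormalization. The real algorithm must keep these QR
        # factorizations consistent across nodes (e.g., by fixing signs),
        # which this sketch omits for brevity.
        X = [np.linalg.qr(S[i])[0] for i in range(n)]
    return X
```

With a fully connected network (all entries of W equal to 1/n) and identical local covariances, this sketch collapses to the centralized power method, which is the limiting behaviour such a scheme should recover.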
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an acceleration Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM)
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - Accelerating Wireless Federated Learning via Nesterov's Momentum and Distributed Principle Component Analysis [59.127630388320036]
A wireless federated learning system is investigated by allowing a server and workers to exchange uncoded information via wireless channels.
Since the workers frequently upload local updates to the server via bandwidth-limited channels, the uplink transmission from the workers to the server becomes a communication bottleneck.
A one-shot distributed principal component analysis (PCA) is leveraged to reduce the dimension of the uploaded data and relieve the communication bottleneck.
arXiv Detail & Related papers (2023-03-31T08:41:42Z) - An online algorithm for contrastive Principal Component Analysis [9.090031210111919]
We derive an online algorithm for cPCA* and show that it maps onto a neural network with local learning rules, so it can potentially be implemented in energy efficient neuromorphic hardware.
We evaluate the performance of our online algorithm on real datasets and highlight the differences and similarities with the original formulation.
arXiv Detail & Related papers (2022-11-14T19:48:48Z) - DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimization.
We develop DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z) - Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis [27.21581944906418]
Actor-critic (AC) algorithms have been widely adopted in decentralized multi-agent systems.
We develop two decentralized AC and natural AC (NAC) algorithms that are private, sample-efficient, and communication-efficient.
arXiv Detail & Related papers (2021-09-08T15:02:21Z) - FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis [12.91948651812873]
Principal Component Analysis (PCA) is a fundamental data preprocessing tool in the world of machine learning.
This paper proposes a distributed PCA algorithm called FAST-PCA (Fast and exAct diSTributed PCA)
arXiv Detail & Related papers (2021-08-27T16:10:59Z) - Turning Channel Noise into an Accelerator for Over-the-Air Principal Component Analysis [65.31074639627226]
Principal component analysis (PCA) is a technique for extracting the linear structure of a dataset.
We propose deploying PCA over a multi-access channel using a gradient-descent-based algorithm.
Over-the-air aggregation is adopted to reduce the multi-access latency, giving the name over-the-air PCA.
arXiv Detail & Related papers (2021-04-20T16:28:33Z) - A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators.
We prove that our method can solve the problems without any increase in the number of communications compared to the baseline.
arXiv Detail & Related papers (2020-11-03T13:35:53Z) - Decentralized Deep Learning using Momentum-Accelerated Consensus [15.333413663982874]
We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset.
We propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology.
Our algorithm is based on the heavy-ball acceleration method used in gradient-based protocols (a minimal consensus-with-momentum sketch appears after this list).
arXiv Detail & Related papers (2020-10-21T17:39:52Z) - Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size gets large, decentralized learning faces a major bottleneck that is the communication load due to each node transmitting messages (model updates) to its neighbors.
We propose the quantized decentralized learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization.
arXiv Detail & Related papers (2020-02-23T18:25:39Z)
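Several of the entries above, notably the momentum-accelerated consensus paper, combine gossip averaging over a fixed topology with a momentum term. As a point of reference, here is a minimal, hypothetical sketch of one such round; the update order, the doubly stochastic mixing matrix `W`, and all names are illustrative assumptions rather than the exact method of any cited paper.

```python
import numpy as np

def heavy_ball_consensus_step(x, v, grads, W, lr=0.1, beta=0.9):
    """One hypothetical round of decentralized gradient descent with
    heavy-ball momentum.

    x     : (n_agents, d) stacked local parameters.
    v     : (n_agents, d) momentum buffers.
    grads : (n_agents, d) local gradients at the current parameters.
    W     : (n_agents, n_agents) doubly stochastic mixing matrix.
    """
    mixed = W @ x               # consensus (gossip) averaging with neighbours
    v = beta * v - lr * grads   # heavy-ball momentum update
    x = mixed + v               # local parameter step
    return x, v
```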