QC-ODKLA: Quantized and Communication-Censored Online Decentralized
Kernel Learning via Linearized ADMM
- URL: http://arxiv.org/abs/2208.02777v1
- Date: Thu, 4 Aug 2022 17:16:27 GMT
- Title: QC-ODKLA: Quantized and Communication-Censored Online Decentralized
Kernel Learning via Linearized ADMM
- Authors: Ping Xu, Yue Wang, Xiang Chen, Zhi Tian
- Abstract summary: This paper focuses on online kernel learning over a decentralized network.
We propose a novel learning framework named Online Decentralized Kernel learning via Linearized ADMM.
- Score: 30.795725108364724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on online kernel learning over a decentralized network.
Each agent in the network receives continuous streaming data locally and works
collaboratively to learn a nonlinear prediction function that is globally
optimal in the reproducing kernel Hilbert space with respect to the total
instantaneous costs of all agents. In order to circumvent the curse of
dimensionality issue in traditional online kernel learning, we utilize random
feature (RF) mapping to convert the non-parametric kernel learning problem into
a fixed-length parametric one in the RF space. We then propose a novel learning
framework named Online Decentralized Kernel learning via Linearized ADMM
(ODKLA) to efficiently solve the online decentralized kernel learning problem.
To further improve the communication efficiency, we add the quantization and
censoring strategies in the communication stage and develop the Quantized and
Communication-censored ODKLA (QC-ODKLA) algorithm. We theoretically prove that
both ODKLA and QC-ODKLA can achieve the optimal sublinear regret
$\mathcal{O}(\sqrt{T})$ over $T$ time slots. Through numerical experiments, we
evaluate the learning effectiveness, communication, and computation
efficiencies of the proposed methods.
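The pipeline in the abstract combines three ingredients: an RF (random Fourier feature) map that turns the non-parametric kernel model into a fixed-length parametric one, a per-agent update on streaming data, and a communication stage in which transmitted states are quantized and censored. The sketch below is a minimal, self-contained illustration of those ingredients under simplifying assumptions (Gaussian kernel, squared instantaneous loss, two agents, and a plain online gradient step standing in for the paper's linearized-ADMM update); names such as `make_rf_map`, `quantize`, and `censor_threshold` are illustrative and not taken from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

def make_rf_map(input_dim, num_features, gamma=1.0):
    """Random Fourier feature map z(x) whose inner products approximate a
    Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(num_features, input_dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return lambda x: np.sqrt(2.0 / num_features) * np.cos(W @ x + b)

def quantize(v, num_bits=4, v_max=1.0):
    """Uniform quantizer clipping each entry to [-v_max, v_max]."""
    levels = 2 ** num_bits - 1
    step = 2.0 * v_max / levels
    v = np.clip(v, -v_max, v_max)
    return np.round((v + v_max) / step) * step - v_max

D = 100                                  # number of random features (fixed model length)
rf = make_rf_map(input_dim=3, num_features=D)
theta = [np.zeros(D), np.zeros(D)]       # local models in the RF space (2 agents)
last_sent = [np.zeros(D), np.zeros(D)]   # most recently transmitted states
eta, censor_threshold = 0.1, 1e-2

for t in range(200):
    # Local update stage: each agent sees one streaming sample.
    for i in range(2):
        x_t = rng.normal(size=3)          # streaming input at agent i
        y_t = np.sin(x_t).sum()           # toy nonlinear target
        z = rf(x_t)
        grad = (theta[i] @ z - y_t) * z   # gradient of the instantaneous squared loss
        theta[i] = theta[i] - eta * grad  # stand-in for the linearized-ADMM primal step
    # Communication stage: transmit a quantized state only if it has moved
    # enough since the last transmission (censoring), then average with the neighbor.
    for i in range(2):
        if np.linalg.norm(theta[i] - last_sent[i]) >= censor_threshold:
            last_sent[i] = quantize(theta[i])
    theta = [0.5 * (theta[i] + last_sent[1 - i]) for i in range(2)]
```
The final averaging step is a placeholder for neighbor communication: on a general graph, each agent would combine its own state with the last (quantized, censored) states received from its graph neighbors rather than from a single peer.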
Related papers
- DRACO: Decentralized Asynchronous Federated Learning over Continuous Row-Stochastic Network Matrices [7.389425875982468]
We propose DRACO, a novel method for decentralized asynchronous Stochastic Gradient Descent (SGD) over row-stochastic gossip wireless networks.
Our approach enables edge devices within decentralized networks to perform local training and model exchanging along a continuous timeline.
Our numerical experiments corroborate the efficacy of the proposed technique.
arXiv Detail & Related papers (2024-06-19T13:17:28Z) - Robust Decentralized Learning with Local Updates and Gradient Tracking [16.46727164965154]
We consider decentralized learning as a network of communicating clients or nodes.
We propose a decentralized minimax optimization method that employs two important techniques: local updates and gradient tracking.
arXiv Detail & Related papers (2024-05-02T03:03:34Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - CoDeC: Communication-Efficient Decentralized Continual Learning [6.663641564969944]
Training at the edge utilizes continuously evolving data generated at different locations.
Privacy concerns prohibit the co-location of this spatially as well as temporally distributed data.
We propose CoDeC, a novel communication-efficient decentralized continual learning algorithm.
arXiv Detail & Related papers (2023-03-27T16:52:17Z) - Online Attentive Kernel-Based Temporal Difference Learning [13.94346725929798]
Online Reinforcement Learning (RL) has been receiving increasing attention due to its fast learning capability and improved data efficiency.
Online RL often suffers from complex Value Function Approximation (VFA) and catastrophic interference.
We propose an Online Attentive Kernel-Based Temporal Difference (OAKTD) algorithm using two-timescale optimization.
arXiv Detail & Related papers (2022-01-22T14:47:10Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - A Low Complexity Decentralized Neural Net with Centralized Equivalence
using Layer-wise Learning [49.15799302636519]
We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns.
We show that it is possible to achieve equivalent learning performance as if the data is available in a single place.
arXiv Detail & Related papers (2020-09-29T13:08:12Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z) - COKE: Communication-Censored Decentralized Kernel Learning [30.795725108364724]
Multiple interconnected agents aim to learn an optimal decision function defined over a reproducing kernel Hilbert space by jointly minimizing a global objective function.
As a non-parametric approach, kernel learning faces a major challenge in distributed implementation.
We develop a communication-censored kernel learning (COKE) algorithm that reduces the communication load of DKLA by preventing an agent from transmitting at every iteration unless its local updates are deemed informative.
arXiv Detail & Related papers (2020-01-28T01:05:57Z) - Distributed Learning in the Non-Convex World: From Batch to Streaming
Data, and Beyond [73.03743482037378]
Distributed learning has become a critical direction of the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research will also be discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)