Distributed Online Learning with Multiple Kernels
- URL: http://arxiv.org/abs/2102.12733v2
- Date: Fri, 26 Feb 2021 07:06:13 GMT
- Title: Distributed Online Learning with Multiple Kernels
- Authors: Jeongmin Chae and Songnam Hong
- Abstract summary: We consider the problem of learning a nonlinear function over a network of learners in a fully decentralized fashion.
Online learning is additionally assumed, where every learner receives continuous streaming data locally.
We propose a novel learning framework with multiple kernels, which is named DOMKL.
- Score: 15.102346715690755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of learning a nonlinear function over a network of
learners in a fully decentralized fashion. Online learning is additionally
assumed, where every learner receives continuous streaming data locally. This
learning model is called fully distributed online learning (or fully
decentralized online federated learning). For this model, we propose a novel
learning framework with multiple kernels, which is named DOMKL. The proposed
DOMKL is devised by harnessing the principles of an online alternating
direction method of multipliers and a distributed Hedge algorithm. We
theoretically prove that DOMKL over T time slots can achieve an optimal
sublinear regret, implying that every learner in the network can learn a common
function which has a diminishing gap from the best function in hindsight. Our
analysis also reveals that DOMKL yields the same asymptotic performance as the
state-of-the-art centralized approach while keeping local data at edge
learners. Via numerical tests with real datasets, we demonstrate the
effectiveness of the proposed DOMKL on various online regression and
time-series prediction tasks.
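The abstract describes DOMKL only at a high level (online ADMM plus a distributed Hedge algorithm over multiple kernels). The sketch below is a minimal, illustrative Python rendering of that idea under stated assumptions: each learner keeps one online model per kernel (approximated here with random Fourier features), combines them with Hedge-style multiplicative weights, and periodically averages model parameters with other learners instead of sharing raw data. The random-feature approximation, the plain-averaging consensus step (standing in for the online ADMM update), and all constants are assumptions for illustration, not the paper's exact DOMKL updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumed for this sketch, not from the paper)
KERNEL_BANDWIDTHS = [0.5, 1.0, 2.0]  # hypothetical Gaussian-kernel widths
D = 50          # random features per kernel
ETA = 0.1       # per-kernel online gradient step size
HEDGE_LR = 0.5  # Hedge learning rate

def make_rff(dim, bandwidth):
    """Random Fourier feature map approximating a Gaussian kernel."""
    W = rng.normal(scale=1.0 / bandwidth, size=(D, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

class Learner:
    """One edge learner: per-kernel linear models plus Hedge weights."""
    def __init__(self, feature_maps):
        self.feature_maps = feature_maps           # shared across the network
        self.w = [np.zeros(D) for _ in feature_maps]
        self.hedge = np.ones(len(feature_maps))

    def predict(self, x):
        p = self.hedge / self.hedge.sum()
        preds = np.array([wk @ fm(x) for wk, fm in zip(self.w, self.feature_maps)])
        return float(p @ preds), preds

    def update(self, x, y):
        yhat, preds = self.predict(x)
        for k, fm in enumerate(self.feature_maps):
            # Online gradient step on the squared loss of kernel k
            self.w[k] -= ETA * 2.0 * (preds[k] - y) * fm(x)
        # Hedge-style multiplicative update: downweight poorly performing kernels
        self.hedge *= np.exp(-HEDGE_LR * (preds - y) ** 2)
        return (yhat - y) ** 2

def consensus(learners):
    """Plain model averaging; a stand-in for the ADMM-based consensus step
    of the actual framework (raw data never leaves a learner)."""
    for k in range(len(learners[0].w)):
        avg = np.mean([ln.w[k] for ln in learners], axis=0)
        for ln in learners:
            ln.w[k] = avg.copy()

# Toy run on synthetic streaming regression data
dim, T, n_learners = 3, 200, 4
feature_maps = [make_rff(dim, bw) for bw in KERNEL_BANDWIDTHS]
learners = [Learner(feature_maps) for _ in range(n_learners)]
for t in range(T):
    for ln in learners:
        x = rng.normal(size=dim)
        y = np.sin(x.sum())            # unknown nonlinear target
        ln.update(x, y)
    consensus(learners)                # exchange model parameters only
```

In this toy setup the Hedge weights gradually concentrate on the kernel widths that fit the stream best, while the consensus step keeps the learners' per-kernel models aligned without moving local data.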
Related papers
- Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to simple pre-defined random transforms.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z) - Locally Differentially Private Gradient Tracking for Distributed Online
Learning over Directed Graphs [2.1271873498506038]
We propose a locally differentially private gradient tracking based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
arXiv Detail & Related papers (2023-10-24T18:15:25Z) - Continual Learning with Deep Streaming Regularized Discriminant Analysis [0.0]
We propose a streaming version of regularized discriminant analysis as a solution to this challenge.
We combine our algorithm with a convolutional neural network and demonstrate that it outperforms both batch learning and existing streaming learning algorithms.
arXiv Detail & Related papers (2023-09-15T12:25:42Z) - Online Distributed Learning with Quantized Finite-Time Coordination [0.4910937238451484]
In our setting, a set of agents needs to cooperatively train a learning model from streaming data.
We propose a distributed algorithm that relies on a quantized, finite-time coordination protocol.
We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution.
arXiv Detail & Related papers (2023-07-13T08:36:15Z) - Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement in large-scale networks online.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z) - QC-ODKLA: Quantized and Communication-Censored Online Decentralized
Kernel Learning via Linearized ADMM [30.795725108364724]
This paper focuses on online kernel learning over a decentralized network.
We propose a novel learning framework named Online Decentralized Kernel learning via Linearized ADMM.
arXiv Detail & Related papers (2022-08-04T17:16:27Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z) - Wide and Deep Graph Neural Network with Distributed Online Learning [174.8221510182559]
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data.
Online learning can be leveraged to retrain GNNs at testing time to overcome this issue.
This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2021-07-19T23:56:48Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We present the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z) - Distributed Online Learning with Multiple Kernels [10.203602318836444]
We present a privacy-preserving distributed online learning framework with multiple kernels (named DOMKL).
We theoretically prove that DOMKL over T time slots can achieve an optimal sublinear regret.
We verify the effectiveness of the proposed DOMKL on regression and time-series prediction tasks.
arXiv Detail & Related papers (2020-11-17T20:29:00Z) - Wide and Deep Graph Neural Networks with Distributed Online Learning [175.96910854433574]
Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures.
Online learning can be used to retrain GNNs at testing time, overcoming this issue.
This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms.
arXiv Detail & Related papers (2020-06-11T12:48:03Z)