Distributed Online Learning with Multiple Kernels
- URL: http://arxiv.org/abs/2011.08930v1
- Date: Tue, 17 Nov 2020 20:29:00 GMT
- Title: Distributed Online Learning with Multiple Kernels
- Authors: Jeongmin Chae, Songnam Hong
- Abstract summary: We present a privacy-preserving distributed online learning framework with multiple kernels (named DOMKL).
We theoretically prove that DOMKL over T time slots can achieve an optimal sublinear regret.
We verify the effectiveness of the proposed DOMKL on regression and time-series prediction tasks.
- Score: 10.203602318836444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the Internet-of-Things (IoT) systems, there are plenty of informative data
provided by a massive number of IoT devices (e.g., sensors). Learning a
function from such data is of great interest in machine learning tasks for IoT
systems. Focusing on streaming (or sequential) data, we present a
privacy-preserving distributed online learning framework with multiple kernels
(named DOMKL). The proposed DOMKL is devised by leveraging the principles of the
online alternating direction method of multipliers (OADMM) and a distributed Hedge
algorithm. We theoretically prove that DOMKL over T time slots can achieve an
optimal sublinear regret, implying that every learned function achieves the
performance of the best function in hindsight as in the state-of-the-art
centralized online learning method. Moreover, it is ensured that the learned
functions of any two neighboring learners have a negligible difference as T
grows, i.e., the so-called consensus constraints hold. Via experimental tests
with various real datasets, we verify the effectiveness of the proposed DOMKL
on regression and time-series prediction tasks.
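To make the two named ingredients concrete, here is a minimal Python sketch of online multi-kernel learning with Hedge weights plus a toy consensus step between neighboring learners. The class names, the budgeted kernel predictor, and the simple weight-averaging consensus are illustrative assumptions; the paper's actual OADMM consensus update is more involved.

```python
import numpy as np

def gaussian_kernel(x1, x2, gamma):
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

class OnlineKernelLearner:
    """Single-kernel online regressor via functional gradient descent on squared loss."""
    def __init__(self, gamma, lr=0.5, budget=100):
        self.gamma, self.lr, self.budget = gamma, lr, budget
        self.centers, self.coefs = [], []

    def predict(self, x):
        return sum(c * gaussian_kernel(x, xc, self.gamma)
                   for c, xc in zip(self.coefs, self.centers))

    def update(self, x, y):
        err = self.predict(x) - y
        self.centers.append(x)
        self.coefs.append(-self.lr * err)
        # keep only the most recent centers so memory stays bounded
        self.centers = self.centers[-self.budget:]
        self.coefs = self.coefs[-self.budget:]

class MultiKernelLearner:
    """One network node: Hedge (exponential) weights over per-kernel predictions."""
    def __init__(self, gammas, eta=0.1):
        self.learners = [OnlineKernelLearner(g) for g in gammas]
        self.w = np.full(len(gammas), 1.0 / len(gammas))
        self.eta = eta

    def predict(self, x):
        return float(np.dot(self.w, [l.predict(x) for l in self.learners]))

    def update(self, x, y):
        losses = np.array([(l.predict(x) - y) ** 2 for l in self.learners])
        self.w *= np.exp(-self.eta * losses)  # Hedge: downweight lossy kernels
        self.w /= self.w.sum()
        for l in self.learners:
            l.update(x, y)

def consensus_step(nodes, neighbors, mix=0.5):
    """Toy consensus: each node mixes its kernel weights with its neighbors' average.
    Stands in for the OADMM consensus update, which also constrains the functions."""
    new_ws = []
    for i, node in enumerate(nodes):
        nbr_ws = [nodes[j].w for j in neighbors[i]]
        avg = np.mean(nbr_ws, axis=0) if nbr_ws else node.w
        new_ws.append((1 - mix) * node.w + mix * avg)
    for node, w in zip(nodes, new_ws):
        node.w = w / w.sum()
```

At each time slot, every node predicts on its local sample, updates its kernel weights and predictors, and then runs the consensus step over the network graph, which is what drives neighboring learners' functions together as T grows.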
Related papers
- Online Control-Informed Learning [4.907545537403502]
This paper proposes an Online Control-Informed Learning framework to solve a broad class of learning and control tasks in real time.
By considering any robot as a tunable optimal control system, we propose an online parameter estimator based on the extended Kalman filter (EKF); a generic EKF sketch follows this entry.
The proposed method also improves robustness in learning by effectively managing noise in the data.
arXiv Detail & Related papers (2024-10-04T21:03:16Z)
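As a companion to the entry above, here is a minimal joint state-and-parameter EKF in Python. The scalar system x_{t+1} = theta*x_t + u_t with observation y_t = x_t + noise, and all names and noise levels, are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def ekf_parameter_estimation(ys, us, q=1e-5, r=1e-2):
    """Jointly estimate state x and unknown parameter theta for
    x_{t+1} = theta * x_t + u_t,  y_t = x_t + noise (illustrative system)."""
    z = np.array([0.0, 0.5])            # augmented state [x, theta], initial guess
    P = np.eye(2)                       # state covariance
    Q = q * np.eye(2)                   # process noise (lets theta drift slowly)
    H = np.array([[1.0, 0.0]])          # we only observe x
    history = []
    for y, u in zip(ys, us):
        x, th = z
        F = np.array([[th, x],          # Jacobian of the augmented dynamics
                      [0.0, 1.0]])
        z = np.array([th * x + u, th])  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r             # innovation covariance (scalar)
        K = P @ H.T / S                 # Kalman gain
        z = z + (K * (y - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
        history.append(z[1])            # running estimate of theta
    return history

# tiny demo with true theta = 0.9
rng = np.random.default_rng(0)
us = rng.normal(size=500)
xs = [0.0]
for u in us:
    xs.append(0.9 * xs[-1] + u)
ys = np.array(xs[1:]) + 0.1 * rng.normal(size=500)
print(ekf_parameter_estimation(ys, us)[-1])   # should approach 0.9
```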
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD (federated learning via ensemble knowledge distillation) enables a more flexible aggregation method than conventional model fusion techniques; a generic distillation sketch follows this entry.
Experimental results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
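To make the ensemble-distillation idea above concrete, here is a small Python sketch of server-side knowledge distillation from client predictions on a shared proxy set. The linear student, temperature, and shapes are illustrative assumptions; this is not the paper's exact FLEKD procedure.

```python
import numpy as np

def softmax(z, temp=1.0):
    z = z / temp
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_from_clients(client_logits, X_proxy, n_classes, temp=2.0, lr=0.1, epochs=300):
    """Train a linear student on soft labels averaged over client (teacher) models.
    client_logits: shape (n_clients, n_proxy, n_classes), computed by each client
    on a shared unlabeled proxy dataset."""
    soft_targets = softmax(np.asarray(client_logits).mean(axis=0), temp)  # teacher ensemble
    W = np.zeros((X_proxy.shape[1], n_classes))
    for _ in range(epochs):
        probs = softmax(X_proxy @ W, temp)
        # gradient of soft-label cross-entropy w.r.t. W (temperature factor folded into lr)
        grad = X_proxy.T @ (probs - soft_targets) / len(X_proxy)
        W -= lr * grad
    return W
```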
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules; a toy chaining sketch follows this entry.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
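A toy Python sketch of the chaining pattern described above: three stages sharing a mutable memory, each a call to an injected `llm` callable. The `llm` function, prompts, and memory fields are hypothetical illustrations, not the system's actual tools.

```python
def tutoring_turn(student_msg, memory, llm):
    """One interaction-reflection-reaction cycle over shared memory.
    `llm` is any callable mapping a prompt string to a completion string."""
    # interaction: answer the student using the dialogue history
    reply = llm(f"Dialogue so far: {memory['dialogue']}\n"
                f"Student: {student_msg}\nTutor:")
    memory["dialogue"].append((student_msg, reply))
    # reflection: update a persistent profile of the student's understanding
    memory["profile"] = llm(f"Summarize this student's current understanding "
                            f"and gaps: {memory['dialogue']}")
    # reaction: plan the next exercise from the refreshed profile
    next_step = llm(f"Given this profile: {memory['profile']}\n"
                    f"Propose the next exercise.")
    return reply, next_step

# usage with any LLM client wrapped as a string-to-string function
memory = {"dialogue": [], "profile": ""}
echo_llm = lambda prompt: f"<completion for: {prompt[:40]}...>"  # stand-in model
print(tutoring_turn("What is a derivative?", memory, echo_llm))
```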
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online; this test-then-train loop is sketched after this entry.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
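A minimal Python sketch of the test-then-train protocol described above, assuming a scikit-learn-style incremental model with `predict` and `partial_fit`; the names are illustrative.

```python
import numpy as np

def test_then_train(stream, model, classes):
    """Online continual evaluation: score each batch before learning from it."""
    correct, total = 0, 0
    first = True
    for X_batch, y_batch in stream:
        if not first:
            preds = model.predict(X_batch)  # evaluate on data not yet seen
            correct += int(np.sum(preds == y_batch))
            total += len(y_batch)
        # then the same batch joins the training set
        model.partial_fit(X_batch, y_batch, classes=classes if first else None)
        first = False
    return correct / max(total, 1)
```

With, say, `sklearn.linear_model.SGDClassifier` as `model`, this reproduces the "first test, then train" accounting the entry describes; the very first batch is only trained on, since the model has seen nothing yet.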
- Distributed Online Learning with Multiple Kernels [15.102346715690755]
We consider the problem of learning a nonlinear function over a network of learners in a fully decentralized fashion.
Online learning is additionally assumed, where every learner receives continuous streaming data locally.
We propose a novel learning framework with multiple kernels, which is named DOMKL.
arXiv Detail & Related papers (2021-02-25T08:58:49Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning; a generic successor-feature update is sketched after this entry.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
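Successor features underpin the entry above; here is a generic tabular SARSA-style update for them in Python. The feature map, shapes, and update rule are textbook successor-feature learning, not the paper's ITD algorithm.

```python
import numpy as np

def learn_successor_features(transitions, n_states, n_actions, phi, d,
                             gamma=0.99, alpha=0.1):
    """Tabular TD learning of successor features:
    psi(s, a) ~= E[ sum_t gamma^t * phi(s_t, a_t) | s_0 = s, a_0 = a ]."""
    psi = np.zeros((n_states, n_actions, d))
    for s, a, s_next, a_next in transitions:
        target = phi(s, a) + gamma * psi[s_next, a_next]
        psi[s, a] += alpha * (target - psi[s, a])
    return psi

# if rewards decompose as r(s, a) = w . phi(s, a), then Q(s, a) = w . psi(s, a),
# so one set of successor features yields Q-values for many reward vectors w
```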
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
- Plasticity-Enhanced Domain-Wall MTJ Neural Networks for Energy-Efficient Online Learning [9.481629586734497]
We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ).
We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules.
Our energy analysis confirms the value of the approach, as the learning budget stays below 20 $\mu$J even for large tasks typically used in machine learning.
arXiv Detail & Related papers (2020-03-04T22:45:59Z)
- Performance Analysis and Comparison of Machine and Deep Learning Algorithms for IoT Data Classification [0.0]
This paper evaluates the performance of 11 popular machine and deep learning algorithms on classification tasks using six IoT-related datasets.
Considering all performance metrics, Random Forest performed best among the machine learning models, while ANN and CNN achieved the most promising results among the deep learning models; a generic evaluation-loop sketch follows this entry.
arXiv Detail & Related papers (2020-01-27T09:14:11Z)
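The kind of comparison the entry above describes can be run with a simple cross-validated loop. The sketch below uses scikit-learn with a synthetic stand-in for the IoT datasets and only a few of the eleven model families; the dataset, models, and metric are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for an IoT dataset; the paper uses six real IoT-related datasets
X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "LogisticRegression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
}

# 5-fold cross-validation, reporting mean and spread of macro-F1 per model
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```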
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.