Gossiped and Quantized Online Multi-Kernel Learning
- URL: http://arxiv.org/abs/2301.09848v2
- Date: Fri, 28 Apr 2023 18:38:06 GMT
- Title: Gossiped and Quantized Online Multi-Kernel Learning
- Authors: Tomas Ortega and Hamid Jafarkhani
- Abstract summary: Past research has shown that distributed and online multi-kernel learning provides sub-linear regret as long as every pair of nodes in the network can communicate.
This letter extends these results to non-fully connected graphs, which are common in wireless sensor networks.
We propose a gossip algorithm and provide a proof that it achieves sub-linear regret.
- Score: 39.057968279167966
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In instances of online kernel learning where little prior information is
available and centralized learning is unfeasible, past research has shown that
distributed and online multi-kernel learning provides sub-linear regret as long
as every pair of nodes in the network can communicate (i.e., the communications
network is a complete graph). In addition, to manage the communication load,
which is often a performance bottleneck, communications between nodes can be
quantized. This letter extends these results to non-fully connected graphs, which
are common in wireless sensor networks. To address this challenge,
we propose a gossip algorithm and provide a proof that it achieves sub-linear
regret. Experiments with real datasets confirm our findings.
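To make the ingredients concrete, here is a minimal, illustrative sketch (not the authors' algorithm) of one gossip round over a non-complete graph in which each node quantizes its vector of per-kernel weights before sharing it with its neighbors. The function names, the uniform quantizer, and the mixing step are assumptions made for the example.

```python
# Illustrative sketch only: gossiped, quantized averaging of per-kernel weights.
# The quantizer, the mixing coefficient, and all names are assumptions, not the paper's method.
import numpy as np

def quantize(x, num_bits=4):
    # Uniform quantization: scale to [-1, 1], round to 2^(b-1)-1 levels, rescale.
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (num_bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def gossip_round(weights, neighbors, mix=0.5, num_bits=4):
    # weights: {node: array of per-kernel weights}; neighbors: {node: list of adjacent nodes}.
    sent = {j: quantize(w, num_bits) for j, w in weights.items()}  # each node broadcasts a quantized copy
    updated = {}
    for i, w in weights.items():
        if neighbors[i]:
            neighbor_avg = np.mean([sent[j] for j in neighbors[i]], axis=0)
            updated[i] = (1 - mix) * w + mix * neighbor_avg  # pull local weights toward the neighborhood average
        else:
            updated[i] = w
    return updated

# Example on a 3-node line graph (0 - 1 - 2), which is not a complete graph.
weights = {i: np.random.rand(5) for i in range(3)}   # 5 kernels per node
neighbors = {0: [1], 1: [0, 2], 2: [1]}
weights = gossip_round(weights, neighbors)
```

In a full online learner, a round like this would be interleaved with each node's local kernel-weight updates; the sketch only shows the communication and mixing step.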
Related papers
- Communication Efficient Distributed Learning for Kernelized Contextual Bandits [58.78878127799718]
We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting.
We consider non-linear reward mappings, by letting agents collaboratively search in a reproducing kernel Hilbert space.
We rigorously prove that our algorithm attains a sub-linear rate in both regret and communication cost.
arXiv Detail & Related papers (2022-06-10T01:39:15Z)
- Distributed Learning for Time-varying Networks: A Scalable Design [13.657740129012804]
We propose a distributed learning framework based on a scalable deep neural network (DNN) design.
By exploiting the permutation equivariance and invariance properties of the learning tasks, DNNs of different scales can be built for different clients.
Model aggregation can also be conducted based on these two sub-matrices to improve the learning convergence and performance.
arXiv Detail & Related papers (2021-07-31T12:44:28Z)
- Graph-based Deep Learning for Communication Networks: A Survey [1.1977931648859175]
This paper is the first survey that focuses on the application of graph-based deep learning methods in communication networks.
To track the follow-up research, a public GitHub repository is created, where the relevant papers will be updated continuously.
arXiv Detail & Related papers (2021-06-04T14:59:10Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Computational Separation Between Convolutional and Fully-Connected Networks [35.39956227364153]
We show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks.
Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent.
arXiv Detail & Related papers (2020-10-03T14:24:59Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
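A minimal sketch of how such edge weights might be made learnable, assuming a PyTorch-style module: the class name, the sigmoid gating, and the per-node linear transforms are illustrative choices, not the paper's architecture.

```python
# Illustrative sketch: learnable edge weights over a complete DAG of "nodes" (stage outputs).
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    def __init__(self, num_nodes, feat_dim):
        super().__init__()
        # One learnable logit per directed edge j -> i (j < i).
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.transforms = nn.ModuleList(nn.Linear(feat_dim, feat_dim) for _ in range(num_nodes))

    def forward(self, x0):
        nodes = [x0]  # node 0 is the input feature
        for i in range(1, len(self.transforms)):
            gates = torch.sigmoid(self.edge_logits[i, :i])     # edge magnitudes in (0, 1)
            agg = sum(g * h for g, h in zip(gates, nodes))     # weighted sum over incoming edges
            nodes.append(torch.relu(self.transforms[i](agg)))
        return nodes[-1]

# Example: 4 nodes over 16-dim features; edge weights train jointly with the transforms.
net = LearnableConnectivity(num_nodes=4, feat_dim=16)
out = net(torch.randn(2, 16))
```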
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
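A hedged sketch of the general idea, assuming PyTorch: feature maps are convolved with a Gaussian low-pass filter whose strength is annealed over training. The kernel size, the schedule, and the function names are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative sketch: depthwise Gaussian smoothing of CNN feature maps with an annealed sigma.
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    # Separable 2-D Gaussian, normalized to sum to 1.
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)

def smooth_features(feat, sigma, size=5):
    # feat: (N, C, H, W) feature map; returns the low-pass-filtered map.
    if sigma <= 0:
        return feat
    c = feat.shape[1]
    k = gaussian_kernel(sigma, size).to(feat).view(1, 1, size, size).repeat(c, 1, 1, 1)
    return F.conv2d(feat, k, padding=size // 2, groups=c)  # one blur kernel per channel

# Example schedule: heavy smoothing early, none by the end of training.
sigma_at = lambda epoch, total_epochs: 1.0 * max(1 - epoch / total_epochs, 0.0)

feat = torch.randn(2, 8, 32, 32)
blurred = smooth_features(feat, sigma_at(epoch=0, total_epochs=30))
```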
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond [73.03743482037378]
Distributed learning has become a critical research direction for the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research will also be discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)
- Understanding the Limitations of Network Online Learning [5.925292989496618]
We investigate limitations of learning to complete partially observed networks via node querying.
We call this querying process Network Online Learning and present a family of algorithms called NOL*.
arXiv Detail & Related papers (2020-01-09T13:59:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.