Federated Hyperdimensional Computing
- URL: http://arxiv.org/abs/2312.15966v1
- Date: Tue, 26 Dec 2023 09:24:19 GMT
- Title: Federated Hyperdimensional Computing
- Authors: Kazim Ergun, Rishikanth Chandrasekaran, Tajana Rosing
- Abstract summary: Federated learning (FL) enables a loose set of participating clients to collaboratively learn a global model via coordination by a central server.
Existing FL approaches rely on complex algorithms with massive models, such as deep neural networks (DNNs).
We first propose FedHDC, a federated learning framework based on hyperdimensional computing (HDC).
- Score: 14.844383542052169
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables a loose set of participating clients to
collaboratively learn a global model via coordination by a central server and
with no need for data sharing. Existing FL approaches that rely on complex
algorithms with massive models, such as deep neural networks (DNNs), suffer
from computation and communication bottlenecks. In this paper, we first propose
FedHDC, a federated learning framework based on hyperdimensional computing
(HDC). FedHDC allows for fast and light-weight local training on clients,
provides robust learning, and has smaller model communication overhead compared
to learning with DNNs. However, current HDC algorithms achieve poor accuracy when
classifying larger and more complex images, such as CIFAR10. To address this
issue, we design FHDnn, which complements FedHDC with a self-supervised
contrastive learning feature extractor. We avoid the transmission of the DNN
and instead train only the HDC learner in a federated manner, which accelerates
learning, reduces transmission cost, and utilizes the robustness of HDC to
tackle network errors. We present a formal analysis of the algorithm, derive its
convergence rate theoretically, and show experimentally that FHDnn converges
3$\times$ faster than DNNs. The strategies we propose to improve communication
efficiency enable our design to reduce communication costs by 66$\times$ vs. DNNs
and local client compute and energy consumption by ~1.5-6$\times$, while being
highly robust to network errors. Finally, our proposed communication-efficiency
strategies achieve up to 32$\times$ lower communication costs with good accuracy.
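To make the FedHDC idea concrete, here is a minimal sketch (not the authors' released code) of federated HDC classification: each client bundles encoded samples into class hypervectors, and the server averages those hypervectors FedAvg-style. The random-projection encoder, the dimensionality D, and the function names (`make_encoder`, `local_train`, `server_aggregate`, `predict`) are illustrative assumptions; in FHDnn the encoder would additionally include a frozen self-supervised contrastive feature extractor.

```python
import numpy as np

D = 10_000          # hyperdimensional dimensionality (illustrative choice)
NUM_CLASSES = 10
rng = np.random.default_rng(0)

def make_encoder(num_features, seed=42):
    # Shared random-projection encoder mapping a feature vector to a
    # bipolar hypervector (an assumption; FedHDC's exact encoder may differ).
    proj = np.random.default_rng(seed).standard_normal((num_features, D))
    return lambda x: np.sign(x @ proj)

def local_train(encoder, X, y, num_classes=NUM_CLASSES):
    # Local HDC training: bundle (sum) each class's encoded samples
    # into a class hypervector prototype.
    class_hvs = np.zeros((num_classes, D))
    for xi, yi in zip(X, y):
        class_hvs[yi] += encoder(xi)
    return class_hvs

def server_aggregate(client_models, client_sizes):
    # FedAvg-style aggregation: a weighted average of the clients'
    # class-hypervector matrices (far smaller than DNN weights).
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(client_models), axes=1)

def predict(encoder, class_hvs, X):
    # Classify by cosine similarity to each class hypervector.
    H = np.stack([encoder(x) for x in X])
    sims = (H @ class_hvs.T) / (
        np.linalg.norm(H, axis=1, keepdims=True)
        * np.linalg.norm(class_hvs, axis=1) + 1e-9)
    return sims.argmax(axis=1)

# Toy usage: three clients with synthetic 64-dimensional features.
encoder = make_encoder(num_features=64)
clients = [(rng.standard_normal((100, 64)), rng.integers(0, NUM_CLASSES, 100))
           for _ in range(3)]
global_hvs = server_aggregate(
    [local_train(encoder, X, y) for X, y in clients],
    [len(y) for _, y in clients])
print(predict(encoder, global_hvs, clients[0][0])[:10])
```

Because only the NUM_CLASSES x D hypervector matrix crosses the network each round, the payload is fixed and small, which is where the communication savings claimed above come from.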
Related papers
- The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning [6.9569682335746235] (2024-09-19)
Spiking Neural Networks (SNNs) have recently gained significant interest for on-chip learning in embedded devices.
In this paper, we explore the inherent robustness of SNNs under noisy communication in Federated Learning.
We propose a novel Federated Learning with TopK Sparsification algorithm to reduce the bandwidth usage for FL training.
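As a rough illustration of the top-k sparsification idea referenced in this entry (not the paper's algorithm; the value of k, the sparse payload format, and the function names are assumptions), a client could compress its update before upload as follows:

```python
import numpy as np

def topk_sparsify(update, k):
    # Keep only the k largest-magnitude entries of a flattened model update;
    # transmit just (indices, values) instead of the dense tensor.
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_densify(idx, values, shape):
    # Server side: rebuild a dense update from the sparse payload.
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

update = np.random.randn(1000, 100)        # a client's local weight delta
idx, vals = topk_sparsify(update, k=1000)  # upload ~1% of the entries
recovered = topk_densify(idx, vals, update.shape)
```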
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004] (2024-08-26)
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121] (2024-02-28)
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as ConFederated Learning (CFL), in order to accommodate a larger number of users.
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805] (2023-08-31)
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097] (2022-10-10)
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
- TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels [141.29156234353133] (2022-07-13)
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to challenges presented by nonconvexity.
We propose a Train-Convexify-Train (TCT) procedure to sidestep this issue.
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission [7.6058140480517356] (2022-05-13)
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of a quantization and an intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
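A minimal sketch of the two mechanisms named here, stochastic quantization plus intermittent (periodic) uploads, is given below; it is an assumption-based illustration rather than OFedQIT itself, and the QSGD-style quantizer, the `period`, and the variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_LEVELS = 4       # quantization levels per entry (illustrative)

def stochastic_quantize(v, num_levels=NUM_LEVELS):
    # Unbiased stochastic quantization: each entry is rounded up or down
    # at random so only a scale and small integer codes need to be sent.
    scale = np.max(np.abs(v)) + 1e-12
    ratio = np.abs(v) / scale * num_levels
    lower = np.floor(ratio)
    codes = lower + (rng.random(v.shape) < (ratio - lower))
    return scale, np.sign(v) * codes

def dequantize(scale, codes, num_levels=NUM_LEVELS):
    return scale * codes / num_levels

def should_transmit(round_idx, period=5):
    # Intermittent transmission: upload only every `period` rounds.
    return round_idx % period == 0

# Toy online loop: the client accumulates local updates and only
# occasionally ships a quantized version to the server.
accumulated = np.zeros(1000)
for t in range(20):
    accumulated += 0.01 * rng.standard_normal(1000)   # local streaming update
    if should_transmit(t):
        scale, codes = stochastic_quantize(accumulated)
        server_update = dequantize(scale, codes)       # what the server applies
        accumulated[:] = 0                             # reset after upload
```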
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning [3.5394650810262336] (2021-11-29)
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices.
Maintaining data privacy in federated learning raises several challenges including data heterogeneity, expensive communication cost, and limited resources.
We propose a salient parameter selection agent based on deep reinforcement learning on local clients and aggregate the selected salient parameters on the central server.
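To illustrate the "aggregate only selected salient parameters" idea, the sketch below is an assumption-based stand-in, not SPATL's implementation: a simple magnitude score replaces the paper's deep-RL selection agent, and all names are hypothetical.

```python
import numpy as np

def select_salient(local, global_params, frac=0.2):
    # Pick the indices of the most salient parameters. In SPATL this choice
    # is made by a deep-RL agent; here magnitude of change is a stand-in.
    k = max(1, int(frac * local.size))
    return np.argsort(np.abs(local - global_params))[-k:]

def aggregate_salient(global_params, client_payloads):
    # Average only the entries each client actually uploaded; entries no
    # client touched keep their current global values.
    sums = np.zeros_like(global_params)
    counts = np.zeros_like(global_params)
    for idx, values in client_payloads:
        sums[idx] += values
        counts[idx] += 1
    updated = global_params.copy()
    mask = counts > 0
    updated[mask] = sums[mask] / counts[mask]
    return updated

# Toy usage: two clients each upload 20% of a 1,000-parameter model.
rng = np.random.default_rng(1)
global_params = rng.standard_normal(1000)
payloads = []
for _ in range(2):
    local = global_params + 0.1 * rng.standard_normal(1000)
    idx = select_salient(local, global_params)
    payloads.append((idx, local[idx]))
global_params = aggregate_salient(global_params, payloads)
```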
- ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training [65.68511423300812] (2021-10-11)
We propose ProgFed, a progressive training framework for efficient and effective federated learning.
ProgFed inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models.
Our results show that ProgFed converges at the same rate as standard training on full models.
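The progressive idea can be sketched roughly as follows: federate only a growing prefix of the model's stages, so early rounds train and communicate far fewer parameters. The stage schedule, the tiny numpy "model", and the function names are illustrative assumptions, not ProgFed's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# A model split into progressive stages (here, four small weight blocks).
stages = [0.01 * rng.standard_normal((32, 32)) for _ in range(4)]

def active_stages(round_idx, total_rounds, num_stages):
    # Progressive schedule: unlock one more stage roughly every
    # total_rounds / num_stages rounds (an illustrative choice).
    return min(num_stages, 1 + round_idx * num_stages // total_rounds)

def fedavg(client_stage_updates):
    # Average clients' updates stage by stage; only the active prefix of
    # the model is ever trained or communicated in a given round.
    return [np.mean(group, axis=0) for group in zip(*client_stage_updates)]

TOTAL_ROUNDS, NUM_CLIENTS = 100, 5
for t in range(TOTAL_ROUNDS):
    n_active = active_stages(t, TOTAL_ROUNDS, len(stages))
    # Each client returns (simulated) updates for the active stages only.
    client_updates = [[0.01 * rng.standard_normal(stages[s].shape)
                       for s in range(n_active)]
                      for _ in range(NUM_CLIENTS)]
    for s, delta in enumerate(fedavg(client_updates)):
        stages[s] += delta
```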
This list is automatically generated from the titles and abstracts of the papers on this site.