Holonic Learning: A Flexible Agent-based Distributed Machine Learning
Framework
- URL: http://arxiv.org/abs/2401.10839v1
- Date: Fri, 29 Dec 2023 12:03:42 GMT
- Title: Holonic Learning: A Flexible Agent-based Distributed Machine Learning
Framework
- Authors: Ahmad Esmaeili, Zahra Ghorrati, Eric T. Matson
- Abstract summary: Holonic Learning (HoL) is a collaborative and privacy-focused learning framework designed for training deep learning models.
By leveraging holonic concepts, the HoL framework establishes a structured self-similar hierarchy in the learning process.
This paper implements HoloAvg, a special variant of HoL that employs weighted averaging for model aggregation across all holons.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ever-increasing ubiquity of data and computational resources over the last
decade has propelled a notable transition in the machine learning paradigm
towards more distributed approaches. This transition seeks not only to tackle
scalability and resource-distribution challenges but also to address
pressing privacy and security concerns. To contribute to the ongoing discourse,
this paper introduces Holonic Learning (HoL), a collaborative and
privacy-focused learning framework designed for training deep learning models.
By leveraging holonic concepts, the HoL framework establishes a structured
self-similar hierarchy in the learning process, enabling more nuanced control
over collaborations through the individual model aggregation approach of each
holon, along with their intra-holon commitment and communication patterns. HoL,
in its general form, provides extensive design and flexibility potentials. For
empirical analysis and to demonstrate its effectiveness, this paper implements
HoloAvg, a special variant of HoL that employs weighted averaging for model
aggregation across all holons. The convergence of the proposed method is
validated through experiments on both IID and Non-IID settings of the standard
MNIST dataset. Furthermore, the performance behaviors of HoL are investigated
under various holarchical designs and data distribution scenarios. The
presented results affirm HoL's ability to deliver competitive performance,
particularly in the context of Non-IID data distributions.
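The abstract describes HoloAvg as weighted averaging for model aggregation across holons, but gives no code. As a rough illustration only, here is a minimal sketch of weighted parameter averaging of that general kind; the function name, data-size weighting, and per-layer array layout are our own assumptions, not the paper's implementation:

```python
import numpy as np

def weighted_average(models, weights):
    """Aggregate model parameters as a weighted average.

    `models` is a list of parameter lists (one np.ndarray per layer);
    `weights` are non-negative aggregation weights, e.g. proportional
    to each member's local sample count.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    # For each layer position, sum the weighted parameter arrays across models.
    return [
        sum(wi * layer for wi, layer in zip(w, layers))
        for layers in zip(*models)
    ]

# Two toy "holon" models, each a single-layer parameter array.
m1 = [np.array([0.0, 2.0])]
m2 = [np.array([4.0, 2.0])]
agg = weighted_average([m1, m2], weights=[1, 3])  # second holon holds 3x the data
print(agg[0])  # [3. 2.]
```

In a holarchy, the same aggregation rule could be applied recursively: each super-holon averages the models of its sub-holons, so the sketch above would run once per level.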
Related papers
- FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning [9.084674176224109]
Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy.
We introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized pFL algorithm that supports model heterogeneity and asynchronous learning.
Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information.
arXiv Detail & Related papers (2024-10-17T22:47:19Z)
- When Swarm Learning meets energy series data: A decentralized collaborative learning design based on blockchain [10.099134773737939]
Machine learning models offer the capability to forecast future energy production or consumption.
However, legal and policy constraints within specific energy sectors present technical hurdles in utilizing data from diverse sources.
We propose adopting a Swarm Learning scheme, which replaces the centralized server with a blockchain-based distributed network.
arXiv Detail & Related papers (2024-06-07T08:42:26Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Personalized Federated Learning with Contextual Modulation and Meta-Learning [2.7716102039510564]
Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources.
We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities.
arXiv Detail & Related papers (2023-12-23T08:18:22Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Efficient Cluster Selection for Personalized Federated Learning: A Multi-Armed Bandit Approach [2.5477011559292175]
Federated learning (FL) offers a decentralized training approach for machine learning models, prioritizing data privacy.
In this paper, we introduce a dynamic Upper Confidence Bound (dUCB) algorithm inspired by the multi-armed bandit (MAB) approach.
arXiv Detail & Related papers (2023-10-29T16:46:50Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework and can be easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
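The cluster-selection entry above says its dUCB algorithm is inspired by the multi-armed bandit approach; the classic baseline in that family is the UCB1 rule. A minimal UCB1 sketch for intuition only (the paper's dynamic variant differs, and this function and its parameters are illustrative assumptions):

```python
import math

def ucb1_select(counts, values, t, c=2.0):
    """Pick the arm (e.g. a cluster) maximizing mean reward + exploration bonus.

    counts[i]: times arm i was chosen; values[i]: its mean reward so far;
    t: total rounds played. Unplayed arms are tried first.
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i  # explore every arm at least once
    scores = [
        v + math.sqrt(c * math.log(t) / n)  # exploitation + exploration terms
        for v, n in zip(values, counts)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Arm 1 has the best observed mean, but under-explored arm 2 gets a large bonus.
print(ucb1_select([10, 10, 1], [0.4, 0.6, 0.5], t=21))  # → 2
```

The exploration bonus shrinks as an arm's count grows, so repeatedly chosen clusters are eventually selected on their observed reward alone.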
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.