Combining Federated and Active Learning for Communication-efficient
Distributed Failure Prediction in Aeronautics
- URL: http://arxiv.org/abs/2001.07504v1
- Date: Tue, 21 Jan 2020 13:17:00 GMT
- Title: Combining Federated and Active Learning for Communication-efficient
Distributed Failure Prediction in Aeronautics
- Authors: Nicolas Aussel (INF, ACMES-SAMOVAR, IP Paris), Sophie Chabridon (IP
Paris, INF, ACMES-SAMOVAR), Yohan Petetin (TIPIC-SAMOVAR, CITI, IP Paris)
- Abstract summary: We present a new centralized distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning.
We evaluate this method on a public benchmark and show that its precision is very close to the state-of-the-art level of non-distributed learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning has proven useful in recent years as a way to achieve
failure prediction for industrial systems. However, the high computational
resources necessary to run learning algorithms are an obstacle to its
widespread application. The sub-field of Distributed Learning offers a solution
to this problem by enabling the use of remote resources but at the expense of
introducing communication costs in the application that are not always
acceptable. In this paper, we propose a distributed learning approach able to
optimize the use of computational and communication resources to achieve
excellent learning-model performance through a centralized architecture. To
achieve this, we present a new centralized distributed learning algorithm that
relies on the learning paradigms of Active Learning and Federated Learning to
provide a communication-efficient method with guarantees of model
precision on both the clients and the central server. We evaluate this method
on a public benchmark and show that its precision is very close to the
state-of-the-art level of non-distributed learning despite the additional
constraints.
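The abstract describes the combination only at a high level. As a rough illustration (not the authors' implementation), the following minimal Python sketch assumes uncertainty-based active learning on each client, i.e. labeling only the samples whose predicted failure probability is closest to 0.5, and plain federated averaging (FedAvg) on the server; the logistic model, the labeling budget, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_train(w, X, y, lr=0.1, epochs=50):
    # Plain gradient descent on the logistic loss, warm-started from the global model.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def select_uncertain(w, X, budget):
    # Active-learning query: pick the samples whose prediction is closest to 0.5.
    margin = np.abs(sigmoid(X @ w) - 0.5)
    return np.argsort(margin)[:budget]

# Synthetic "failure prediction" data sharded across 4 clients.
d, n_clients, budget = 5, 4, 10
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(200, d))
    y = (X @ w_true > 0).astype(float)  # oracle labels, queried only when selected
    clients.append((X, y))

w_global = np.zeros(d)
for _ in range(10):  # communication rounds
    local_models, sizes = [], []
    for X, y in clients:
        idx = select_uncertain(w_global, X, budget)   # label only `budget` samples
        local_models.append(local_train(w_global, X[idx], y[idx]))
        sizes.append(len(idx))
    # FedAvg: size-weighted average of client models; only weights cross the network.
    w_global = np.average(local_models, axis=0, weights=sizes)

accuracy = np.mean([(X @ w_global > 0) == y for X, y in clients])
print(f"mean accuracy after 10 rounds: {accuracy:.3f}")

In this reading, only model weights cross the network, and each client spends labeling effort on just a few samples per round: active learning limits costly supervision while federated averaging limits communication, which is the trade-off the paper targets.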
Related papers
- Local Methods with Adaptivity via Scaling [38.99428012275441]
This paper aims to merge the local training technique with the adaptive approach to develop efficient distributed learning methods.
We consider the classical Local SGD method and enhance it with a scaling feature (a minimal sketch of the classical Local SGD baseline follows this list).
In addition to theoretical analysis, we validate the performance of our methods in practice by training a neural network.
arXiv Detail & Related papers (2024-06-02T19:50:05Z)
- Semi-Supervised Learning Approach for Efficient Resource Allocation with Network Slicing in O-RAN [5.1435595246496595]
This paper introduces an innovative approach to the resource allocation problem.
It aims to coordinate multiple independent x-applications (xApps) for network slicing and resource allocation in the Open Radio Access Network (O-RAN).
arXiv Detail & Related papers (2024-01-16T22:23:27Z)
- Fast Context Adaptation in Cost-Aware Continual Learning [10.515324071327903]
5G and Beyond networks require more complex learning agents and the learning process itself might end up competing with users for communication and computational resources.
This creates friction: on the one hand, the learning process needs resources to converge quickly to an effective strategy; on the other hand, it must consume as few resources as possible from the users' data plane so as not to throttle them.
In this paper, we propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning.
arXiv Detail & Related papers (2023-06-06T17:46:48Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning (a generic nonlinear-quantization sketch follows this list).
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model-training mechanism to build a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce the communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
arXiv Detail & Related papers (2020-08-24T13:23:02Z)
- Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning [106.51930957941433]
In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model training approach to the Edge Computing based collaborative learning scheme known as edge learning.
arXiv Detail & Related papers (2020-05-31T12:45:06Z)
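For the "Local Methods with Adaptivity via Scaling" entry, the sketch promised above shows only the classical Local SGD baseline the paper starts from: each worker takes K cheap local gradient steps between synchronizations, and the models are averaged once per round. The paper's adaptive scaling feature is omitted, and all constants are illustrative.

import numpy as np

rng = np.random.default_rng(1)
d, n_workers, K, lr = 5, 4, 8, 0.1
w_true = rng.normal(size=d)
# Each worker holds its own shard of a least-squares problem.
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(100, d))
    shards.append((X, X @ w_true + 0.01 * rng.normal(size=100)))

w = np.zeros(d)
for _ in range(20):                        # communication rounds
    local_models = []
    for X, y in shards:
        w_i = w.copy()
        for _ in range(K):                 # K local steps, no communication
            w_i -= lr * X.T @ (X @ w_i - y) / len(y)
        local_models.append(w_i)
    w = np.mean(local_models, axis=0)      # one synchronization per round

print("distance to the generating weights:", np.linalg.norm(w - w_true))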
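The CosSGD entry refers to a nonlinear quantizer for gradients; its exact scheme is not reproduced here. The following sketch instead uses a generic mu-law-style logarithmic quantizer, an assumption chosen only to show the principle: spend resolution where gradient entries concentrate (near zero) and transmit a few bits per coordinate plus one scale.

import numpy as np

def quantize(g, bits=4, mu=255.0):
    # Companding quantizer: fine resolution near zero, coarse at the tails.
    scale = np.max(np.abs(g)) + 1e-12
    x = g / scale                                   # normalize to [-1, 1]
    comp = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1
    q = np.round(comp * levels).astype(np.int8)     # what crosses the wire
    return q, scale

def dequantize(q, scale, bits=4, mu=255.0):
    levels = 2 ** (bits - 1) - 1
    comp = q.astype(np.float64) / levels
    x = np.sign(comp) * np.expm1(np.abs(comp) * np.log1p(mu)) / mu
    return x * scale

rng = np.random.default_rng(2)
g = rng.normal(scale=0.01, size=1000)               # a synthetic gradient
q, s = quantize(g)
g_hat = dequantize(q, s)
print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
# int8 codes unpacked here; packing the 4-bit codes gives the full ratio.
print("bytes sent:", q.nbytes + 8, "vs", g.nbytes)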
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.