IBM Federated Learning: an Enterprise Framework White Paper V0.1
- URL: http://arxiv.org/abs/2007.10987v1
- Date: Wed, 22 Jul 2020 05:32:00 GMT
- Title: IBM Federated Learning: an Enterprise Framework White Paper V0.1
- Authors: Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar,
Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu
Sinn, Mark Purcell, Ambrish Rawat, Tran Minh, Naoise Holohan, Supriyo
Chakraborty, Shalisha Whitherspoon, Dean Steuer, Laura Wynter, Hifaz Hassan,
Sean Laguna, Mikhail Yurochkin, Mayank Agarwal, Ebube Chuba, Annie Abay
- Abstract summary: Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place.
The framework applies to both Deep Neural Networks and "traditional" approaches for the most common machine learning libraries.
- Score: 28.21579297214125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an approach to conduct machine learning without
centralizing training data in a single place, for reasons of privacy,
confidentiality or data volume. However, solving federated machine learning
problems raises issues above and beyond those of centralized machine learning.
These issues include setting up communication infrastructure between parties,
coordinating the learning process, integrating party results, understanding the
characteristics of the training data sets of different participating parties,
handling data heterogeneity, and operating with the absence of a verification
data set.
instructions on how to run the federation. The framework applies to both Deep
Neural Networks as well as ``traditional'' approaches for the most common
machine learning libraries. {\proj} enables data scientists to expand their
scope from centralized to federated machine learning, minimizing the learning
curve at the outset while also providing the flexibility to deploy to different
compute environments and design custom fusion algorithms.
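The "fusion" step the abstract mentions is the point where party results are integrated into one model. A minimal sketch of a FedAvg-style fusion, weighting each party's update by its sample count, is shown below; the function name and data layout are illustrative assumptions, not IBM Federated Learning's actual API.

```python
def fuse_weighted_average(party_updates):
    """Fuse per-party model weights by a sample-count-weighted average.

    party_updates: list of (weights, num_samples) tuples, where weights is
    a flat list of floats reported by one party after local training.
    This mirrors FedAvg-style fusion; a custom fusion algorithm would
    replace the weighting scheme below.
    """
    total = sum(n for _, n in party_updates)
    dim = len(party_updates[0][0])
    fused = [0.0] * dim
    for weights, n in party_updates:
        # Each party contributes in proportion to its local data volume.
        for i, w in enumerate(weights):
            fused[i] += w * (n / total)
    return fused

# Example: two parties, the second holding three times as much data.
fused = fuse_weighted_average([([0.0, 4.0], 100), ([4.0, 0.0], 300)])
```

In a real deployment this runs on the aggregator after each round; parties never ship their training data, only the locally computed weights.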
Related papers
- FedSR: A Semi-Decentralized Federated Learning Algorithm for Non-IIDness in IoT System [2.040586739710704]
In the Industrial Internet of Things (IoT), a large amount of data will be generated every day.
Due to privacy and security issues, it is difficult to collect all these data together to train deep learning models.
In this paper, we combine centralized federated learning with decentralized federated learning to design a semi-decentralized cloud-edge-device hierarchical federated learning framework.
arXiv Detail & Related papers (2024-03-19T09:34:01Z)
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
- Towards Privacy-Aware Causal Structure Learning in Federated Setting [27.5652887311069]
We study a privacy-aware causal structure learning problem in the federated setting.
We propose a novel Federated PC (FedPC) algorithm with two new strategies for preserving data privacy without centralizing data.
arXiv Detail & Related papers (2022-11-13T14:54:42Z)
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an ever-critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
arXiv Detail & Related papers (2022-11-10T17:36:58Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Efficient Fully Distributed Federated Learning with Adaptive Local Links [1.8416014644193066]
We propose a fully distributed, diffusion-based learning algorithm that does not require a central server.
By adopting a classification task on the MNIST dataset, the efficacy of the proposed algorithm is demonstrated.
arXiv Detail & Related papers (2022-03-23T09:03:54Z)
- DQRE-SCnet: A novel hybrid approach for selecting users in Federated Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering [1.174402845822043]
Machine learning models based on sensitive real-world data promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, and defense science.
In many applications, participants benefit from collecting their own private data sets, training detailed machine learning models on the real data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated Learning allows various parties to jointly train a machine learning algorithm without any user revealing their local data to a central server.
arXiv Detail & Related papers (2021-11-07T15:14:29Z)
- From Distributed Machine Learning to Federated Learning: A Survey [49.7569746460225]
Federated learning emerges as an efficient approach to exploit distributed data and computing resources.
We propose a functional architecture of federated learning systems and a taxonomy of related techniques.
We present the distributed training, data communication, and security of FL systems.
arXiv Detail & Related papers (2021-04-29T14:15:11Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.