StatMix: Data augmentation method that relies on image statistics in federated learning
- URL: http://arxiv.org/abs/2207.04103v1
- Date: Fri, 8 Jul 2022 19:02:41 GMT
- Title: StatMix: Data augmentation method that relies on image statistics in federated learning
- Authors: Dominik Lewy, Jacek Mańdziuk, Maria Ganzha, Marcin Paprzycki
- Abstract summary: StatMix is an augmentation approach that uses image statistics to improve results in FL scenarios. In all FL experiments, applying StatMix improves the average accuracy compared to baseline training without StatMix.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Availability of large amounts of annotated data is one of the pillars of deep learning's success. Although numerous big datasets have been made available for research, this is often not the case in real-life applications (e.g. companies are unable to share data due to GDPR or concerns related to intellectual property rights protection). Federated learning (FL) is a potential solution to this problem, as it enables training a global model on data scattered across multiple nodes without sharing the local data itself. However, even FL methods pose a threat to data privacy if not handled properly. Therefore, we propose StatMix, an augmentation approach that uses image statistics to improve the results of FL scenarios. StatMix is empirically tested on CIFAR-10 and CIFAR-100, using two neural network architectures. In all FL experiments, applying StatMix improves the average accuracy compared to baseline training without StatMix. Some improvement can also be observed in non-FL setups.
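The abstract does not spell out the mechanism, but statistics-based mixing is commonly implemented with per-channel means and standard deviations, AdaIN-style. The sketch below is an illustrative reading, not the paper's exact algorithm: each node shares only the (mean, std) statistics of its images, and a local image is augmented by re-applying statistics from another (possibly remote) image.

```python
import numpy as np

def channel_stats(img):
    # img: float array of shape (H, W, C); per-channel mean and std
    mean = img.mean(axis=(0, 1))
    std = img.std(axis=(0, 1)) + 1e-6  # guard against zero variance
    return mean, std

def statmix_augment(img, foreign_mean, foreign_std):
    # Normalize the image with its own statistics, then re-apply
    # statistics shared by another image (possibly from another node).
    mean, std = channel_stats(img)
    return (img - mean) / std * foreign_std + foreign_mean

# Only (mean, std) pairs cross node boundaries, never raw images.
local_img = np.random.rand(32, 32, 3).astype(np.float32)
remote_mean = np.array([0.5, 0.4, 0.3])
remote_std = np.array([0.20, 0.20, 0.25])
augmented = statmix_augment(local_img, remote_mean, remote_std)
```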
Related papers
- StatAvg: Mitigating Data Heterogeneity in Federated Learning for Intrusion Detection Systems
Federated learning (FL) is a decentralized learning technique that enables devices to collaboratively build a shared Machine Learning (ML) or Deep Learning (DL) model without revealing their raw data to a third party.
Due to its privacy-preserving nature, FL has sparked widespread attention for building Intrusion Detection Systems (IDS) within the realm of cybersecurity.
We propose an effective method, Statistical Averaging (StatAvg), to alleviate non-independently and identically distributed (non-iid) features across local clients' data in FL.
arXiv Detail & Related papers (2024-05-20T14:41:59Z)
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning
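The entry names the method but not its mechanics. One plausible reading, sketched below purely as an assumption, is that each client shares per-feature means and variances, the server merges them weighted by sample count (via the law of total variance), and clients then z-score their features with the resulting global statistics.

```python
import numpy as np

def local_stats(X):
    # X: (n_samples, n_features) local client data
    return X.shape[0], X.mean(axis=0), X.var(axis=0)

def aggregate_stats(client_stats):
    # Server side (hypothetical): merge per-client statistics into
    # global ones, weighted by each client's sample count.
    total = sum(n for n, _, _ in client_stats)
    g_mean = sum(n * m for n, m, _ in client_stats) / total
    # Law of total variance: within-client variance plus between-client spread.
    g_var = sum(n * (v + (m - g_mean) ** 2) for n, m, v in client_stats) / total
    return g_mean, g_var

def normalize(X, g_mean, g_var):
    # Clients normalize with the shared global statistics,
    # putting features on a common scale despite non-iid data.
    return (X - g_mean) / np.sqrt(g_var + 1e-8)
```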
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by remote devices which own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- The Applicability of Federated Learning to Official Statistics
This work investigates the potential of Federated Learning for official statistics.
It shows how well the performance of FL models can keep up with centralized learning methods.
arXiv Detail & Related papers (2023-07-28T11:58:26Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Multi-Center Federated Learning
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
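The summary names only a multi-center aggregation mechanism. A common realization, sketched below as an assumption rather than the paper's exact algorithm, keeps k global models and alternates k-means-style between assigning each client to its nearest center and re-averaging each center over its assigned clients.

```python
import numpy as np

def multi_center_aggregate(client_weights, k, iters=10, seed=0):
    # client_weights: (n_clients, dim) array of flattened model parameters.
    rng = np.random.default_rng(seed)
    centers = client_weights[rng.choice(len(client_weights), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each client to the nearest global model (center).
        dists = np.linalg.norm(client_weights[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Re-estimate each center as the mean of its assigned clients.
        for j in range(k):
            members = client_weights[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers, assign
```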
- Federated Mixture of Experts
FedMix is a framework that allows us to train an ensemble of specialized models.
We show that users with similar data characteristics select the same members and therefore share statistical strength.
arXiv Detail & Related papers (2021-07-14T14:15:24Z)
- Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models
Federated learning (FL) is a promising way to use the computing power of mobile devices while maintaining privacy of users.
We show that a critical issue that affects the test accuracy is the large gradient diversity of the models from different users.
We propose a novel grouping-based model averaging method to replace the FedAvg averaging method.
arXiv Detail & Related papers (2020-08-26T03:36:07Z)
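For context, the standard FedAvg step that this paper proposes to replace is a weighted average of client models, with weights proportional to local dataset sizes; a minimal sketch (parameter vectors flattened for simplicity):

```python
import numpy as np

def fedavg(client_models, client_sizes):
    # client_models: list of 1-D parameter vectors, one per client
    # client_sizes: number of local training samples per client
    weights = np.asarray(client_sizes, dtype=np.float64)
    weights /= weights.sum()
    # Weighted average of the stacked client models.
    return (weights[:, None] * np.stack(client_models)).sum(axis=0)

# Example: three clients with unequal amounts of data.
models = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
global_model = fedavg(models, client_sizes=[100, 300, 600])
```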
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.