Federated Learning for Predictive Maintenance and Quality Inspection in
Industrial Applications
- URL: http://arxiv.org/abs/2304.11101v1
- Date: Fri, 21 Apr 2023 16:11:09 GMT
- Authors: Viktorija Pruckovskaja, Axel Weissenfeld, Clemens Heistracher, Anita
Graser, Julia Kafka, Peter Leputsch, Daniel Schall, Jana Kemnitz
- Abstract summary: Federated learning (FL) enables multiple participants to develop a machine learning model without compromising privacy and confidentiality of their data.
We evaluate the performance of different FL aggregation methods and compare them to central and local training approaches.
We introduce a new federated learning dataset from a real-world quality inspection setting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven machine learning is playing a crucial role in the advancements of
Industry 4.0, specifically in enhancing predictive maintenance and quality
inspection. Federated learning (FL) enables multiple participants to develop a
machine learning model without compromising the privacy and confidentiality of
their data. In this paper, we evaluate the performance of different FL
aggregation methods and compare them to central and local training approaches.
Our study is based on four datasets with varying data distributions. The
results indicate that the performance of FL is highly dependent on the data and
its distribution among clients. In some scenarios, FL can be an effective
alternative to traditional central or local training methods. Additionally, we
introduce a new federated learning dataset from a real-world quality inspection
setting.
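The abstract compares several FL aggregation methods against central and local training. The canonical baseline for such comparisons is FedAvg (McMahan et al.), which averages client model parameters weighted by local dataset size. A minimal sketch of that weighted averaging, with illustrative client data (the client values below are examples, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameters weighted by data size.

    client_weights: one list of per-layer arrays per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Each client contributes proportionally to its share of the data.
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two clients with a single two-parameter "layer"; client 0 holds 3x the data.
clients = [[np.array([1.0, 1.0])], [np.array([5.0, 9.0])]]
sizes = [30, 10]
print(fedavg(clients, sizes)[0])  # [2. 3.]
```

Skewed client sizes and non-IID local distributions are exactly where this simple weighting can struggle, which is the scenario the paper's comparison of aggregation methods targets.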
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? (arXiv, 2024-09-05)
  Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing. While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
- Enhancing Data Quality in Federated Fine-Tuning of Foundation Models (arXiv, 2024-03-07)
  We propose a data quality control pipeline for federated fine-tuning of foundation models. The pipeline computes scores reflecting the quality of training data and determines a global threshold for a unified standard. Our experiments show that the proposed quality control pipeline improves the effectiveness and reliability of model training, leading to better performance.
- A Survey on Efficient Federated Learning Methods for Foundation Model Training (arXiv, 2024-01-09)
  Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients. In the wake of Foundation Models (FM), the reality is different for many deep learning applications. We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
- Data Valuation and Detections in Federated Learning (arXiv, 2023-11-09)
  Federated Learning (FL) enables collaborative model training while preserving the privacy of raw data. A challenge in this framework is the fair and efficient valuation of data, which is crucial for incentivizing clients to contribute high-quality data to the FL task. This paper introduces a novel privacy-preserving method for evaluating client contributions and selecting relevant datasets without a pre-specified training algorithm in an FL task.
- Federated Multilingual Models for Medical Transcript Analysis (arXiv, 2022-11-04)
  We present a federated learning system for training a large-scale multilingual model. None of the training data is ever transmitted to any central location. We show that the global model performance can be further improved by a training step performed locally.
- Federated Learning and Meta Learning: Approaches, Applications, and Directions (arXiv, 2022-10-24)
  In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta). Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and how they can be applied over wireless networks.
- Online Data Selection for Federated Learning with Limited Storage (arXiv, 2022-09-01)
  Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices. The impact of on-device storage on the performance of FL remains largely unexplored. In this work, we take the first step toward online data selection for FL with limited on-device storage.
- On the Importance and Applicability of Pre-Training for Federated Learning (arXiv, 2022-06-23)
  We conduct a systematic study of pre-training for federated learning. We find that pre-training can not only improve FL but also close its accuracy gap to centralized learning. We conclude the paper with an attempt to understand the effect of pre-training on FL.
- Improving Accuracy of Federated Learning in Non-IID Settings (arXiv, 2020-10-14)
  Federated Learning (FL) is a decentralized machine learning protocol that allows a set of participating agents to collaboratively train a model without sharing their data. It has been observed that the performance of FL is closely tied to the local data distributions of the agents. In this work, we identify four simple techniques that can improve the performance of trained models without incurring any additional communication overhead in FL.
- A Principled Approach to Data Valuation for Federated Learning (arXiv, 2020-09-14)
  Federated learning (FL) is a popular technique for training machine learning (ML) models on decentralized data sources. The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a notion of data value. This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
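The last entry builds on the Shapley value for client data valuation. As background, the classic (non-federated) Shapley value over client coalitions can be sketched as below; the toy utility function and client names are illustrative, not from the paper, and exact computation like this scales exponentially in the number of clients (which is what federated variants aim to avoid):

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley value of each client's contribution to a utility score.

    clients: list of client ids
    utility: function mapping a frozenset of clients to a model-quality score
    """
    n = len(clients)
    values = {}
    for c in clients:
        others = [x for x in clients if x != c]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Standard Shapley weight: k!(n-k-1)!/n! for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of client c to coalition s.
                total += weight * (utility(s | {c}) - utility(s))
        values[c] = total
    return values

# Toy utility: model quality grows with the total data contributed.
data = {"a": 100, "b": 50, "c": 50}
acc = lambda s: sum(data[x] for x in s) / 200
print(shapley_values(list(data), acc))
```

Because the toy utility is additive, each client's Shapley value equals its own data share (a: 0.5, b: 0.25, c: 0.25); with a real trained-model utility, the values also capture interaction effects between clients' datasets.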
This list is automatically generated from the titles and abstracts of the papers in this site.