Federated Ensemble YOLOv5 -- A Better Generalized Object Detection
Algorithm
- URL: http://arxiv.org/abs/2306.17829v2
- Date: Fri, 25 Aug 2023 17:08:34 GMT
- Authors: Vinit Hegiste, Tatjana Legler and Martin Ruskowski
- Abstract summary: Federated learning (FL) has gained significant traction as a privacy-preserving algorithm.
This paper examines the application of FL to object detection as a method to enhance generalizability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) has gained significant traction as a
privacy-preserving algorithm, but the underlying resemblance of federated
learning algorithms such as Federated Averaging (FedAvg) or Federated SGD
(FedSGD) to ensemble learning algorithms has not been fully explored. The purpose
of this paper is to examine the application of FL to object detection as a
method to enhance generalizability, and to compare its performance against a
centralized training approach for an object detection algorithm. Specifically,
we investigate the performance of a YOLOv5 model trained using FL across
multiple clients and employ a random sampling strategy without replacement, so
each client holds a portion of the same dataset used for centralized training.
Our experimental results demonstrate the superior performance of the FL object
detector's global model in generating accurate bounding boxes for unseen
objects, with the test set being a mixture of objects from two distinct clients
not represented in the training dataset. These findings suggest that FL can be
viewed from an ensemble algorithm perspective, akin to a synergistic blend of
Bagging and Boosting techniques. As a result, FL can be seen not only as a
method to enhance privacy, but also as a method to enhance the performance of a
machine learning model.
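The setup the abstract describes can be sketched in a few lines: the dataset is split across clients by sampling without replacement (so the clients jointly hold the same data used for centralized training), and the server aggregates client weights with FedAvg. The sketch below is illustrative only, using flat Python lists of parameters in place of actual YOLOv5 checkpoints; the function names are ours, not the paper's.

```python
import random

def partition_without_replacement(indices, num_clients, seed=0):
    # Shuffle once, then deal samples round-robin: every sample lands on
    # exactly one client, so the union of the client shards equals the
    # full dataset used for centralized training.
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    return [shuffled[i::num_clients] for i in range(num_clients)]

def fedavg(client_weights, client_sizes):
    # FedAvg aggregation: each global parameter is the data-size-weighted
    # mean of the corresponding client parameters.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]
```

Viewed this way, each client resembles a bagging-style base learner trained on a disjoint subsample, with the weighted average playing the role of the ensemble combiner.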
Related papers
- Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study [1.9265466185360185]
Federated Learning (FL) emerged as a practical approach to training a model from decentralized data.
To bridge this gap, we conduct extensive performance evaluation on several canonical FL algorithms.
Our comprehensive measurement study reveals that no single algorithm works best across different performance metrics.
arXiv Detail & Related papers (2024-03-26T00:33:49Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Efficient Cluster Selection for Personalized Federated Learning: A Multi-Armed Bandit Approach [2.5477011559292175]
Federated learning (FL) offers a decentralized training approach for machine learning models, prioritizing data privacy.
In this paper, we introduce a dynamic Upper Confidence Bound (dUCB) algorithm inspired by the multi-armed bandit (MAB) approach.
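The dUCB rule itself is specific to that paper; as a rough point of reference, the classic UCB1 selection it builds on can be sketched as follows (the function name and the exploration constant `c` are our own illustrative choices):

```python
import math

def ucb_select(counts, rewards, t, c=2.0):
    # UCB1: pick the arm (here, a cluster) maximizing empirical mean
    # reward plus an exploration bonus that shrinks with play count.
    # Unplayed arms are tried first. The paper's dUCB adds dynamics
    # that this sketch omits.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    scores = [
        rewards[i] / counts[i] + math.sqrt(c * math.log(t) / counts[i])
        for i in range(len(counts))
    ]
    return max(range(len(counts)), key=scores.__getitem__)
```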
arXiv Detail & Related papers (2023-10-29T16:46:50Z)
- Federated Object Detection for Quality Inspection in Shared Production [0.0]
Federated learning (FL) has emerged as a promising approach for training machine learning models on decentralized data without compromising data privacy.
We propose a FL algorithm for object detection in quality inspection tasks using YOLOv5 as the object detection algorithm and Federated Averaging (FedAvg) as the FL algorithm.
arXiv Detail & Related papers (2023-06-30T13:33:27Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel FRL algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy by solely using the ensemble model without interacting with the environment.
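The ensemble step described above can be sketched minimally: predictions from the per-client dynamics models are averaged, and the policy is trained against this average rather than the real environment. The function name and flat state representation below are illustrative, not from the paper:

```python
def ensemble_predict(models, state, action):
    # Average next-state predictions from the per-client dynamics models;
    # a policy can then be trained against this ensemble prediction
    # without interacting with the real environment.
    preds = [m(state, action) for m in models]
    dim = len(preds[0])
    return [sum(p[j] for p in preds) / len(preds) for j in range(dim)]
```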
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization [53.99850033746663]
We study the problem of learning localization model on target classes with weakly supervised image labels.
In this work, we argue that learning only an objectness function is a weak form of knowledge transfer.
Experiments on the COCO and ILSVRC 2013 detection datasets show that the performance of the localization model improves significantly with the inclusion of pairwise similarity function.
arXiv Detail & Related papers (2020-03-18T17:53:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.