No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data
- URL: http://arxiv.org/abs/2106.05001v1
- Date: Wed, 9 Jun 2021 12:02:29 GMT
- Title: No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data
- Authors: Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, Jiashi Feng
- Abstract summary: A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
- Score: 78.69828864672978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central challenge in training classification models in the real-world
federated system is learning with non-IID data. To cope with this, most of the
existing works involve enforcing regularization in local optimization or
improving the model aggregation scheme at the server. Other works also share
public datasets or synthesized samples to supplement the training of
under-represented classes or introduce a certain level of personalization.
Though effective, they lack a deep understanding of how the data heterogeneity
affects each layer of a deep classification model. In this paper, we bridge
this gap by performing an experimental analysis of the representations learned
by different layers. Our observations are surprising: (1) there exists a
greater bias in the classifier than in other layers, and (2) the classification
performance can be significantly improved by post-calibrating the classifier
after federated training. Motivated by the above findings, we propose a novel
and simple algorithm called Classifier Calibration with Virtual Representations
(CCVR), which adjusts the classifier using virtual representations sampled from
an approximated Gaussian mixture model. Experimental results demonstrate that
CCVR achieves state-of-the-art performance on popular federated learning
benchmarks including CIFAR-10, CIFAR-100, and CINIC-10. We hope that our simple
yet effective method can shed some light on the future research of federated
learning with non-IID data.
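As a concrete illustration of the calibration step the abstract describes, here is a minimal NumPy sketch: fit a per-class Gaussian to the penultimate-layer features, sample virtual representations from the resulting mixture, and re-fit only the linear classifier. The diagonal-covariance assumption, sample counts, and training loop are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def ccvr_calibrate(feats_by_class, n_virtual=500, lr=0.1, epochs=100):
    """feats_by_class: dict class_id -> (n_i, d) array of penultimate features."""
    classes = sorted(feats_by_class)
    d = next(iter(feats_by_class.values())).shape[1]
    # 1) Approximate each class's feature distribution by a Gaussian
    #    (diagonal covariance here, for simplicity).
    stats = [(feats_by_class[c].mean(0), feats_by_class[c].std(0) + 1e-6)
             for c in classes]
    # 2) Sample virtual representations from the resulting mixture.
    X = np.vstack([np.random.normal(mu, sigma, size=(n_virtual, d))
                   for mu, sigma in stats])
    y = np.repeat(np.arange(len(classes)), n_virtual)
    # 3) Re-fit only the linear classifier on the virtual features.
    W, b = np.zeros((len(classes), d)), np.zeros(len(classes))
    for _ in range(epochs):
        logits = X @ W.T + b
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0          # d(cross-entropy)/d(logits)
        W -= lr * (p.T @ X) / len(y)
        b -= lr * p.sum(0) / len(y)
    return W, b
```

Because only class-wise feature statistics are needed, such a step can run at the server after federated training without touching raw client data.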
Related papers
- FedUV: Uniformity and Variance for Heterogeneous Federated Learning [5.9330433627374815]
Federated learning is a promising framework to train neural networks with widely distributed data.
Recent work has shown that performance degradation under heterogeneous (non-IID) data is largely due to the final layer of the network being most prone to local bias.
We investigate the training dynamics of the classifier by applying SVD to its weights, motivated by the observation that freezing the weights results in constant singular values.
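A minimal sketch of that diagnostic, assuming access to the final-layer weight matrix; tracking the spectrum across communication rounds is an illustrative choice, not FedUV's full method.

```python
import numpy as np

def classifier_singular_values(W):
    """W: (num_classes, feature_dim) final-layer weight matrix."""
    return np.linalg.svd(W, compute_uv=False)

# Logging this spectrum each round shows whether training drifts toward the
# constant-singular-value regime observed for frozen classifiers.
```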
arXiv Detail & Related papers (2024-02-27T15:53:15Z) - Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates local models as the base of the global model.
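A minimal PyTorch sketch of the concatenation idea, assuming per-group encoders have already been trained locally; freezing the concatenated base and training a single linear head on top are assumptions about the fine-tuning stage, not FedConcat's exact recipe.

```python
import torch
import torch.nn as nn

class ConcatBackbone(nn.Module):
    """Runs each locally trained encoder and concatenates their features."""
    def __init__(self, encoders):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)

    def forward(self, x):
        return torch.cat([e(x) for e in self.encoders], dim=1)

def build_global_model(encoders, feat_dims, num_classes):
    base = ConcatBackbone(encoders)
    for p in base.parameters():
        p.requires_grad = False        # keep local encoders fixed
    head = nn.Linear(sum(feat_dims), num_classes)
    return nn.Sequential(base, head)   # only the head is trained globally
```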
arXiv Detail & Related papers (2023-12-11T10:44:52Z) - No Fear of Classifier Biases: Neural Collapse Inspired Federated
Learning with Synthetic and Fixed Classifier [10.491645205483051]
We propose a solution to FL's classifier bias problem by utilizing a synthetic and fixed ETF (equiangular tight frame) classifier during training.
We devise several effective modules to better adapt the ETF structure in FL, achieving both high generalization and personalization.
Our method achieves state-of-the-art performances on CIFAR-10, CIFAR-100, and Tiny-ImageNet.
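For reference, a simplex ETF of class vectors can be synthesized as below; the random orthonormal basis is a standard construction from the neural-collapse literature, and the paper's exact setup may differ. The resulting rows have unit norm and pairwise inner product -1/(K-1).

```python
import numpy as np

def simplex_etf(num_classes, feat_dim, seed=0):
    """Synthetic fixed classifier: K equiangular, equal-norm class vectors."""
    assert feat_dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random feat_dim x K matrix with orthonormal columns.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    K = num_classes
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M.T   # rows are the fixed class weight vectors
```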
arXiv Detail & Related papers (2023-03-17T15:38:39Z) - ELFIS: Expert Learning for Fine-grained Image Recognition Using Subsets [6.632855264705276]
We propose ELFIS, an expert learning framework for Fine-Grained Visual Recognition.
A set of neural networks-based experts are trained focusing on the meta-categories and are integrated into a multi-task framework.
Experiments show accuracy improvements over the SoTA of up to +1.3% on FGVR benchmarks, using both CNNs and transformer-based networks.
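One way to realize such an expert layout is a shared backbone with one head per meta-category trained in a multi-task fashion; this is a generic sketch, and ELFIS's actual expert construction and fusion may differ.

```python
import torch.nn as nn

class ExpertHeads(nn.Module):
    """Shared backbone with one expert head per meta-category (hypothetical
    structure; the paper's expert integration may differ)."""
    def __init__(self, backbone, feat_dim, classes_per_meta):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in classes_per_meta])

    def forward(self, x):
        f = self.backbone(x)
        return [head(f) for head in self.heads]   # one logit set per expert
```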
arXiv Detail & Related papers (2023-03-16T12:45:19Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
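A normalized (cosine) classifier of the kind mentioned above can be sketched as follows; since both features and class weights are L2-normalized, head-class weight norms cannot dominate tail classes. The scale value is an illustrative hyperparameter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    """Cosine classifier: logits are scaled cosine similarities."""
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))
```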
arXiv Detail & Related papers (2021-08-29T05:45:03Z) - Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
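As a rough illustration of propagating context between classifiers, the sketch below runs one self-attention round over class weight vectors, treating classes as graph nodes; CEC's actual graph model is more elaborate.

```python
import torch.nn as nn

class ClassifierGraphAdapter(nn.Module):
    """Simplified stand-in for graph-based context propagation."""
    def __init__(self, feat_dim, n_heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, class_weights):              # (num_classes, feat_dim)
        w = class_weights.unsqueeze(0)             # classes as nodes
        adapted, _ = self.attn(w, w, w)
        return (w + adapted).squeeze(0)            # residual adaptation
```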
arXiv Detail & Related papers (2021-04-07T10:54:51Z) - Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
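One plausible reading of shifted batch normalization, sketched below, is to re-estimate BN running statistics under the new (e.g. class-balanced) sampler while keeping all learned parameters fixed; this is an assumption about the mechanism, not the paper's verified procedure.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reestimate_bn_stats(model, loader, device="cpu"):
    """Refresh BN running statistics under a new sampler, weights untouched."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.train()                  # only BN layers update running stats
    for x, _ in loader:
        model(x.to(device))
    model.eval()
```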
arXiv Detail & Related papers (2021-04-01T13:55:21Z) - Adversarial Feature Augmentation and Normalization for Visual
Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
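A generic FGSM-style version of perturbing intermediate embeddings is sketched below; the paper's actual augmentation and normalization scheme is more involved than this single gradient step.

```python
import torch
import torch.nn.functional as F

def adversarial_feature_step(feats, labels, classifier, eps=0.1):
    """Perturb intermediate feature embeddings, not input pixels."""
    feats = feats.detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(feats), labels)
    grad, = torch.autograd.grad(loss, feats)
    return feats + eps * grad.sign()   # augmented embeddings for training
```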
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles naturally non-uniformly distributed data in real-world scenarios.
While modern classifiers perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
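A generic sketch of "realistic" hierarchical prediction follows: walk the class taxonomy and answer at the deepest level where the model is still confident. The threshold rule is an assumption for illustration, not Deep-RTC's exact decision mechanism.

```python
import numpy as np

def hierarchical_predict(probs_by_node, tree, node="root", tau=0.7):
    """probs_by_node: dict node -> probability vector over its children.
    tree: dict node -> list of child names ([] for leaves)."""
    while tree[node]:
        p = probs_by_node[node]
        best = int(np.argmax(p))
        if p[best] < tau:              # not confident: answer at this level
            return node
        node = tree[node][best]        # descend to the best child
    return node
```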
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.