CalFAT: Calibrated Federated Adversarial Training with Label Skewness
- URL: http://arxiv.org/abs/2205.14926v1
- Date: Mon, 30 May 2022 08:49:20 GMT
- Title: CalFAT: Calibrated Federated Adversarial Training with Label Skewness
- Authors: Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu
- Abstract summary: We propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes.
We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients, as well as a much improved convergence rate and final performance.
- Score: 46.47690793066599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that, like traditional machine learning, federated
learning (FL) is also vulnerable to adversarial attacks. To improve the
adversarial robustness of FL, a few federated adversarial training (FAT) methods
have been proposed to apply adversarial training locally before global
aggregation. Although these methods demonstrate promising results on
independent identically distributed (IID) data, they suffer from training
instability issues on non-IID data with label skewness, resulting in much
degraded natural accuracy. This hinders the application of FAT in real-world
scenarios where the label distribution across the clients is often skewed. In
this paper, we study the problem of FAT under label skewness,
and first reveal one root cause of the training instability and natural
accuracy degradation issues: skewed labels lead to non-identical class
probabilities and heterogeneous local models. We then propose a Calibrated FAT
(CalFAT) approach to tackle the instability issue by calibrating the logits
adaptively to balance the classes. We show both theoretically and empirically
that the optimization of CalFAT leads to homogeneous local models across the
clients, as well as a much improved convergence rate and final performance.
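The abstract's key mechanism is calibrating the logits with each client's local class distribution so that skewed labels no longer drive the local models apart. Below is a minimal PyTorch sketch of one client's local training step in this spirit; the additive log-prior adjustment (a balanced-softmax-style calibration), the PGD attack settings, and all function names are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# A minimal sketch of one client's local step in calibrated federated
# adversarial training. The exact calibration rule is an assumption: the
# logits are shifted by the log of the client's local class prior before
# the cross-entropy loss.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-inf PGD adversarial examples for local adversarial training."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def calibrated_loss(logits, y, class_prior, tau=1.0):
    """Cross-entropy on logits shifted by the log of the local class prior.

    Majority classes receive a larger additive offset, so the raw logits of
    minority classes must grow to reduce the loss, which balances the classes
    under a skewed local label distribution.
    """
    return F.cross_entropy(logits + tau * torch.log(class_prior + 1e-12), y)

def local_step(model, optimizer, x, y, class_prior):
    """One calibrated FAT step: attack first, then train on calibrated logits."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = calibrated_loss(model(x_adv), y, class_prior)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The class_prior vector would be estimated from the client's own label counts, e.g. torch.bincount(labels, minlength=num_classes).float() normalized to sum to one; the server would then aggregate the locally trained models as in standard FedAvg.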
Related papers
- Improving Fast Adversarial Training via Self-Knowledge Guidance [30.299641184202972]
We conduct a comprehensive study of the imbalance issue in fast adversarial training (FAT).
We observe an obvious disparity in performance across classes.
This disparity can be viewed through the alignment between clean and robust accuracy.
arXiv Detail & Related papers (2024-09-26T07:12:04Z) - CALICO: Confident Active Learning with Integrated Calibration [11.978551396144532]
We propose an AL framework that self-calibrates the confidence used for sample selection during the training process.
We show improved classification performance compared to a softmax-based classifier while using fewer labeled samples.
arXiv Detail & Related papers (2024-07-02T15:05:19Z) - Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data [45.11652096723593]
Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks.
This paper proposes FatCC, which incorporates local logit Calibration and global feature Contrast into the vanilla federated adversarial training process from both the logit and feature perspectives.
arXiv Detail & Related papers (2024-04-10T06:35:25Z) - Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed framework based on collaborative model training across devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z) - Exploring Vacant Classes in Label-Skewed Federated Learning [113.65301899666645]
Label skews, characterized by disparities in local label distribution across clients, pose a significant challenge in federated learning.
This paper introduces FedVLS, a novel approach to label-skewed federated learning that integrates vacant-class distillation and logit suppression simultaneously.
arXiv Detail & Related papers (2024-01-04T16:06:31Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - FedVal: Different good or different bad in federated learning [9.558549875692808]
Federated learning (FL) systems are susceptible to attacks from malicious actors.
FL poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups.
Traditional methods used to address such biases require centralized access to the data, which FL systems do not have.
We present FedVal, a novel approach for both robustness and fairness that does not require any additional information from clients.
arXiv Detail & Related papers (2023-06-06T22:11:13Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training (a minimal sketch of this weighting idea follows this list).
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
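The SoftMatch entry above frames pseudo-labeling as a unified sample-weighting problem. Below is a minimal sketch of such soft confidence weighting, assuming a truncated-Gaussian weight over each sample's confidence with running mean and variance; the class names, the EMA scheme, and the exact weighting function are assumptions rather than the authors' exact implementation.

```python
# A minimal sketch of soft pseudo-label weighting: instead of a hard
# confidence threshold, each unlabeled sample gets a weight from a truncated
# Gaussian over its confidence, whose mean/variance are tracked as
# exponential moving averages. All names and details here are assumptions.
import torch
import torch.nn.functional as F

class SoftWeighter:
    def __init__(self, momentum=0.999):
        self.momentum = momentum
        self.mu = torch.tensor(0.5)   # running mean of the max confidence
        self.var = torch.tensor(1.0)  # running variance of the max confidence

    @torch.no_grad()
    def __call__(self, probs):
        """probs: [batch, num_classes] softmax outputs on unlabeled data."""
        conf, _ = probs.max(dim=1)
        m = self.momentum
        self.mu = m * self.mu + (1 - m) * conf.mean()
        self.var = m * self.var + (1 - m) * conf.var(unbiased=False)
        # Full weight above the running mean, Gaussian decay below it.
        gap = (conf - self.mu).clamp(max=0)
        return torch.exp(-gap ** 2 / (2 * self.var + 1e-8))

def unsupervised_loss(weighter, logits_weak, logits_strong):
    """Weighted consistency loss between weakly/strongly augmented views."""
    probs = F.softmax(logits_weak.detach(), dim=1)
    pseudo = probs.argmax(dim=1)
    w = weighter(probs)
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (w * per_sample).mean()
```

In a FixMatch-style loop, unsupervised_loss would be added to the supervised loss each step; the soft weights keep low-confidence samples contributing a little (quantity) while emphasizing confident ones (quality).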
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.