Fairness without Demographics through Learning Graph of Gradients
- URL: http://arxiv.org/abs/2412.03706v2
- Date: Sun, 29 Dec 2024 18:33:27 GMT
- Title: Fairness without Demographics through Learning Graph of Gradients
- Authors: Yingtao Luo, Zhixun Li, Qiang Liu, Jun Zhu
- Abstract summary: We show that the correlation between gradients and groups can help identify and improve group fairness.
Our method is robust to noise and improves fairness significantly without substantially reducing overall accuracy.
- Score: 22.260763111752805
- Abstract: Machine learning systems are notoriously prone to biased predictions about certain demographic groups, leading to algorithmic fairness issues. Due to privacy concerns and data quality problems, some demographic information may not be available in the training data, and the complex interaction of different demographics can produce many unknown minority subpopulations, both of which limit the applicability of group fairness. Many existing works on fairness without demographics assume a correlation between groups and features. However, we argue that model gradients are also valuable for fairness without demographics. In this paper, we show that the correlation between gradients and groups can help identify and improve group fairness. With an adversarial weighting architecture, we construct a graph in which samples with similar gradients are connected and learn the weights of different samples from it. Unlike surrogate grouping methods that cluster groups from features and labels as a proxy sensitive attribute, our method leverages the graph structure as a soft grouping mechanism, which is much more robust to noise. The results show that our method is robust to noise and can improve fairness significantly without substantially reducing overall accuracy.
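For intuition, here is a minimal sketch of the gradient-graph idea the abstract describes. Everything in it is an illustrative assumption rather than the authors' released code: per-sample gradients are approximated in closed form for a linear classification head under cross-entropy, the graph is a cosine-similarity kNN graph over those gradients, and a simple loss-based smoothing over the graph stands in for the adversarial weighting network.

```python
# Hypothetical sketch of gradient-graph soft grouping; all names and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def last_layer_gradients(logits, labels, features):
    # For a linear head trained with cross-entropy, the per-sample gradient
    # w.r.t. the head weights is (softmax(logits) - one_hot(labels)) x features.
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    err = probs - one_hot                                  # (N, C)
    return torch.einsum('nc,nd->ncd', err, features).flatten(1)  # (N, C*D)

def gradient_knn_graph(grads, k=10):
    # Connect each sample to its k nearest neighbours in gradient space.
    grads = F.normalize(grads, dim=1)
    sim = grads @ grads.t()                                # cosine similarity
    sim.fill_diagonal_(-float('inf'))                      # no self-edges
    idx = sim.topk(k, dim=1).indices                       # (N, k)
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()                   # symmetrise

def soft_group_weights(losses, adj, steps=5, alpha=0.5):
    # Start from per-sample losses (harder samples weigh more) and smooth
    # the weights over the gradient graph, so that samples connected in the
    # graph share a common "soft group" weight.
    w = losses.detach().clone()
    deg = adj.sum(1).clamp(min=1.0)
    for _ in range(steps):
        w = alpha * w + (1 - alpha) * (adj @ w) / deg
    return w / w.mean()                                    # mean-1 weights
```

In the paper's full method the sample weights come from an adversarial weighting architecture trained jointly with the classifier; the loss-based smoothing above only stands in for that component.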
Related papers
- GroupFace: Imbalanced Age Estimation Based on Multi-hop Attention Graph Convolutional Network and Group-aware Margin Optimization [13.197551708300345]
We propose an innovative collaborative learning framework that integrates a multi-hop attention graph convolutional network and a group-aware margin strategy.
Our architecture achieves excellent performance on several age estimation benchmark datasets.
arXiv Detail & Related papers (2024-12-16T05:08:15Z) - Migrate Demographic Group For Fair GNNs [23.096452685455134]
Graph Neural Networks (GNNs) have been applied in many scenarios due to their superior performance in graph learning.
FairMigration is composed of two training stages. In the first stage, the GNNs are initially optimized by personalized self-supervised learning.
In the second stage, the new demographic groups are frozen and supervised learning is carried out under the constraints of new demographic groups and adversarial training.
arXiv Detail & Related papers (2023-06-07T07:37:01Z) - FairGen: Towards Fair Graph Generation [76.34239875010381]
We propose a fairness-aware graph generative model named FairGen.
Our model jointly trains a label-informed graph generation module and a fair representation learning module.
Experimental results on seven real-world data sets, including web-based graphs, demonstrate that FairGen obtains performance on par with state-of-the-art graph generative models.
arXiv Detail & Related papers (2023-03-30T23:30:42Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
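A minimal sketch of this gradient-space clustering follows, with illustrative hyperparameters (eps and min_samples are assumptions, not the paper's settings); the gradient features could be computed as in the sketch after the main abstract above.

```python
# Hypothetical sketch: cluster per-sample gradients with DBSCAN so that
# minority groups surface as small clusters and outliers get label -1.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def infer_groups_from_gradients(grads: np.ndarray,
                                eps: float = 0.3,
                                min_samples: int = 10) -> np.ndarray:
    g = normalize(grads)  # unit-norm rows, so eps acts like a cosine radius
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(g)
```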
arXiv Detail & Related papers (2022-10-13T06:04:43Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Fair Community Detection and Structure Learning in Heterogeneous Graphical Models [8.643517734716607]
Inference of community structure in probabilistic graphical models may not be consistent with fairness constraints when nodes have demographic attributes.
This paper defines a novel $\ell_1$-regularized pseudo-likelihood approach for fair graphical model selection.
arXiv Detail & Related papers (2021-12-09T18:58:36Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Contrastive Examples for Addressing the Tyranny of the Majority [83.93825214500131]
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.