Fairness in Social Influence Maximization via Optimal Transport
- URL: http://arxiv.org/abs/2406.17736v5
- Date: Thu, 30 Jan 2025 09:23:13 GMT
- Title: Fairness in Social Influence Maximization via Optimal Transport
- Authors: Shubham Chowdhary, Giulia De Pasquale, Nicolas Lanzetti, Ana-Andreea Stoica, Florian Dörfler
- Abstract summary: We study fairness in social influence maximization, whereby one seeks to select seeds that spread given information throughout a network.
In this paper, we demonstrate that such fairness metrics can be misleading since they overlook the nature of information diffusion processes.
We design a new fairness metric, mutual fairness, that captures variability in outreach through optimal transport theory.
- Score: 1.0765359420035392
- Abstract: We study fairness in social influence maximization, whereby one seeks to select seeds that spread given information throughout a network, ensuring balanced outreach among different communities (e.g., demographic groups). In the literature, fairness is often quantified in terms of the expected outreach within individual communities. In this paper, we demonstrate that such fairness metrics can be misleading since they overlook the stochastic nature of information diffusion processes. When information diffusion occurs in a probabilistic manner, multiple outreach scenarios can occur. As such, outcomes such as "in 50% of the cases, no one in group 1 gets the information, while everyone in group 2 does, and in the other 50%, it is the opposite", which always result in largely unfair outcomes, are classified as fair by a variety of fairness metrics in the literature. We tackle this problem by designing a new fairness metric, mutual fairness, that captures variability in outreach through optimal transport theory. We propose a new seed-selection algorithm that optimizes both outreach and mutual fairness, and we show its efficacy on several real datasets. We find that our algorithm increases fairness with only a minor decrease (and at times, even an increase) in efficiency.
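The failure mode described in the abstract, where expectation-level parity hides per-realization imbalance, can be reproduced with a few lines of simulation. The following sketch is not the paper's seed-selection algorithm or its exact mutual-fairness metric; it is a minimal, numpy-only illustration under the assumption of a toy diffusion whose every run saturates exactly one of two equally sized communities, and it uses the average per-run outreach gap as a crude stand-in for an optimal-transport distance from the equal-outreach ideal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 10_000

# Toy diffusion outcomes: in each run the cascade reaches all of group 1
# and none of group 2 with probability 0.5, and vice versa.  x1 and x2 are
# the per-run outreach fractions of the two communities.
coin = rng.random(n_runs) < 0.5
x1 = np.where(coin, 1.0, 0.0)
x2 = np.where(coin, 0.0, 1.0)

# Expectation-based fairness: compare the expected outreach of the two
# communities.  Both expectations are ~0.5, so the gap is ~0 and the
# process is declared fair.
mean_gap = abs(x1.mean() - x2.mean())

# Distribution-aware view: measure, run by run, how far the realized
# outreach pair (x1, x2) lies from the equal-outreach diagonal x1 == x2,
# and average over runs.  This is a crude stand-in for a transport-based
# distance between the outreach distribution and a perfectly fair one.
per_run_gap = np.abs(x1 - x2).mean()

print(f"expectation-based gap: {mean_gap:.3f}")     # ~0.00 -> looks fair
print(f"per-realization gap:   {per_run_gap:.3f}")  # ~1.00 -> unfair in every run
```

Under these assumptions the expectation-based gap is essentially zero while every single realization is maximally imbalanced, which is exactly why the abstract argues for a metric, such as mutual fairness, defined on the distribution of outreach rather than on its marginal expectations.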
Related papers
- Targeted Learning for Data Fairness [52.59573714151884]
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information.
To validate our approach, we perform several simulations and apply our estimators to real data.
arXiv Detail & Related papers (2025-02-06T18:51:28Z) - Falcon: Fair Active Learning using Multi-armed Bandits [9.895979687746376]
We propose a data-centric approach that improves machine learning model fairness via strategic sample selection.
Experiments show that Falcon significantly outperforms existing fair active learning approaches in terms of fairness and accuracy.
In particular, only Falcon supports a proper trade-off between accuracy and fairness where its maximum fairness score is 1.8-4.5x higher than the second-best results.
arXiv Detail & Related papers (2024-01-23T12:48:27Z) - Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias (see the generic barycenter sketch after this list).
arXiv Detail & Related papers (2023-08-22T00:10:23Z) - Navigating Fairness Measures and Trade-Offs [0.0]
I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off.
This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature.
arXiv Detail & Related papers (2023-07-17T13:45:47Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions [48.98659895355356]
This paper investigates the cumulative effect of multiple fairness enhancing interventions at different stages of the machine learning (ML) pipeline.
In aggregate, applying multiple interventions results in better fairness and lower utility than individual interventions.
On the downside, fairness-enhancing interventions can negatively impact different population groups, especially the privileged group.
arXiv Detail & Related papers (2022-02-08T09:20:58Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
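The Wasserstein-barycenter repair mentioned in the "Fairness Explainability using Optimal Transport" entry above can be illustrated in one dimension, where the Wasserstein-2 barycenter of score distributions is obtained by averaging their quantile functions. The sketch below is a generic illustration of that standard construction, not the cited paper's implementation; the group names, score distributions, and the repair helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model scores for two demographic groups with different distributions.
scores_a = rng.normal(0.60, 0.10, size=500)
scores_b = rng.normal(0.45, 0.15, size=300)

# Quantile functions of both groups on a common grid; in 1-D, the
# Wasserstein-2 barycenter averages quantile functions (here weighted by group size).
q = np.linspace(0.0, 1.0, 101)
w_a = len(scores_a) / (len(scores_a) + len(scores_b))
barycenter_q = w_a * np.quantile(scores_a, q) + (1.0 - w_a) * np.quantile(scores_b, q)

def repair(scores):
    """Map each score to the barycenter via its within-group quantile rank."""
    ranks = (np.argsort(np.argsort(scores)) + 0.5) / len(scores)
    return np.interp(ranks, q, barycenter_q)

repaired_a, repaired_b = repair(scores_a), repair(scores_b)
print(f"mean gap before repair: {scores_a.mean() - scores_b.mean():.3f}")
print(f"mean gap after repair:  {repaired_a.mean() - repaired_b.mean():.3f}")  # ~0
```

After the repair, both groups' scores follow approximately the same barycentric distribution, which is the sense in which barycenter-based post-processing enforces fair predictions.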
This list is automatically generated from the titles and abstracts of the papers on this site.