Recommendation Fairness in Social Networks Over Time
- URL: http://arxiv.org/abs/2402.03450v2
- Date: Tue, 7 May 2024 13:02:36 GMT
- Title: Recommendation Fairness in Social Networks Over Time
- Authors: Meng Cao, Hussain Hussain, Sandipan Sikdar, Denis Helic, Markus Strohmaier, Roman Kern
- Abstract summary: We study the evolution of recommendation fairness over time and its relation to dynamic network properties.
Our results suggest that recommendation fairness improves over time, regardless of the recommendation method.
Two network properties, minority ratio and homophily ratio, exhibit stable correlations with fairness over time.
- Score: 20.27386486425493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In social recommender systems, it is crucial that the recommendation models provide equitable visibility for different demographic groups, such as gender or race. Most existing research has addressed this problem by studying only individual static snapshots of networks that typically change over time. To address this gap, we study the evolution of recommendation fairness over time and its relation to dynamic network properties. We examine three real-world dynamic networks by evaluating the fairness of six recommendation algorithms and analyzing the association between fairness and network properties over time. We further study how interventions on network properties influence fairness by examining counterfactual scenarios with alternative evolution outcomes and differing network properties. Our results on empirical datasets suggest that recommendation fairness improves over time, regardless of the recommendation method. We also find that two network properties, minority ratio and homophily ratio, exhibit stable correlations with fairness over time. Our counterfactual study further suggests that an extreme homophily ratio potentially contributes to unfair recommendations even with a balanced minority ratio. Our work provides insights into the evolution of fairness within dynamic networks in social science. We believe that our findings will help system operators and policymakers better comprehend the implications of temporal changes and of interventions targeting fairness in social networks.
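As a concrete illustration of the two network properties the abstract highlights, here is a minimal sketch in Python, assuming an undirected NetworkX graph whose nodes carry a demographic `group` attribute. The attribute name and the exact definitions are illustrative assumptions, not necessarily the paper's formulation: minority ratio is taken as the fraction of nodes in the smaller group, and homophily ratio as the fraction of edges whose endpoints share a group label.

```python
# Minimal sketch: two network properties tracked over time.
# Assumes an undirected NetworkX graph whose nodes carry a "group"
# attribute; names and definitions are illustrative.
from collections import Counter
import networkx as nx

def minority_ratio(G: nx.Graph, attr: str = "group") -> float:
    """Fraction of nodes belonging to the smaller demographic group."""
    counts = Counter(nx.get_node_attributes(G, attr).values())
    return min(counts.values()) / G.number_of_nodes()

def homophily_ratio(G: nx.Graph, attr: str = "group") -> float:
    """Fraction of edges whose endpoints share the same group label."""
    same = sum(1 for u, v in G.edges() if G.nodes[u][attr] == G.nodes[v][attr])
    return same / G.number_of_edges()

# Toy snapshot with a 30% minority and a mix of same- and cross-group edges.
G = nx.Graph()
G.add_nodes_from([(i, {"group": "minority" if i < 3 else "majority"}) for i in range(10)])
G.add_edges_from([(0, 1), (1, 2), (3, 4), (4, 5), (2, 6), (0, 7)])
print(minority_ratio(G), homophily_ratio(G))  # 0.3, ~0.667
```

Tracking these two quantities over time then amounts to recomputing them on each snapshot of the dynamic network.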
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to those training samples with a high difference in explanations.
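How such adaptive reweighting might look in practice: a hedged sketch, assuming explanation consistency is measured as the cosine similarity between two explanation heatmaps per sample (for instance, heatmaps computed for the original and a perturbed input); the paper's exact metric and weighting scheme may differ.

```python
# Illustrative sketch, not the paper's exact scheme: samples whose two
# explanation heatmaps disagree contribute more to the training loss.
import torch
import torch.nn.functional as F

def explanation_consistency(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """Per-sample cosine similarity between two flattened heatmaps."""
    return F.cosine_similarity(h1.flatten(1), h2.flatten(1), dim=1)

def reweighted_loss(logits, targets, heatmap_a, heatmap_b):
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Weight in [1, 2]: inconsistent explanations get more attention.
    weights = 1.0 + (1.0 - explanation_consistency(heatmap_a, heatmap_b))
    return (weights.detach() * per_sample).mean()

# Toy usage with random heatmaps of shape (batch, H, W):
logits, targets = torch.randn(4, 3), torch.tensor([0, 1, 2, 1])
h_a, h_b = torch.rand(4, 8, 8), torch.rand(4, 8, 8)
print(reweighted_loss(logits, targets, h_a, h_b))
```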
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
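The core retraining idea can be sketched schematically; `attribution_scores` and `train_and_eval` below are hypothetical stand-ins for an attribution method and a GNN training pipeline, not the paper's API:

```python
def retrain_evaluation(edges, attribution_scores, train_and_eval, k):
    """Drop the k highest-attributed edges, retrain, and measure the accuracy drop."""
    ranked = sorted(edges, key=attribution_scores, reverse=True)
    baseline = train_and_eval(edges)      # accuracy on the full graph
    ablated = train_and_eval(ranked[k:])  # retrained without the top-k edges
    return baseline - ablated             # large drop suggests faithful attributions

# Toy usage with stand-in scores and a fake accuracy function:
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
scores = {(0, 1): 0.9, (1, 2): 0.1, (2, 3): 0.4, (0, 3): 0.7}
print(retrain_evaluation(edges, scores.get, lambda e: 0.5 + 0.1 * len(e), k=2))
```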
arXiv Detail & Related papers (2024-01-01T02:03:35Z)
- Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction [44.00932300499947]
We study how degree bias in networks affects Graph Convolutional Network (GCN) link prediction.
GCNs with a symmetric normalized graph filter have a within-group preferential attachment bias.
We propose a new within-group fairness metric, which quantifies disparities in link prediction scores within social groups.
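The paper's metric quantifies disparities in link prediction scores within groups; as a loose, hedged proxy (not the paper's definition), one can compare the mean scores of the top and bottom quartiles of nodes inside each group:

```python
# Illustrative proxy: within each social group, the gap between top- and
# bottom-quartile mean link prediction scores. A large gap hints at a
# rich-get-richer bias inside that group.
import numpy as np

def within_group_disparity(scores: dict, groups: dict) -> dict:
    """scores: node -> mean link prediction score; groups: node -> group label."""
    out = {}
    for g in set(groups.values()):
        vals = np.sort([s for n, s in scores.items() if groups[n] == g])
        q = max(1, len(vals) // 4)
        out[g] = float(vals[-q:].mean() - vals[:q].mean())
    return out

scores = {0: 0.9, 1: 0.8, 2: 0.2, 3: 0.1, 4: 0.5, 5: 0.5}
groups = {0: "a", 1: "a", 2: "a", 3: "a", 4: "b", 5: "b"}
print(within_group_disparity(scores, groups))  # group "a" is far more unequal
```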
arXiv Detail & Related papers (2023-09-29T17:26:44Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Delayed and Indirect Impacts of Link Recommendations [23.583662580148133]
We study the impacts of recommendations on social networks in dynamic settings.
We find that link recommendations have surprising delayed and indirect effects on the structural properties of networks.
We show that, in counterfactual simulations, removing the indirect effects of link recommendations can make the network trend faster toward what it would have been under natural growth dynamics.
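A toy growth simulation conveys the flavor of such counterfactuals, under assumptions that are ours rather than the paper's: preferential attachment as the natural dynamic, an optional friend-of-a-friend recommendation step as the intervention, and average clustering as the structural property being compared.

```python
# Toy simulation (assumptions, not the paper's model): grow a network by
# preferential attachment, optionally closing recommended friend-of-friend
# links, then compare a structural property under the two dynamics.
import random
import networkx as nx

def grow(n: int, recommend: bool, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    G = nx.complete_graph(3)
    for new in range(3, n):
        targets = [v for v in G.nodes for _ in range(G.degree(v))]
        G.add_edge(new, rng.choice(targets))        # natural attachment
        if recommend:
            nbr = next(iter(G[new]))
            fofs = [w for w in G[nbr] if w != new and not G.has_edge(new, w)]
            if fofs:
                G.add_edge(new, rng.choice(fofs))   # recommended closure
    return G

natural = grow(200, recommend=False)
boosted = grow(200, recommend=True)
print(nx.average_clustering(natural), nx.average_clustering(boosted))
```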
arXiv Detail & Related papers (2023-03-17T00:09:19Z)
- Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing [86.69698062642055]
Residual networks have shown great success and become indispensable in today's deep models.
We aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing.
We propose a new training strategy to strengthen the performance of residual networks.
arXiv Detail & Related papers (2022-10-09T03:15:51Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property, enabling better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
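For intuition, here is a minimal sketch of one synchronous gossip round in which pairwise corrections pass through a saturating nonlinearity; this is an illustrative construction, not necessarily NGO's actual update rule (linear gossip is recovered by replacing tanh with the identity).

```python
# Hypothetical nonlinear gossip update: each node nudges its parameters
# toward its neighbors through a saturating nonlinearity applied to
# pairwise differences. Updates are antisymmetric, so the mean is preserved.
import numpy as np

def nonlinear_gossip_step(x: np.ndarray, adj: np.ndarray, eta: float = 0.3) -> np.ndarray:
    """x: (nodes, dims) parameters; adj: symmetric 0/1 adjacency matrix."""
    updated = x.copy()
    for i in range(len(x)):
        for j in np.flatnonzero(adj[i]):
            updated[i] += eta * np.tanh(x[j] - x[i])  # saturating correction
    return updated

# Three nodes on a line converge toward consensus over repeated rounds.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
x = np.array([[0.0], [5.0], [10.0]])
for _ in range(50):
    x = nonlinear_gossip_step(x, adj)
print(x.ravel())  # values cluster near the initial mean of 5.0
```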
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks against group fairness properties.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
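The sketch below illustrates only the property being verified, via a Monte Carlo estimate of a demographic parity gap for a black-box model; the paper's actual method instead analyzes a Markov chain learned from the network. `model` here is a toy stand-in, not the paper's interface.

```python
# Sampling-based estimate of a demographic parity gap (illustration of
# the fairness property only, not the Markov-chain verification method).
import numpy as np

rng = np.random.default_rng(0)
model = lambda features, group: (features.sum(axis=1) + 0.2 * group) > 1.0  # toy classifier

def demographic_parity_gap(n_samples: int = 100_000) -> float:
    feats = rng.random((n_samples, 2))
    rate_a = model(feats, group=0).mean()  # P(favorable | group 0)
    rate_b = model(feats, group=1).mean()  # P(favorable | group 1)
    return abs(rate_a - rate_b)

print(demographic_parity_gap())  # a gap above a chosen threshold flags unfairness
```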
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
- FaiR-N: Fair and Robust Neural Networks for Structured Data [10.14835182649819]
We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary.
We show that training with this loss yields more fair and robust neural networks with similar accuracies to models trained without it.
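A hedged sketch of the general idea (not FaiR-N's exact formulation): approximate each point's distance to the decision boundary by |margin| / ||grad margin||, then penalize both small distances (robustness) and distance gaps between demographic groups (fairness).

```python
# Illustrative penalty, not FaiR-N's published loss.
import torch

def boundary_distances(model, x: torch.Tensor) -> torch.Tensor:
    """First-order distance estimate: |margin| / ||d margin / d x||."""
    x = x.clone().requires_grad_(True)
    margin = model(x).squeeze(-1)  # signed score, decision boundary at 0
    grad, = torch.autograd.grad(margin.sum(), x, create_graph=True)
    return margin.abs() / (grad.flatten(1).norm(dim=1) + 1e-8)

def fair_robust_penalty(model, x, group, lam=1.0):
    d = boundary_distances(model, x)
    robustness = (-d).clamp(min=-3.0).mean()  # reward larger distances, capped
    fairness = (d[group == 0].mean() - d[group == 1].mean()).abs()
    return robustness + lam * fairness

# Toy usage with a linear model and a binary group attribute:
model = torch.nn.Linear(2, 1)
x = torch.randn(8, 2)
group = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(fair_robust_penalty(model, x, group))
```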
arXiv Detail & Related papers (2020-10-13T01:53:15Z)
- Fairness Perception from a Network-Centric Perspective [12.261689483681147]
We introduce a novel yet intuitive function known as network-centric fairness perception.
We show how the function can be extended to a group fairness metric known as fairness visibility.
We illustrate a potential pitfall of the fairness visibility measure that can be exploited to mislead individuals into perceiving that the algorithmic decisions are fair.
arXiv Detail & Related papers (2020-10-07T06:35:03Z)
- Adversarial Training Reduces Information and Improves Transferability [81.59364510580738]
Recent results show that features of adversarially trained networks for classification, in addition to being robust, enable desirable properties such as invertibility.
We show that adversarial training can improve linear transferability to new tasks, which gives rise to a new trade-off between transferability of representations and accuracy on the source task.
arXiv Detail & Related papers (2020-07-22T08:30:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.