Post-Training Fairness Control: A Single-Train Framework for Dynamic Fairness in Recommendation
- URL: http://arxiv.org/abs/2601.20848v1
- Date: Wed, 28 Jan 2026 18:48:43 GMT
- Title: Post-Training Fairness Control: A Single-Train Framework for Dynamic Fairness in Recommendation
- Authors: Weixin Chen, Li Chen, Yuhan Zhao
- Abstract summary: Cofair is a single-train framework that enables post-training fairness control in recommendation. The framework provides dynamic fairness at different levels, delivering comparable or better fairness-accuracy curves than state-of-the-art baselines.
- Score: 13.743015531030473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite growing efforts to mitigate unfairness in recommender systems, existing fairness-aware methods typically fix the fairness requirement at training time and provide limited post-training flexibility. However, in real-world scenarios, diverse stakeholders may demand differing fairness requirements over time, so retraining for different fairness requirements becomes prohibitive. To address this limitation, we propose Cofair, a single-train framework that enables post-training fairness control in recommendation. Specifically, Cofair introduces a shared representation layer with fairness-conditioned adapter modules to produce user embeddings specialized for varied fairness levels, along with a user-level regularization term that guarantees user-wise monotonic fairness improvements across these levels. We theoretically establish that the adversarial objective of Cofair upper bounds demographic parity and the regularization term enforces progressive fairness at user level. Comprehensive experiments on multiple datasets and backbone models demonstrate that our framework provides dynamic fairness at different levels, delivering comparable or better fairness-accuracy curves than state-of-the-art baselines, without the need to retrain for each new fairness requirement. Our code is publicly available at https://github.com/weixinchen98/Cofair.
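The abstract describes user embeddings produced by a shared representation layer plus adapter modules conditioned on a fairness level. The paper's actual architecture lives in the linked repository; below is only a minimal illustrative sketch of the general idea, with all names, sizes, and the lambda-concatenation design being assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of a fairness-conditioned adapter: a shared user
# embedding passes through a small residual bottleneck whose input is
# augmented with a scalar fairness level lambda in [0, 1]. (Names and
# the concatenation design are illustrative assumptions.)
rng = np.random.default_rng(0)

EMB, BOTTLENECK = 8, 4
W_down = rng.normal(scale=0.1, size=(EMB + 1, BOTTLENECK))  # +1 input for lambda
W_up = rng.normal(scale=0.1, size=(BOTTLENECK, EMB))

def adapt(user_emb, fairness_level):
    """Return a user embedding specialized for one fairness level."""
    x = np.concatenate([user_emb, [fairness_level]])
    h = np.maximum(x @ W_down, 0.0)   # ReLU bottleneck
    return user_emb + h @ W_up        # residual adapter output

shared = rng.normal(size=EMB)
# A single trained model can then serve different post-training
# fairness requirements without retraining:
e_lenient = adapt(shared, 0.1)
e_strict = adapt(shared, 0.9)
print(e_lenient.shape, np.allclose(e_lenient, e_strict))
```

Conditioning one set of adapter weights on the fairness level, rather than training one model per level, is what makes the "single-train" property possible in this sketch.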
Related papers
- Fairness Aware Reward Optimization [78.85867531002346]
We introduce Fairness Aware Reward Optimization (Faro), an in-processing framework that trains reward models under demographic parity, equalized odds, or counterfactual fairness constraints.
We provide the first theoretical analysis of reward-level fairness in LLM alignment.
Faro significantly reduces bias and harmful generations while maintaining or improving model quality.
arXiv Detail & Related papers (2026-02-08T03:35:49Z) - FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation [5.195640929313995]
Federated Learning (FL) enables collaborative model training across multiple clients without sharing clients' private data.
The diverse and often conflicting biases present across clients pose significant challenges to model fairness.
Current fairness-enhancing FL solutions often fall short, as they typically mitigate biases for a single, usually binary, sensitive attribute, while ignoring the heterogeneous fairness needs that exist in real-world settings.
arXiv Detail & Related papers (2025-06-26T08:43:12Z) - FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [23.38141950440522]
We propose a controllable federated group-fairness calibration framework, named FedFACT.
FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints.
We show that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z) - FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated Learning [12.713572267830658]
FedTilt is a novel FL framework that can preserve multi-level fairness and is robust to outliers.
We show how tuning tilt values can achieve two-level fairness and mitigate persistent outliers.
arXiv Detail & Related papers (2025-03-15T19:57:23Z) - Towards Cohesion-Fairness Harmony: Contrastive Regularization in Individual Fair Graph Clustering [5.255750357176021]
iFairNMTF is an individual-fairness Nonnegative Matrix Tri-Factorization model with contrastive fairness regularization.
Our model allows for customizable accuracy-fairness trade-offs, thereby enhancing user autonomy.
arXiv Detail & Related papers (2024-02-16T15:25:56Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z) - Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparities in accuracy and robustness across classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z) - Fair Mixup: Fairness via Interpolation [28.508444261249423]
We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it ensures a better generalization for both accuracy and fairness measurement in benchmarks.
arXiv Detail & Related papers (2021-03-11T06:57:26Z)
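The Fair Mixup entry above describes regularizing a model along paths of samples interpolated between demographic groups. The following toy sketch illustrates that idea with a finite-difference smoothness penalty on group-mean predictions; the scorer, group data, and penalty form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of the fair-mixup idea: penalize how quickly the
# mean prediction changes along the interpolation path between the two
# groups. (Toy linear scorer and synthetic groups are assumptions.)
rng = np.random.default_rng(1)
w = rng.normal(size=3)

def predict(X):
    return X @ w

X_a = rng.normal(loc=0.0, size=(32, 3))   # group A samples
X_b = rng.normal(loc=1.0, size=(32, 3))   # group B samples

def mixup_penalty(X_a, X_b, steps=11):
    """Total variation of group-mean predictions along the mixup path."""
    ts = np.linspace(0.0, 1.0, steps)
    means = [predict(t * X_a + (1 - t) * X_b).mean() for t in ts]
    # Finite-difference approximation of path smoothness
    return float(np.abs(np.diff(means)).sum())

penalty = mixup_penalty(X_a, X_b)
print(penalty >= 0.0)
```

Adding such a penalty to the training loss discourages the model from behaving very differently on the two groups, which is the mechanism the abstract attributes to fair mixup.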
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.