FAIRPLAI: A Human-in-the-Loop Approach to Fair and Private Machine Learning
- URL: http://arxiv.org/abs/2511.08702v1
- Date: Thu, 13 Nov 2025 01:02:46 GMT
- Title: FAIRPLAI: A Human-in-the-Loop Approach to Fair and Private Machine Learning
- Authors: David Sanchez, Holly Lopez, Michelle Buraczyk, Anantaa Kotal, et al.
- Abstract summary: We introduce FAIRPLAI (Fair and Private Learning with Active Human Influence), a framework that integrates human oversight into the design and deployment of machine learning systems. FAIRPLAI consistently preserves strong privacy protections while reducing fairness disparities relative to automated baselines.
- Score: 0.09999629695552194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning systems move from theory to practice, they are increasingly tasked with decisions that affect healthcare access, financial opportunities, hiring, and public services. In these contexts, accuracy is only one piece of the puzzle - models must also be fair to different groups, protect individual privacy, and remain accountable to stakeholders. Achieving all three is difficult: differential privacy can unintentionally worsen disparities, fairness interventions often rely on sensitive data that privacy restricts, and automated pipelines ignore that fairness is ultimately a human and contextual judgment. We introduce FAIRPLAI (Fair and Private Learning with Active Human Influence), a practical framework that integrates human oversight into the design and deployment of machine learning systems. FAIRPLAI works in three ways: (1) it constructs privacy-fairness frontiers that make trade-offs between accuracy, privacy guarantees, and group outcomes transparent; (2) it enables interactive stakeholder input, allowing decision-makers to select fairness criteria and operating points that reflect their domain needs; and (3) it embeds a differentially private auditing loop, giving humans the ability to review explanations and edge cases without compromising individual data security. Applied to benchmark datasets, FAIRPLAI consistently preserves strong privacy protections while reducing fairness disparities relative to automated baselines. More importantly, it provides a straightforward, interpretable process for practitioners to manage competing demands of accuracy, privacy, and fairness in socially impactful applications. By embedding human judgment where it matters most, FAIRPLAI offers a pathway to machine learning systems that are effective, responsible, and trustworthy in practice. GitHub: https://github.com/Li1Davey/Fairplai
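To make component (1) concrete, the frontier can be pictured as a sweep over privacy budgets: at each epsilon, train a differentially private model, then record its accuracy together with a group-disparity metric such as the demographic parity gap. The sketch below illustrates this idea only; the simplified DP-SGD-style trainer, the synthetic data, and all function names are assumptions for the example, not FAIRPLAI's implementation.

```python
# Toy privacy-fairness frontier: sweep epsilon, train a crude DP classifier,
# and record accuracy alongside the demographic parity gap between groups.
# The noisy training loop is a simplified stand-in for DP-SGD.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature x is shifted by sensitive group g; label y follows x.
n = 4000
g = rng.integers(0, 2, n)
x = rng.normal(loc=0.8 * g, scale=1.0, size=n)
y = (x + rng.normal(0, 1, n) > 0.5).astype(int)
X = np.column_stack([x, np.ones(n)])  # add intercept column

def train_dp_logreg(X, y, epsilon, steps=200, lr=0.1, clip=1.0):
    """Logistic regression with clipped gradients plus Gaussian noise ~ 1/epsilon."""
    w = np.zeros(X.shape[1])
    sigma = clip * np.sqrt(2 * np.log(1.25 / 1e-5)) / epsilon  # Gaussian-mechanism scale
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # clip the gradient
        w -= lr * (grad + rng.normal(0, sigma / len(y), size=w.shape))
    return w

def parity_gap(pred, g):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

for eps in [0.1, 0.5, 1.0, 2.0, 8.0]:  # one frontier point per budget
    w = train_dp_logreg(X, y, eps)
    pred = (X @ w > 0).astype(int)
    print(f"eps={eps:>4}: accuracy={(pred == y).mean():.3f}, gap={parity_gap(pred, g):.3f}")
```

Plotting the printed (epsilon, accuracy, gap) triples traces out the kind of privacy-fairness frontier a stakeholder would choose an operating point from.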
Related papers
- From Statistical Disclosure Control to Fair AI: Navigating Fundamental Tradeoffs in Differential Privacy [0.0]
Differential privacy has become the gold standard for privacy-preserving machine learning systems. This paper provides a systematic treatment connecting three threads: Dalenius's impossibility results for semantic privacy, Dwork's differential privacy as an achievable alternative, and emerging impossibility results from the addition of a fairness requirement.
arXiv Detail & Related papers (2026-01-25T17:07:00Z)
- Fairness Meets Privacy: Integrating Differential Privacy and Demographic Parity in Multi-class Classification [6.28122931748758]
We show that differential privacy can be integrated into a fairness-enhancing pipeline with minimal impact on fairness guarantees. We design a post-processing algorithm, called DP2DP, that enforces both demographic parity and differential privacy. Our analysis reveals that our algorithm converges towards its demographic parity objective at essentially the same rate as the best non-private methods from the literature.
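To picture how a post-processing approach of this kind can operate, the generic sketch below (an illustration under stated assumptions, not the DP2DP algorithm itself) privately estimates each group's positive-prediction rate with the Laplace mechanism and then shifts per-group decision thresholds toward a common rate.

```python
# Generic DP post-processing sketch for demographic parity (not DP2DP itself):
# estimate each group's positive-prediction rate under the Laplace mechanism,
# then pick per-group thresholds that steer both groups toward a common rate.
# Note: only the rate estimates are privatized here; a complete algorithm must
# also account for the privacy of the threshold-selection step.
import numpy as np

rng = np.random.default_rng(1)

def laplace_rate(flags, epsilon):
    """DP estimate of the mean of 0/1 flags; the count has L1-sensitivity 1."""
    noisy_count = flags.sum() + rng.laplace(scale=1.0 / epsilon)
    return float(np.clip(noisy_count / len(flags), 0.0, 1.0))

def parity_thresholds(scores, groups, epsilon):
    """Per-group thresholds whose acceptance rates match the average noisy rate.
    The budget is split conservatively across the two group queries."""
    rates = {gv: laplace_rate(scores[groups == gv] > 0.5, epsilon / 2) for gv in (0, 1)}
    target = np.mean(list(rates.values()))
    thresholds = {}
    for gv in (0, 1):
        s = np.sort(scores[groups == gv])[::-1]  # scores, highest first
        k = int(round(target * len(s)))          # accept roughly the top-k
        thresholds[gv] = s[min(k, len(s) - 1)]
    return thresholds

# Usage with synthetic classifier scores that favor group 1:
groups = rng.integers(0, 2, 2000)
scores = np.clip(rng.normal(0.45 + 0.15 * groups, 0.2), 0, 1)
th = parity_thresholds(scores, groups, epsilon=1.0)
for gv in (0, 1):
    rate = (scores[groups == gv] >= th[gv]).mean()
    print(f"group {gv}: threshold={th[gv]:.3f}, positive rate={rate:.3f}")
```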
arXiv Detail & Related papers (2025-11-24T08:31:02Z)
- Accurate Target Privacy Preserving Federated Learning Balancing Fairness and Utility [28.676852732262407]
Federated Learning (FL) enables collaborative model training without data sharing. We introduce a differentially private fair FL algorithm that transforms this multi-objective optimization into a zero-sum game. Our theoretical analysis reveals a surprising inverse relationship: stricter privacy protection limits the system's ability to detect and correct demographic biases.
arXiv Detail & Related papers (2025-10-30T07:14:55Z)
- Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI [6.671649946926508]
We present the first unified large-scale empirical study of privacy-fairness-utility trade-offs in Federated Learning (FL). We compare fairness-aware methods with Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMC). We uncover unexpected interactions: DP mechanisms can negatively impact fairness, and fairness-aware methods can inadvertently reduce privacy effectiveness.
arXiv Detail & Related papers (2025-03-20T15:31:01Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a privacy-preserving fair FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, showing that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- A Three-Way Knot: Privacy, Fairness, and Predictive Performance Dynamics [2.9005223064604078]
Two of the most critical issues are fairness and data privacy.
The balance between privacy, fairness, and predictive performance is complex.
We study this three-way tension and how optimizing each objective impacts the others.
arXiv Detail & Related papers (2023-06-27T15:46:22Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Privacy and fairness are two crucial ethical notions, yet their interactions in FL are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms give tight privacy estimates only under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
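The basic auditing recipe behind results like this can be shown on a mechanism far simpler than a full training run (a minimal sketch, not the paper's improved scheme): run the mechanism on two adjacent inputs, mount a distinguishing attack, and turn the attack's true and false positive rates into an empirical lower bound on epsilon.

```python
# Minimal DP audit: eps-DP implies P[M(D') in S] <= exp(eps) * P[M(D) in S],
# so any attack's TPR/FPR ratio yields the empirical bound eps >= ln(TPR/FPR).
# We audit a Laplace counting query on adjacent counts 10 and 11; confidence
# intervals for the estimated rates are omitted for brevity.
import numpy as np

rng = np.random.default_rng(2)
epsilon, trials = 1.0, 200_000

out0 = 10 + rng.laplace(scale=1.0 / epsilon, size=trials)  # dataset D
out1 = 11 + rng.laplace(scale=1.0 / epsilon, size=trials)  # adjacent dataset D'

# Attack: guess that the extra record is present when the output is large.
threshold = 10.5
tpr = (out1 > threshold).mean()
fpr = (out0 > threshold).mean()
print(f"provable eps = {epsilon}, empirical lower bound ~ {np.log(tpr / fpr):.3f}")
```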
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
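A minimal sketch of the measurement idea (a simplified, centralized stand-in for the paper's federated protocol): release the per-group evaluation counts through the Laplace mechanism and derive the disparity from the noisy counts, so the published statistic never depends directly on any individual's group membership.

```python
# Privately measuring an accuracy disparity: each record contributes to one
# group's size count and at most one correct count, so releasing both with
# Laplace noise of scale 2/eps keeps the total cost at eps-DP under basic
# composition (the two groups compose in parallel). Non-federated sketch.
import numpy as np

rng = np.random.default_rng(3)

def private_accuracy_gap(correct, group, epsilon):
    """Noisy accuracy difference between groups 1 and 0."""
    accs = {}
    for gv in (0, 1):
        mask = group == gv
        n_hat = mask.sum() + rng.laplace(scale=2.0 / epsilon)
        c_hat = correct[mask].sum() + rng.laplace(scale=2.0 / epsilon)
        accs[gv] = float(np.clip(c_hat / max(n_hat, 1.0), 0.0, 1.0))
    return accs[1] - accs[0]

# Synthetic evaluation results: the model is less accurate on group 0.
group = rng.integers(0, 2, 5000)
correct = rng.random(5000) < np.where(group == 1, 0.90, 0.80)
true_gap = correct[group == 1].mean() - correct[group == 0].mean()
print(f"true gap  = {true_gap:.3f}")
print(f"noisy gap = {private_accuracy_gap(correct, group, epsilon=1.0):.3f}")
```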
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
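For context, the PATE-style aggregation that teacher-ensemble methods build on can be sketched as a noisy argmax over teacher votes; the snippet below is illustrative only, and SF-PATE's actual aggregation and privacy accounting may differ.

```python
# PATE-style noisy aggregation: each teacher (possibly an off-the-shelf,
# fairness-enhanced model) votes for a label, and the student learns from
# the argmax of the Laplace-noised vote histogram. One training record can
# flip one teacher's vote, changing two counts by 1 each (L1-sensitivity 2).
import numpy as np

rng = np.random.default_rng(5)

def noisy_argmax_label(teacher_votes, num_classes, epsilon):
    """Aggregate teacher predictions with Laplace noise on the vote counts."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# 50 teachers vote on a single student query; most prefer class 1.
votes = rng.choice([0, 1, 2], size=50, p=[0.2, 0.7, 0.1])
print("aggregated label:", noisy_argmax_label(votes, num_classes=3, epsilon=1.0))
```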
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv Detail & Related papers (2022-02-16T16:50:23Z)
- On the Privacy Risks of Algorithmic Fairness [9.429448411561541]
We study the privacy risks of group fairness through the lens of membership inference attacks.
We show that fairness comes at the cost of privacy, and this cost is not distributed equally.
arXiv Detail & Related papers (2020-11-07T09:15:31Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
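The Lagrangian-dual idea can be sketched separately from the privacy machinery (a toy example; the DP noise the paper injects during training is omitted here): treat the fairness constraint as a penalty with a multiplier, and alternate primal descent on the model weights with dual ascent on the multiplier.

```python
# Lagrangian-dual sketch: enforce |rate_0 - rate_1| <= alpha on a toy logistic
# model by alternating gradient descent on the weights with ascent on the
# multiplier lam. DP noise (e.g., DP-SGD) would be added to the primal step.
import numpy as np

rng = np.random.default_rng(4)
n = 3000
g = rng.integers(0, 2, n)
x = rng.normal(0.8 * g, 1.0, n)
y = (x + rng.normal(0, 1, n) > 0.5).astype(int)
X = np.column_stack([x, np.ones(n)])

w, lam, alpha = np.zeros(2), 0.0, 0.02
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    gap = p[g == 0].mean() - p[g == 1].mean()    # soft demographic parity gap
    grad_loss = X.T @ (p - y) / n                # log-loss gradient
    dgap = (X[g == 0].T @ (p * (1 - p))[g == 0] / (g == 0).sum()
            - X[g == 1].T @ (p * (1 - p))[g == 1] / (g == 1).sum())
    w -= 0.5 * (grad_loss + lam * np.sign(gap) * dgap)   # primal descent
    lam = max(0.0, lam + 0.5 * (abs(gap) - alpha))       # dual ascent

pred = (X @ w > 0).astype(int)
print(f"accuracy={(pred == y).mean():.3f}, "
      f"parity gap={abs(pred[g == 0].mean() - pred[g == 1].mean()):.3f}")
```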
arXiv Detail & Related papers (2020-09-26T10:50:33Z)