A Three-Way Knot: Privacy, Fairness, and Predictive Performance Dynamics
- URL: http://arxiv.org/abs/2306.15567v1
- Date: Tue, 27 Jun 2023 15:46:22 GMT
- Title: A Three-Way Knot: Privacy, Fairness, and Predictive Performance Dynamics
- Authors: Tânia Carvalho, Nuno Moniz and Luís Antunes
- Abstract summary: Two of the most critical issues are fairness and data privacy.
The balance between privacy, fairness, and predictive performance is complex.
We study this three-way tension and how the optimization of each vector impacts others.
- Score: 2.9005223064604078
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As the frontier of machine learning applications moves further into human
interaction, multiple concerns arise regarding automated decision-making. Two
of the most critical issues are fairness and data privacy. On the one hand, one
must guarantee that automated decisions are not biased against certain groups,
especially those unprotected or marginalized. On the other hand, one must
ensure that the use of personal information fully abides by privacy regulations
and that user identities are kept safe. The balance between privacy, fairness,
and predictive performance is complex. However, despite their potential
societal impact, the dynamics between these optimization vectors remain
poorly understood. In this paper, we study this three-way
tension and how the optimization of each vector impacts others, aiming to
inform the future development of safe applications. In light of claims that
predictive performance and fairness can be jointly optimized, we find this is
only possible at the expense of data privacy. Overall, experimental results
show that one of the vectors will be penalized regardless of which of the three
we optimize. Nonetheless, we find promising avenues for future work in joint
optimization solutions, where smaller trade-offs are observed between the three
vectors.
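Each of the three vectors in the abstract has a standard quantitative proxy. As a minimal illustrative sketch (not the paper's actual experimental setup), assuming binary predictions and a binary sensitive attribute, predictive performance can be measured as accuracy, fairness as the demographic parity gap, and privacy enforced with an ε-differentially-private Laplace mechanism on a counting query:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Predictive performance: fraction of correct predictions."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def demographic_parity_gap(y_pred, group):
    """Fairness: |P(yhat=1 | group=0) - P(yhat=1 | group=1)|."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

def laplace_count(values, epsilon, rng=None):
    """Privacy: epsilon-DP count via the Laplace mechanism (sensitivity 1)."""
    rng = rng or np.random.default_rng()
    return len(values) + rng.laplace(scale=1.0 / epsilon)

# Toy usage: lower epsilon means a noisier (more private) count,
# and the gap quantifies disparity between the two groups.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))             # 0.6666...
print(demographic_parity_gap(y_pred, group))
```

Optimizing any one of these quantities (less noise, a smaller gap, higher accuracy) generally moves the other two, which is the tension the paper studies empirically.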
Related papers
- Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models [28.389198065125314]
Selective forgetting (also known as machine unlearning) has shown promise for privacy and data removal tasks. Despite its promise, selective forgetting raises significant privacy concerns. We present the first comprehensive benchmark for evaluating privacy vulnerabilities in selective forgetting.
arXiv Detail & Related papers (2025-12-19T20:04:06Z) - FAIRPLAI: A Human-in-the-Loop Approach to Fair and Private Machine Learning [0.09999629695552194]
We introduce FAIRPLAI (Fair and Private Learning with Active Human Influence), a framework that integrates human oversight into the design and deployment of machine learning systems. FAIRPLAI consistently preserves strong privacy protections while reducing fairness disparities relative to automated baselines.
arXiv Detail & Related papers (2025-11-11T19:07:46Z) - An Interactive Framework for Finding the Optimal Trade-off in Differential Privacy [20.038766371144526]
Differential privacy (DP) is the standard for privacy-preserving analysis, and it entails a fundamental trade-off between privacy guarantees and model performance. In particular, we present the user with hypothetical trade-off curves and ask them to pick their preferred trade-off. Our experiments on differentially private logistic regression and deep transfer learning across six real-world datasets show that our method converges to the optimal privacy-accuracy trade-off.
arXiv Detail & Related papers (2025-09-04T15:02:10Z) - PAUSE: Low-Latency and Privacy-Aware Active User Selection for Federated Learning [49.02872047060618]
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without the need to share potentially private data. FL poses two key challenges: first, the accumulation of privacy leakage over time, and second, communication latency. We propose a method that jointly addresses the accumulation of privacy leakage and communication latency via active user selection.
arXiv Detail & Related papers (2025-03-17T13:50:35Z) - Privacy-Preserving Distributed Optimization and Learning [2.1271873498506038]
We discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation.
We introduce several differential-privacy algorithms that can simultaneously ensure privacy and optimization accuracy.
We provide example applications in several machine learning problems to confirm the real-world effectiveness of these algorithms.
arXiv Detail & Related papers (2024-02-29T22:18:05Z) - Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised
Privacy-preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Differential Privacy via Distributionally Robust Optimization [8.409434654561789]
We develop a class of mechanisms that enjoy non-asymptotic and unconditional optimality guarantees.
Our upper (primal) bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower (dual) bounds.
Our numerical experiments demonstrate that our perturbations can outperform the previously best results from the literature on artificial as well as standard benchmark problems.
arXiv Detail & Related papers (2023-04-25T09:31:47Z) - Best Practices for 2-Body Pose Forecasting [58.661899246497896]
We review the progress in human pose forecasting and provide an in-depth assessment of the single-person practices that perform best.
Other single-person practices do not transfer to 2-body, so the proposed best ones do not include hierarchical body modeling or attention-based interaction encoding.
Our proposed 2-body pose forecasting best practices yield a performance improvement of 21.9% over the state-of-the-art on the most recent ExPI dataset.
arXiv Detail & Related papers (2023-04-12T10:46:23Z) - Privacy-Preserving Distributed Expectation Maximization for Gaussian
Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach has superior performance compared to the existing approach in terms of both accuracy and privacy level.
arXiv Detail & Related papers (2022-09-16T09:58:03Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Because the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Towards a Data Privacy-Predictive Performance Trade-off [2.580765958706854]
We evaluate the existence of a trade-off between data privacy and predictive performance in classification tasks.
In contrast to previous literature, we find that the higher the level of privacy, the greater the impact on predictive performance.
arXiv Detail & Related papers (2022-01-13T21:48:51Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
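The Lagrangian dual idea above can be sketched in a few lines of NumPy: a primal gradient step on a fairness-penalized logistic-regression loss, alternated with dual ascent on the multiplier of a soft demographic-parity constraint. This is an illustrative sketch under assumed toy data, learning rates, and constraint definition, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 features, binary label y, binary sensitive attribute s.
n = 200
X = rng.normal(size=(n, 2))
s = (rng.random(n) < 0.5).astype(float)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lam = 0.0                 # Lagrange multiplier for the fairness constraint
lr_w, lr_lam = 0.1, 0.05

for _ in range(500):
    p = sigmoid(X @ w)
    # Soft demographic-parity gap: difference in mean scores across groups.
    gap = p[s == 1].mean() - p[s == 0].mean()
    # Primal step: gradient of the Lagrangian (BCE term + lam * |gap| term).
    grad_bce = X.T @ (p - y) / n
    dp = p * (1 - p)
    grad_gap = (X[s == 1].T @ dp[s == 1]) / (s == 1).sum() \
             - (X[s == 0].T @ dp[s == 0]) / (s == 0).sum()
    w -= lr_w * (grad_bce + lam * np.sign(gap) * grad_gap)
    # Dual ascent: raise the penalty while the constraint is violated.
    lam = max(0.0, lam + lr_lam * abs(gap))
```

The multiplier automatically tunes how hard the fairness constraint is enforced; combining this with differentially private gradients is where the privacy cost enters.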
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.