SoK: Taming the Triangle -- On the Interplays between Fairness,
Interpretability and Privacy in Machine Learning
- URL: http://arxiv.org/abs/2312.16191v1
- Date: Fri, 22 Dec 2023 08:11:33 GMT
- Title: SoK: Taming the Triangle -- On the Interplays between Fairness,
Interpretability and Privacy in Machine Learning
- Authors: Julien Ferry (LAAS-ROC), Ulrich Aïvodji (ETS), Sébastien Gambs
(UQAM), Marie-José Huguet (LAAS-ROC), Mohamed Siala (LAAS-ROC)
- Abstract summary: Machine learning techniques are increasingly used for high-stakes decision-making.
It is crucial to ensure that the models learnt can be audited or understood by human users.
Interpretability, fairness and privacy are key requirements for the development of responsible machine learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning techniques are increasingly used for high-stakes
decision-making, such as college admissions, loan attribution or recidivism
prediction. Thus, it is crucial to ensure that the models learnt can be audited
or understood by human users, do not create or reproduce discrimination or
bias, and do not leak sensitive information regarding their training data.
Indeed, interpretability, fairness and privacy are key requirements for the
development of responsible machine learning, and all three have been studied
extensively during the last decade. However, they were mainly considered in
isolation, while in practice they interplay with each other, either positively
or negatively. In this Systematization of Knowledge (SoK) paper, we survey the
literature on the interactions between these three desiderata. More precisely,
for each pairwise interaction, we summarize the identified synergies and
tensions. These findings highlight several fundamental theoretical and
empirical conflicts, while also demonstrating that jointly considering these
different requirements is challenging when one aims at preserving a high level
of utility. To address this issue, we also discuss possible conciliation
mechanisms, showing that careful design can enable these different concerns
to be handled successfully in practice.
Related papers
- Learning Multimodal Cues of Children's Uncertainty [19.349368123567658]
We present a dataset annotated in collaboration with developmental and cognitive psychologists for the purpose of studying nonverbal cues of uncertainty.
We then present an analysis of the data, studying different roles of uncertainty and its relationship with task difficulty and performance.
Lastly, we present a multimodal machine learning model that can predict uncertainty given a real-time video clip of a participant.
arXiv Detail & Related papers (2024-10-17T21:46:00Z) - Between Randomness and Arbitrariness: Some Lessons for Reliable Machine Learning at Scale [2.50194939587674]
This dissertation quantifies and mitigates sources of arbitrariness in ML, and randomness in uncertainty estimation and optimization algorithms, in order to achieve scalability without sacrificing reliability.
It serves as an empirical proof by example that research on reliable measurement for machine learning is intimately bound up with research in law and policy.
arXiv Detail & Related papers (2024-06-13T19:29:37Z) - Self-Distilled Disentangled Learning for Counterfactual Prediction [49.84163147971955]
We propose the Self-Distilled Disentanglement framework, known as $SD2$.
Grounded in information theory, it ensures theoretically sound independent disentangled representations without intricate mutual information estimator designs.
Our experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach.
arXiv Detail & Related papers (2024-06-09T16:58:19Z) - Interactive Ontology Matching with Cost-Efficient Learning [2.006461411375746]
This work introduces DualLoop, an active learning method tailored to ontology matching.
Compared to existing active learning methods, DualLoop consistently achieves better F1 scores and recall.
We report our operational performance results within the Architecture, Engineering, Construction (AEC) industry sector.
arXiv Detail & Related papers (2024-04-11T11:53:14Z) - Towards Trustworthy and Aligned Machine Learning: A Data-centric Survey
with Causality Perspectives [11.63431725146897]
The trustworthiness of machine learning has emerged as a critical topic in the field.
This survey presents the background of trustworthy machine learning development using a unified set of concepts.
We provide a unified language with mathematical vocabulary to link these methods across robustness, adversarial robustness, interpretability, and fairness.
arXiv Detail & Related papers (2023-07-31T17:11:35Z) - Unraveling the Interconnected Axes of Heterogeneity in Machine Learning
for Democratic and Inclusive Advancements [16.514990457235932]
We identify and analyze three axes of heterogeneity that significantly influence the trajectory of machine learning products.
We demonstrate how these axes are interdependent and mutually influence one another, emphasizing the need to consider and address them jointly.
We discuss how this fragmented study of the three axes poses a significant challenge, leading to an impractical solution space that fails to reflect real-world scenarios.
arXiv Detail & Related papers (2023-06-11T20:47:58Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
The proposed Quiz-based Knowledge Tracing (QKT) model achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
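The Lagrangian-dual idea behind this last entry can be illustrated with a minimal sketch. This is not the paper's actual method: the function and parameter names (`train_private_fair`, `noise_std`, `dual_lr`) are hypothetical, the fairness constraint shown is a demographic-parity gap, and per-step gradient clipping with Gaussian noise stands in for a full DP-SGD privacy analysis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_private_fair(X, y, group, epochs=200, lr=0.1,
                       clip=1.0, noise_std=0.05, dual_lr=0.5,
                       tol=0.05, seed=0):
    """Sketch: logistic regression with a demographic-parity constraint
    enforced via a Lagrange multiplier, plus clipped and noised gradients
    as a stand-in for DP-SGD-style privatization."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    lam = 0.0  # Lagrange multiplier for the fairness constraint
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # gradient of the average logistic loss
        grad = X.T @ (p - y) / len(y)
        # demographic-parity gap: difference in mean score between groups
        gap = p[group == 1].mean() - p[group == 0].mean()
        # gradient of |gap| w.r.t. w, via per-group score gradients
        s = p * (1 - p)
        g1 = (s[group == 1][:, None] * X[group == 1]).mean(axis=0)
        g0 = (s[group == 0][:, None] * X[group == 0]).mean(axis=0)
        grad = grad + lam * np.sign(gap) * (g1 - g0)
        # clip and noise the gradient (illustrative privatization step)
        grad = grad / max(1.0, np.linalg.norm(grad) / clip)
        grad = grad + rng.normal(0.0, noise_std * clip, size=grad.shape)
        w -= lr * grad
        # dual ascent: raise lam while the constraint |gap| <= tol is violated
        lam = max(0.0, lam + dual_lr * (abs(gap) - tol))
    return w, lam
```

The multiplier grows only while the fairness constraint is violated, so the fairness penalty adapts during training rather than being fixed in advance; this is the essence of the duality-based conciliation the entry describes.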
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.