Efficient Fairness-Performance Pareto Front Computation
- URL: http://arxiv.org/abs/2409.17643v1
- Date: Thu, 26 Sep 2024 08:46:48 GMT
- Title: Efficient Fairness-Performance Pareto Front Computation
- Authors: Mark Kozdoba, Binyamin Perets and Shie Mannor
- Abstract summary: We show that optimal fair representations possess several useful structural properties.
We then show that these approximating problems can be solved efficiently via concave-convex programming methods.
- Score: 51.558848491038916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a well known intrinsic trade-off between the fairness of a
representation and the performance of classifiers derived from the
representation. Due to the complexity of optimisation algorithms in most modern
representation learning approaches, for a given method it may be non-trivial to
decide whether the obtained fairness-performance curve of the method is
optimal, i.e., whether it is close to the true Pareto front for these
quantities for the underlying data distribution.
In this paper we propose a new method to compute the optimal Pareto front,
which does not require the training of complex representation models. We show
that optimal fair representations possess several useful structural properties,
and that these properties enable a reduction of the computation of the Pareto
Front to a compact discrete problem. We then also show that these compact
approximating problems can be efficiently solved via off-the-shelf
concave-convex programming methods.
Since our approach is independent of the specific model of representations,
it may be used as the benchmark to which representation learning algorithms may
be compared. We experimentally evaluate the approach on a number of real world
benchmark datasets.
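The core idea, reducing the Pareto front computation to a compact discrete problem, can be illustrated with a minimal sketch. The sketch below is not the paper's actual reduction or its concave-convex solver; it only assumes each candidate representation is summarized by two hypothetical scores, (unfairness, error), with lower being better in both, and extracts the non-dominated trade-off points from a finite candidate set.

```python
import random

def pareto_front(points):
    """Return the (unfairness, error) pairs not dominated by any other
    pair, sorted by unfairness (lower is better in both coordinates)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

# Hypothetical candidate set standing in for the compact discrete problem.
random.seed(0)
candidates = [(random.random(), random.random()) for _ in range(200)]
front = pareto_front(candidates)

# Scalarization view: for any trade-off weight lam in [0, 1], the minimizer
# of the weighted objective lam*unfairness + (1-lam)*error lies on the front.
for lam in [i / 10 for i in range(11)]:
    best = min(candidates, key=lambda p: lam * p[0] + (1 - lam) * p[1])
    assert best in front
```

Along the resulting front, unfairness can only decrease at the cost of higher error, which is exactly the trade-off curve a representation learning method would be benchmarked against.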
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Sparse Variational Student-t Processes [8.46450148172407]
Student-t Processes are used to model heavy-tailed distributions and datasets with outliers.
We propose a sparse representation framework to allow Student-t Processes to be more flexible for real-world datasets.
We evaluate two proposed approaches on various synthetic and real-world datasets from UCI and Kaggle.
arXiv Detail & Related papers (2023-12-09T12:55:20Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Efficient Alternating Minimization Solvers for Wyner Multi-View Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, favorable local-minima complexity in the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Convex Shape Representation with Binary Labels for Image Segmentation: Models and Fast Algorithms [7.847719964338735]
We present a novel and effective binary representation for convex shapes.
We show the equivalence between the shape convexity and some properties of the associated indicator function.
arXiv Detail & Related papers (2020-02-22T01:55:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.