Oblivious Data for Fairness with Kernels
- URL: http://arxiv.org/abs/2002.02901v2
- Date: Thu, 19 Nov 2020 19:44:18 GMT
- Title: Oblivious Data for Fairness with Kernels
- Authors: Steffen Grünewälder and Azadeh Khaleghi
- Abstract summary: We investigate the problem of algorithmic fairness in the case where sensitive and non-sensitive features are available.
Our key ingredient for generating such oblivious features is a Hilbert-space-valued conditional expectation.
We propose a plug-in approach and demonstrate how the estimation errors can be controlled.
- Score: 1.599072005190786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the problem of algorithmic fairness in the case where
sensitive and non-sensitive features are available and one aims to generate
new, `oblivious', features that closely approximate the non-sensitive features,
and are only minimally dependent on the sensitive ones. We study this question
in the context of kernel methods. We analyze a relaxed version of the Maximum
Mean Discrepancy criterion which does not guarantee full independence but makes
the optimization problem tractable. We derive a closed-form solution for this
relaxed optimization problem and complement the result with a study of the
dependencies between the newly generated features and the sensitive ones. Our
key ingredient for generating such oblivious features is a Hilbert-space-valued
conditional expectation, which needs to be estimated from data. We propose a
plug-in approach and demonstrate how the estimation errors can be controlled.
While our techniques help reduce the bias, we would like to point out that no
post-processing of any dataset could possibly serve as an alternative to
well-designed experiments.
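A minimal sketch of the plug-in idea sketched in the abstract, under assumptions not stated there: the Hilbert-space-valued conditional expectation E[phi(X) | S] is estimated by kernel ridge regression on the sensitive features, and a residual ("oblivious") representation is formed by subtracting that estimate in the RKHS. Kernel choices, regularization, and the residual construction are illustrative guesses, not the paper's exact closed-form solution.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def conditional_mean_weights(S, reg=1e-3, gamma=1.0):
    """Plug-in estimate of E[phi(X) | S] via kernel ridge regression on S.

    Row i of the returned matrix W gives coefficients such that the estimated
    conditional mean embedding at sample i is sum_j W[i, j] * phi(x_j).
    """
    n = S.shape[0]
    Ks = rbf_kernel(S, S, gamma)
    return Ks @ np.linalg.solve(Ks + n * reg * np.eye(n), np.eye(n))

def oblivious_gram(X, W, gamma=1.0):
    """Gram matrix of the residual features phi(x_i) - E_hat[phi(X) | s_i]."""
    Kx = rbf_kernel(X, X, gamma)
    # <phi(x_i) - m_i, phi(x_j) - m_j> expanded purely through kernel evaluations.
    return Kx - W @ Kx - Kx @ W.T + W @ Kx @ W.T

# Toy usage: X are non-sensitive features, S are sensitive ones.
rng = np.random.default_rng(0)
S = rng.normal(size=(50, 1))
X = S + 0.5 * rng.normal(size=(50, 2))   # X depends on S
W = conditional_mean_weights(S)
G = oblivious_gram(X, W)                 # kernel matrix of the "oblivious" features
```

Any downstream kernel method can then work with G instead of the original Gram matrix of X; how close this comes to independence from S is exactly the question the paper studies via the relaxed MMD criterion.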
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
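The entry above concerns normalized error feedback for compressed optimization. The following toy sketch only illustrates the generic error-feedback-with-normalization pattern (single worker, top-k compressor, plain SGD); it is an assumption, not the paper's algorithm.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def normalized_error_feedback_step(x, grad, err, lr, k):
    """One step of compressed SGD with error feedback and a normalized update."""
    corrected = grad + err            # add back what the compressor dropped before
    msg = top_k(corrected, k)         # compress the corrected gradient
    err = corrected - msg             # remember what was dropped this round
    norm = np.linalg.norm(msg)
    if norm > 0:
        x = x - lr * msg / norm       # apply the *normalized* compressed update
    return x, err
```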
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - A randomized algorithm for nonconvex minimization with inexact evaluations and complexity guarantees [7.08249229857925]
We consider minimization of a smooth nonconvex function with inexact oracle access to the gradient and Hessian.
A novel feature of our method is that when an approximate direction of negative curvature is chosen as the step, its sign is set to be positive or negative with equal probability.
arXiv Detail & Related papers (2023-10-28T22:57:56Z) - Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage [100.8180383245813]
We propose value-based algorithms for offline reinforcement learning (RL).
We show an analogous result for vanilla Q-functions under a soft margin condition.
Our algorithms' loss functions arise from casting the estimation problems as nonlinear convex optimization problems and Lagrangifying.
arXiv Detail & Related papers (2023-02-05T14:22:41Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
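The entry above is about approximating Gateaux derivatives of statistical functionals by finite differencing. A minimal, hedged sketch of that idea for the illustrative functional T(P) = (E_P[X])^2; the functional and step size are choices made here, not the paper's.

```python
import numpy as np

def functional_T(sample, weights):
    """Example statistical functional: squared mean, T(P) = (E_P[X])^2."""
    return float(np.average(sample, weights=weights) ** 2)

def finite_diff_influence(sample, x, eps=1e-4):
    """Approximate the influence function of T at x by finite differencing
    along the path (1 - eps) * P_n + eps * delta_x."""
    n = len(sample)
    base = np.full(n, 1.0 / n)
    augmented = np.append(sample, x)
    perturbed = np.append((1 - eps) * base, eps)
    return (functional_T(augmented, perturbed) - functional_T(sample, base)) / eps

# For T(P) = mu^2 the exact influence function is 2 * mu * (x - mu).
rng = np.random.default_rng(1)
data = rng.normal(loc=1.0, size=1000)
print(finite_diff_influence(data, x=3.0))
```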
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - A sampling criterion for constrained Bayesian optimization with uncertainties [0.0]
We consider the problem of chance constrained optimization where it is sought to optimize a function and satisfy constraints, both of which are affected by uncertainties.
To tackle such problems, we propose a new Bayesian optimization method.
It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space.
arXiv Detail & Related papers (2021-03-09T20:35:56Z) - Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
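As a point of reference for the entry above, here is a compact sketch of plain kernel k-means with Lloyd-style updates (not the paper's MM-based generalization); distances to cluster centroids in feature space are computed entirely through the precomputed Gram matrix.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Lloyd-style kernel k-means on a precomputed Gram matrix K (n x n)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                dist[:, c] = np.inf
                continue
            m = len(members)
            # ||phi(x_i) - centroid_c||^2 expressed via kernel evaluations
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(axis=1) / m
                          + K[np.ix_(members, members)].sum() / m**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage with an RBF Gram matrix on two well-separated blobs.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(100, 2)), rng.normal(loc=4.0, size=(100, 2))])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
labels = kernel_kmeans(np.exp(-0.5 * sq), k=2)
```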
arXiv Detail & Related papers (2020-11-12T16:07:18Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant adversarial fraction can be solved by gradient descent.
Our work establishes an intriguing connection between robust mean estimation and non-convex optimization.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
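For the last entry, a toy illustration of treating mean estimation as a first-order optimization problem. Note that this uses a simple Huber loss, a much weaker notion of robustness than the adversarial-contamination setting the paper handles; it only conveys the "robust mean via gradient descent" framing.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss with respect to the residual r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_mean_gd(X, lr=0.1, n_iter=200, delta=1.0):
    """Estimate a coordinate-wise Huberized mean of X by gradient descent."""
    mu = np.median(X, axis=0)                      # robust initialization
    for _ in range(n_iter):
        grad = huber_grad(mu - X, delta).mean(axis=0)
        mu = mu - lr * grad
    return mu

rng = np.random.default_rng(2)
clean = rng.normal(size=(950, 5))
outliers = 50.0 + rng.normal(size=(50, 5))         # a small contaminated fraction
print(robust_mean_gd(np.vstack([clean, outliers])))
```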
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.