Mean Parity Fair Regression in RKHS
- URL: http://arxiv.org/abs/2302.10409v1
- Date: Tue, 21 Feb 2023 02:44:50 GMT
- Title: Mean Parity Fair Regression in RKHS
- Authors: Shaokui Wei, Jiayin Liu, Bing Li, Hongyuan Zha
- Abstract summary: We study the fair regression problem under the notion of Mean Parity (MP) fairness.
We address this problem by leveraging reproducing kernel Hilbert spaces (RKHS).
We derive a corresponding regression function that can be implemented efficiently and provides interpretable tradeoffs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the fair regression problem under the notion of Mean Parity (MP)
fairness, which requires the conditional mean of the learned function output to
be constant with respect to the sensitive attributes. We address this problem
by leveraging reproducing kernel Hilbert space (RKHS) to construct the
functional space whose members are guaranteed to satisfy the fairness
constraints. The proposed functional space suggests a closed-form solution for
the fair regression problem that is naturally compatible with multiple
sensitive attributes. Furthermore, by formulating the fairness-accuracy
tradeoff as a relaxed fair regression problem, we derive a corresponding
regression function that can be implemented efficiently and provides
interpretable tradeoffs. More importantly, under some mild assumptions, the
proposed method can be applied to regression problems with a covariance-based
notion of fairness. Experimental results on benchmark datasets show that the
proposed methods achieve competitive, and in some cases superior, performance
compared with several state-of-the-art methods.
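The MP constraint above requires the conditional mean of the predictions to be constant across sensitive-attribute groups. A minimal sketch of what that criterion measures, and of a naive post-hoc adjustment that satisfies it, is below. Note this is only an illustration of the fairness notion itself, not the paper's RKHS construction; the function names and the synthetic data are assumptions for the example.

```python
import numpy as np

def mean_parity_gap(preds, s):
    """Largest difference between group-conditional means of the predictions.
    Mean Parity holds exactly when this gap is zero."""
    groups = np.unique(s)
    means = np.array([preds[s == g].mean() for g in groups])
    return means.max() - means.min()

def enforce_mean_parity(preds, s):
    """Naive post-hoc fix: shift each group's predictions so every
    group-conditional mean equals the overall mean. The paper instead
    constructs an RKHS subspace whose members satisfy the constraint
    by design; this is just the simplest way to see the criterion met."""
    adjusted = preds.astype(float).copy()
    overall = preds.mean()
    for g in np.unique(s):
        mask = s == g
        adjusted[mask] += overall - preds[mask].mean()
    return adjusted

# Synthetic predictions whose mean depends on a binary sensitive attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
preds = rng.normal(loc=s.astype(float), scale=1.0)  # group means ~0 and ~1

fair = enforce_mean_parity(preds, s)
# mean_parity_gap(preds, s) is large; mean_parity_gap(fair, s) is near zero,
# and the overall mean of the predictions is unchanged by the shift.
```

Because the per-group shifts are weighted by group size, the adjustment leaves the overall mean prediction intact; only the between-group disparity is removed.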
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used method for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Demographic parity in regression and classification within the unawareness framework [8.057006406834466]
We characterize the optimal fair regression function when minimizing the quadratic loss.
We also study the connection between optimal fair cost-sensitive classification, and optimal fair regression.
arXiv Detail & Related papers (2024-09-04T06:43:17Z) - Fair learning with Wasserstein barycenters for non-decomposable
performance measures [8.508198765617198]
We show that maximizing accuracy under the demographic parity constraint is equivalent to solving a corresponding regression problem.
We extend this result to linear-fractional classification measures (e.g., $\mathrm{F}$-score, AM measure, balanced accuracy, etc.).
arXiv Detail & Related papers (2022-09-01T13:06:43Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Achieving Fairness with a Simple Ridge Penalty [0.0]
We propose an alternative, more flexible approach to this task that enforces a user-defined level of fairness.
Our proposal addresses three limitations of the former approach.
arXiv Detail & Related papers (2021-05-18T15:43:57Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Support estimation in high-dimensional heteroscedastic mean regression [2.28438857884398]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
arXiv Detail & Related papers (2020-11-03T09:46:31Z) - Fair Regression with Wasserstein Barycenters [39.818025466204055]
We study the problem of learning a real-valued function that satisfies the Demographic Parity constraint.
It requires the distribution of the predicted output to be independent of the sensitive attribute.
We establish a connection between fair regression and optimal transport theory, based on which we derive a closed-form expression for the optimal fair predictor.
arXiv Detail & Related papers (2020-06-12T16:10:41Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to square loss given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.