Mean Parity Fair Regression in RKHS
- URL: http://arxiv.org/abs/2302.10409v1
- Date: Tue, 21 Feb 2023 02:44:50 GMT
- Title: Mean Parity Fair Regression in RKHS
- Authors: Shaokui Wei, Jiayin Liu, Bing Li, Hongyuan Zha
- Abstract summary: We study the fair regression problem under the notion of Mean Parity (MP) fairness.
We address this problem by leveraging a reproducing kernel Hilbert space (RKHS) to construct a functional space whose members satisfy the fairness constraints.
We derive a corresponding regression function that can be implemented efficiently and provides interpretable tradeoffs.
- Score: 43.98593032593897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the fair regression problem under the notion of Mean Parity (MP)
fairness, which requires the conditional mean of the learned function output to
be constant with respect to the sensitive attributes. We address this problem
by leveraging reproducing kernel Hilbert space (RKHS) to construct the
functional space whose members are guaranteed to satisfy the fairness
constraints. The proposed functional space suggests a closed-form solution for
the fair regression problem that is naturally compatible with multiple
sensitive attributes. Furthermore, by formulating the fairness-accuracy
tradeoff as a relaxed fair regression problem, we derive a corresponding
regression function that can be implemented efficiently and provides
interpretable tradeoffs. More importantly, under some mild assumptions, the
proposed method can be applied to regression problems with a covariance-based
notion of fairness. Experimental results on benchmark datasets show the
proposed methods achieve competitive and even superior performance compared
with several state-of-the-art methods.
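The abstract describes restricting kernel regression to a functional space whose members satisfy the mean-parity constraint, which then admits a closed-form solution. As a hedged sketch of that idea (not the paper's exact construction), the snippet below runs kernel ridge regression over the representer coefficients, but restricted to the null space of the linear constraints that force the empirical group means of the fitted function to coincide; `mp_fair_krr` and `rbf_kernel` are illustrative names, and the in-sample parity constraint is only a proxy for the population-level condition.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mp_fair_krr(X, y, groups, lam=1e-2, gamma=1.0):
    """Kernel ridge regression restricted to functions whose empirical
    group means agree (a sample-level proxy for mean parity)."""
    K = rbf_kernel(X, X, gamma)
    labels = np.unique(groups)
    # One constraint row per non-reference group: equal in-sample means.
    ref = (groups == labels[0]) / (groups == labels[0]).sum()
    M = np.stack([(groups == g) / (groups == g).sum() - ref for g in labels[1:]])
    C = M @ K  # C @ alpha = 0  <=>  group means of K @ alpha coincide
    # Parametrize the feasible set alpha = B @ beta via the null space of C.
    _, s, Vt = np.linalg.svd(C, full_matrices=True)
    rank = int((s > 1e-10).sum())
    B = Vt[rank:].T
    # Ridge normal equations restricted to the constrained subspace:
    # minimize ||K a - y||^2 + lam * a^T K a  over  a = B @ beta.
    A = B.T @ (K @ K + lam * K) @ B
    beta = np.linalg.solve(A, B.T @ (K @ y))
    return B @ beta, K
```

Because the coefficient vector lives in the null space of the constraint matrix by construction, the fitted group means agree up to numerical precision regardless of the regularization strength, which is what makes the accuracy-fairness tradeoff interpretable in this kind of formulation.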
Related papers
- ACCon: Angle-Compensated Contrastive Regularizer for Deep Regression [28.491074229136014]
In deep regression, capturing the relationship among continuous labels in feature space is a fundamental challenge that has attracted increasing interest.
Existing approaches often rely on order-aware representation learning or distance-based weighting.
We propose an angle-compensated contrastive regularizer for deep regression, which adjusts the cosine distance between anchor and negative samples.
arXiv Detail & Related papers (2025-01-13T03:55:59Z)
- RieszBoost: Gradient Boosting for Riesz Regression [49.737777802061984]
We propose a novel gradient boosting algorithm to directly estimate the Riesz representer without requiring its explicit analytical form.
We show that our algorithm performs on par with or better than indirect estimation techniques across a range of functionals.
arXiv Detail & Related papers (2025-01-08T23:04:32Z)
- Fair and Accurate Regression: Strong Formulations and Algorithms [5.93858665501805]
This paper introduces mixed-integer optimization methods to solve regression problems that incorporate fairness metrics.
We propose an exact formulation for training fair regression models.
Numerical experiments conducted on fair least squares and fair logistic regression problems show competitive statistical performance.
arXiv Detail & Related papers (2024-12-22T18:04:54Z)
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
We study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation.
First, we derive a novel high-dimensional probability convergence guarantee that depends explicitly on the variance and holds under weak conditions.
We further establish refined high-dimensional Berry-Esseen bounds over the class of convex sets that guarantee faster rates than those in the literature.
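The estimator analyzed in this entry can be illustrated with a toy sketch: TD(0) with linear (here tabular, i.e. identity-feature) function approximation, maintaining a Polyak-Ruppert running average of the iterates. The two-state chain, step size, and function names below are illustrative assumptions, not the paper's setting, and nothing here reproduces its convergence or Berry-Esseen analysis.

```python
import numpy as np

def td0_polyak(transitions, rewards, gamma=0.9, alpha=0.05, steps=20000):
    """TD(0) with identity features and Polyak-Ruppert iterate averaging
    on a deterministic two-state chain (a toy sketch only)."""
    theta = np.zeros(2)   # value estimates, one per state
    avg = np.zeros(2)     # Polyak-Ruppert average of the iterates
    s = 0
    for t in range(steps):
        s_next = transitions[s]
        # TD error: reward plus discounted bootstrap minus current estimate.
        delta = rewards[s] + gamma * theta[s_next] - theta[s]
        theta[s] += alpha * delta
        avg += (theta - avg) / (t + 1)  # incremental running mean
        s = s_next
    return theta, avg
```

For the deterministic chain 0 -> 1 -> 0 with rewards (1, 0), the Bellman equations give V(0) = 1 / (1 - gamma^2), and both the last iterate and the averaged iterate approach that value.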
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Demographic parity in regression and classification within the unawareness framework [8.057006406834466]
We characterize the optimal fair regression function when minimizing the quadratic loss.
We also study the connection between optimal fair cost-sensitive classification, and optimal fair regression.
arXiv Detail & Related papers (2024-09-04T06:43:17Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Achieving Fairness with a Simple Ridge Penalty [0.0]
We propose an alternative, more flexible approach to this task that enforces a user-defined level of fairness as a constraint.
Our proposal addresses three limitations of the former approach.
arXiv Detail & Related papers (2021-05-18T15:43:57Z)
- Support estimation in high-dimensional heteroscedastic mean regression [2.07180164747172]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
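The entry above refers to a strictly convex, smooth variant of the Huber loss. The paper's exact loss may differ, but the pseudo-Huber loss is a standard example of such a variant: it is smooth everywhere, strictly convex, behaves quadratically near the origin, and grows only linearly in the tails, which limits the influence of heavy-tailed errors.

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    """Pseudo-Huber loss: ~ r^2 / 2 near zero, ~ delta * |r| in the tails.
    A smooth, strictly convex surrogate for the classical Huber loss."""
    return delta**2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)
```

The tuning parameter `delta` sets the transition scale between the quadratic and linear regimes, mirroring the role of the problem-dependent tuning parameter mentioned in the summary.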
arXiv Detail & Related papers (2020-11-03T09:46:31Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to square loss, given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.