Achieving Fairness with a Simple Ridge Penalty
- URL: http://arxiv.org/abs/2105.13817v1
- Date: Tue, 18 May 2021 15:43:57 GMT
- Title: Achieving Fairness with a Simple Ridge Penalty
- Authors: Marco Scutari and Manuel Proissl
- Abstract summary: We propose an alternative, more flexible approach to this task that enforces a user-defined level fairness constraint.
Our proposal produces three limitations of the former approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating a fair linear regression model subject to a user-defined level of
fairness can be achieved by solving a non-convex quadratic programming
optimisation problem with quadratic constraints. In this work we propose an
alternative, more flexible approach to this task that enforces a user-defined
level of fairness by means of a ridge penalty. Our proposal addresses three
limitations of the former approach: it produces regression coefficient
estimates that are more intuitive to interpret; it is mathematically simpler,
with a solution that is partly in closed form; and it is easier to extend
beyond linear regression. We evaluate both approaches empirically on five
different data sets, and we find that our proposal provides better goodness of
fit and better predictive accuracy while being equally effective at achieving
the desired fairness level. In addition we highlight a source of bias in the
original experimental evaluation of the non-convex quadratic approach, and we
discuss how our proposal can be extended to a wide range of models.
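As a rough, non-authoritative sketch of the general idea above (not the authors' exact estimator): fairness can be enforced by placing a ridge penalty only on the coefficients of the sensitive attributes, growing the penalty until those attributes account for no more than a user-chosen share of the fitted variance. The function names, the variance-share criterion, and the grid search over the penalty weight below are illustrative assumptions.

```python
# Minimal sketch of fairness via a ridge penalty (an illustration, not the
# paper's exact formulation): only the coefficients g of the sensitive
# attributes S are penalised, and the penalty weight lam is increased until
# the share of fitted variance explained by S drops below a chosen bound.
import numpy as np

def partially_penalised_fit(X, S, y, lam):
    """Solve min ||y - X b - S g||^2 + lam * ||g||^2 in closed form."""
    n, p = X.shape
    q = S.shape[1]
    Z = np.hstack([X, S])
    A = Z.T @ Z
    A[p:, p:] += lam * np.eye(q)          # ridge penalty on g only
    coef = np.linalg.solve(A, Z.T @ y)
    return coef[:p], coef[p:]

def fair_ridge(X, S, y, max_share=0.05, lams=np.logspace(-3, 6, 200)):
    """Pick the smallest penalty whose fit keeps the (heuristic) variance
    share of S below max_share, a stand-in for the user-defined fairness
    level."""
    for lam in lams:
        b, g = partially_penalised_fit(X, S, y, lam)
        fitted = X @ b + S @ g
        share = np.var(S @ g) / max(np.var(fitted), 1e-12)
        if share <= max_share:
            return b, g, lam
    return b, g, lams[-1]
```

The inner solve is the closed-form part: a standard generalised ridge system in which only the block of coefficients belonging to the sensitive attributes is regularised.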
Related papers
- Alpha and Prejudice: Improving $\alpha$-sized Worst-case Fairness via Intrinsic Reweighting [34.954141077528334]
Worst-case fairness with off-the-shelf demographic groups achieves parity by maximizing the model utility of the worst-off group.
Recent advances have reframed this learning problem by introducing the lower bound of minimal partition ratio.
arXiv Detail & Related papers (2024-11-05T13:04:05Z) - Demographic parity in regression and classification within the unawareness framework [8.057006406834466]
We characterize the optimal fair regression function when minimizing the quadratic loss.
We also study the connection between optimal fair cost-sensitive classification, and optimal fair regression.
arXiv Detail & Related papers (2024-09-04T06:43:17Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
arXiv Detail & Related papers (2023-08-30T08:46:46Z) - Mean Parity Fair Regression in RKHS [43.98593032593897]
We study the fair regression problem under the notion of Mean Parity (MP) fairness.
We address this problem by leveraging reproducing kernel Hilbert spaces (RKHS).
We derive a corresponding regression function that can be implemented efficiently and provides interpretable tradeoffs; a toy illustration of the mean-parity notion appears after this list.
arXiv Detail & Related papers (2023-02-21T02:44:50Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive settings such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination [29.85468294601847]
We revisit two classic high-dimensional online learning problems, namely linear regression and contextual bandits.
We show that our algorithms succeed where conventional methods fail.
arXiv Detail & Related papers (2020-10-08T17:59:05Z) - Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
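As a toy illustration of the Mean Parity notion referenced in the RKHS entry above (not the RKHS-based estimator of that paper): MP asks that the mean prediction be equal across the groups of the sensitive attribute, and a simple post-hoc shift of group means makes the accuracy-fairness tradeoff visible. The function name and the knob `t` are hypothetical.

```python
# Toy illustration of Mean Parity (MP): shift each group's predictions so
# their means move toward the overall mean. t=0 leaves predictions unchanged,
# t=1 equalises group means exactly. This is not the cited paper's method.
import numpy as np

def mean_parity_shift(preds, groups, t=1.0):
    preds = np.asarray(preds, dtype=float)
    groups = np.asarray(groups)
    overall = preds.mean()
    adjusted = preds.copy()
    for g in np.unique(groups):
        mask = groups == g
        adjusted[mask] -= t * (preds[mask].mean() - overall)
    return adjusted
```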
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.