Fair Supervised Learning Through Constraints on Smooth Nonconvex Unfairness-Measure Surrogates
- URL: http://arxiv.org/abs/2505.15788v1
- Date: Wed, 21 May 2025 17:41:06 GMT
- Title: Fair Supervised Learning Through Constraints on Smooth Nonconvex Unfairness-Measure Surrogates
- Authors: Zahra Khatti, Daniel P. Robinson, Frank E. Curtis
- Abstract summary: A new strategy for fair supervised machine learning is proposed; its main advantages over existing approaches in the literature are summarized in the abstract below.
- Score: 6.516872951510096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new strategy for fair supervised machine learning is proposed. The main advantages of the proposed strategy as compared to others in the literature are as follows. (a) We introduce a new smooth nonconvex surrogate to approximate the Heaviside functions involved in discontinuous unfairness measures. The surrogate is based on smoothing methods from the optimization literature, and is new for the fair supervised learning literature. The surrogate is a tight approximation which ensures the trained prediction models are fair, as opposed to other (e.g., convex) surrogates that can fail to lead to a fair prediction model in practice. (b) Rather than rely on regularizers (that lead to optimization problems that are difficult to solve) and corresponding regularization parameters (that can be expensive to tune), we propose a strategy that employs hard constraints so that specific tolerances for unfairness can be enforced without the complications associated with the use of regularization. (c) Our proposed strategy readily allows for constraints on multiple (potentially conflicting) unfairness measures at the same time. Multiple measures can be considered with a regularization approach, but at the cost of having even more difficult optimization problems to solve and further expense for tuning. By contrast, through hard constraints, our strategy leads to optimization models that can be solved tractably with minimal tuning.
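The core ideas in (a) and (b) can be illustrated concretely. Below is a minimal sketch, assuming a tanh-based smoothing of the Heaviside step and a demographic-parity-style unfairness measure; the paper's actual surrogate comes from smoothing methods in the optimization literature and may differ, and the function names here are illustrative.

```python
import numpy as np

def smooth_heaviside(z, tau=0.05):
    # Smooth surrogate for the step function 1[z > 0]; as tau -> 0 the
    # approximation tightens. (Illustrative tanh smoothing; the paper's
    # surrogate, built from optimization-literature smoothing methods,
    # may differ.)
    return 0.5 * (1.0 + np.tanh(z / tau))

def demographic_parity_gap(scores, group, tau=0.05):
    # Smoothed demographic-parity gap between two protected groups:
    # |P(prediction = 1 | group 0) - P(prediction = 1 | group 1)|.
    s = smooth_heaviside(scores, tau)
    return abs(s[group == 0].mean() - s[group == 1].mean())

# Schematic of the hard-constrained training problem in (b):
#   minimize    training_loss(w)
#   subject to  demographic_parity_gap(f_w(X), g, tau) <= eps
# for a user-chosen unfairness tolerance eps, instead of minimizing
# training_loss(w) + lambda * gap with a regularization weight lambda.
```

Per (c), further constraints of the same form, one per unfairness measure, can be stacked in the same constrained model.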
Related papers
- Conformal Mixed-Integer Constraint Learning with Feasibility Guarantees [0.3058340744328236]
Conformal Mixed-Integer Constraint Learning provides probabilistic feasibility guarantees for data-driven constraints in optimization problems. We show that C-MICL consistently achieves target feasibility rates, maintains competitive objective performance, and significantly reduces computational cost compared to existing methods.
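As a rough sketch of the conformal step (the split-conformal setup and names below are assumptions, not the paper's exact procedure): a calibration set yields a margin by which the learned constraint is tightened so that true feasibility holds at the target rate.

```python
import numpy as np

def conformal_margin(g_true_cal, g_pred_cal, alpha=0.05):
    # Split-conformal margin: enforcing g_pred(x) + q <= 0 then gives
    # g_true(x) <= 0 with probability >= 1 - alpha, assuming the
    # calibration points are exchangeable with test points.
    scores = np.sort(g_true_cal - g_pred_cal)   # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1.0 - alpha)))   # conformal quantile rank
    return scores[min(k, n) - 1]
```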
arXiv Detail & Related papers (2025-06-04T03:26:31Z) - Learning sparsity-promoting regularizers for linear inverse problems [5.812129569528998]
This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted $B$, which regularizes the inverse problem while promoting sparsity in the solution.
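One plausible reading of this bilevel setup, in synthesis form (the notation here is assumed; the paper's exact formulation may differ):

```latex
\begin{aligned}
\min_{B}\quad & \textstyle\sum_i \bigl\| B\, z_i^{\star}(B) - x_i \bigr\|_2^2
  && \text{(upper level: reconstruction quality)} \\
\text{s.t.}\quad & z_i^{\star}(B) \in \arg\min_{z}\;
  \tfrac{1}{2}\bigl\| A B z - y_i \bigr\|_2^2 + \lambda \|z\|_1
  && \text{(lower level: sparse recovery under synthesis operator } B\text{)}
\end{aligned}
```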
arXiv Detail & Related papers (2024-12-20T16:26:54Z) - Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
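A minimal sketch of such a combined objective, assuming a DPO-style preference loss (the paper derives its objective via an adversarially chosen reward model; the instantiation and the weight lam below are illustrative):

```python
import numpy as np

def combined_objective(logp_w, logp_l, ref_logp_w, ref_logp_l,
                       sft_nll, beta=0.1, lam=1.0):
    # Preference margin between chosen (w) and rejected (l) responses,
    # measured against a frozen reference policy.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    pref_loss = np.logaddexp(0.0, -margin)  # = -log(sigmoid(margin)), stable
    # SFT negative log-likelihood of the chosen response acts as the
    # (implicit) regularizer against reward overoptimization.
    return pref_loss + lam * sft_nll
```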
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning [7.145413681946911]
This paper presents a novel approach to optimizing profit margins in non-life insurance markets through a gradient descent-based method.
It targets three key objectives: 1) maximizing profit margins, 2) ensuring conversion rates, and 3) enforcing fairness criteria such as demographic parity (DP).
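A schematic of how the three objectives might be combined into one gradient-friendly loss (the aggregation and penalty weights below are assumptions, not the paper's exact formulation):

```python
def optigrad_style_loss(profit, conv_rate, dp_gap,
                        conv_target=0.3, w_conv=10.0, w_fair=10.0):
    # Minimize negative profit, penalize conversion-rate shortfall, and
    # penalize the demographic-parity gap; all terms are differentiable,
    # so the loss can be driven down by gradient descent.
    conv_shortfall = max(0.0, conv_target - conv_rate)
    return -profit + w_conv * conv_shortfall + w_fair * dp_gap
```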
arXiv Detail & Related papers (2024-04-16T04:21:59Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
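A toy sketch of this scheme on an equality-constrained quadratic program, where the inner minimization of the Lagrangian has a closed form (the affine "model" and problem family here are illustrative stand-ins):

```python
import numpy as np

# Toy problem family:  min_x 0.5*||x||^2 + c.x   s.t.  A x = b
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)

W, w0 = np.zeros((m, m)), np.zeros(m)  # affine model: lambda = W b + w0
lr = 0.05

for _ in range(2000):
    b = rng.standard_normal(m)       # sample a problem instance
    lam = W @ b + w0                 # predicted dual solution estimate
    x = A.T @ lam - c                # closed-form argmin_x of the Lagrangian
    residual = b - A @ x             # gradient of the dual objective in lambda
    W += lr * np.outer(residual, b)  # ascent on the dual objective, so the
    w0 += lr * residual              # model is trained by emulated Dual Ascent
```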
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Towards Practical Non-Adversarial Distribution Matching [19.196082965354904]
Distribution matching can be used to learn invariant representations with applications in fairness and robustness.
Most prior works resort to adversarial matching methods but the resulting minimax problems are unstable and challenging to optimize.
We propose a non-adversarial VAE-based matching method that can be applied to any model pipeline.
arXiv Detail & Related papers (2023-10-30T16:05:46Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
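A small sketch contrasting the two strategies on a toy constraint "label 1 implies label 0" (the constraint and scoring here are illustrative):

```python
import itertools

def feasible(y):
    # Toy label constraint: predicting label 1 requires label 0.
    return not (y[1] == 1 and y[0] == 0)

def constrained_inference(scores):
    # Constrained inference: at prediction time, pick the best-scoring
    # assignment among those consistent with the constraint.
    assignments = (y for y in itertools.product([0, 1], repeat=len(scores))
                   if feasible(y))
    return max(assignments,
               key=lambda y: sum(s * yi for s, yi in zip(scores, y)))

def violation_penalty(probs):
    # Regularization: a training-time penalty on probability mass that
    # violates the implication (one simple surrogate choice).
    return max(0.0, probs[1] - probs[0])
```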
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Learning (Local) Surrogate Loss Functions for Predict-Then-Optimize Problems [58.954414264760956]
Decision-Focused Learning (DFL) is a paradigm for tailoring a predictive model to a downstream optimisation task.
We present an approach to learn faithful task-specific surrogates which (a) only requires access to a black-box oracle that can solve the optimisation problem and is thus generalizable, and (b) can be convex by construction and so can be easily optimized over.
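A sketch of the "convex by construction" idea using the simplest such family, a learned quadratic in the decision offset (the oracle values and quadratic form are assumptions; the paper learns richer local surrogates):

```python
import numpy as np

def fit_convex_surrogate(d, oracle_loss, iters=500, lr=1e-3):
    # Fit L(d) = ||M d||^2 to decision-quality values queried from the
    # black-box oracle (d = candidate decision minus oracle optimum).
    # Parameterizing the Hessian as M^T M keeps L convex by
    # construction, so downstream optimization over it is easy.
    n = d.shape[1]
    M = np.eye(n)
    for _ in range(iters):
        pred = np.sum((d @ M.T) ** 2, axis=1)   # ||M d_i||^2
        err = pred - oracle_loss                # least-squares residuals
        grad = (4.0 / len(d)) * (M @ ((d * err[:, None]).T @ d))
        M -= lr * grad
    return M
```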
arXiv Detail & Related papers (2022-03-30T05:46:54Z) - A Surrogate Objective Framework for Prediction+Optimization with Soft Constraints [29.962390392493507]
Decision-focused prediction approaches, such as SPO+ and direct optimization, have been proposed to close the gap between prediction accuracy and downstream decision quality.
This paper proposes a novel analytically differentiable surrogate objective framework for real-world linear and negative semi-definite quadratic programming problems.
arXiv Detail & Related papers (2021-11-22T17:09:57Z) - Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that, without adjusting for the sampling bias induced by observing outcomes for only one side, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z) - Teaching the Old Dog New Tricks: Supervised Learning with Constraints [18.88930622054883]
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems.
Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output.
Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver.
arXiv Detail & Related papers (2020-02-25T09:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.