Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach
- URL: http://arxiv.org/abs/2512.24747v1
- Date: Wed, 31 Dec 2025 09:42:03 GMT
- Title: Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach
- Authors: Tim J. Boonen, Xinyue Fan, Zixiao Quan
- Abstract summary: Machine learning improves predictive accuracy in insurance pricing but exacerbates trade-offs between competing fairness criteria across different discrimination measures. We propose a novel multi-objective optimization framework that jointly optimizes all four criteria via the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Our results show that XGBoost outperforms GLM in accuracy but amplifies fairness disparities; the Orthogonal model excels in group fairness, while Synthetic Control leads in individual and counterfactual fairness.
- Score: 1.529342790344802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning improves predictive accuracy in insurance pricing but exacerbates trade-offs between competing fairness criteria across different discrimination measures, challenging regulators and insurers to reconcile profitability with equitable outcomes. While existing fairness-aware models offer partial solutions under GLM and XGBoost estimation methods, they remain constrained by single-objective optimization, failing to holistically navigate a conflicting landscape of accuracy, group fairness, individual fairness, and counterfactual fairness. To address this, we propose a novel multi-objective optimization framework that jointly optimizes all four criteria via the Non-dominated Sorting Genetic Algorithm II (NSGA-II), generating a diverse Pareto front of trade-off solutions. We then apply a dedicated selection mechanism to extract a single recommended premium from this front. Our results show that XGBoost outperforms GLM in accuracy but amplifies fairness disparities; the Orthogonal model excels in group fairness, while Synthetic Control leads in individual and counterfactual fairness. Our method consistently achieves a balanced compromise, outperforming single-model approaches.
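The core loop of such a framework can be sketched in a few lines. Below is a minimal, self-contained illustration (not the authors' code) using the pymoo library's NSGA-II implementation: the coefficients of a toy linear pricing rule serve as decision variables, and four stand-in objectives play the roles of accuracy, group fairness, individual fairness, and counterfactual fairness. The synthetic data, the linear pricing rule, and all four metric definitions are illustrative assumptions; the paper's actual GLM/XGBoost estimators and discrimination measures are more involved.

```python
# Minimal sketch: NSGA-II over four fairness/accuracy objectives with pymoo.
# All data and metric definitions below are toy assumptions for illustration.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))          # rating factors (toy)
s = rng.integers(0, 2, size=n)       # binary protected attribute (toy)
true_cost = 100 + 20 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=5, size=n)

def premiums(w, X, s):
    """Toy linear pricing rule: intercept + factor loadings + protected-attribute term."""
    return w[0] + X @ w[1:4] + w[4] * s

class FairPricingProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=5, n_obj=4, xl=-50.0, xu=150.0)

    def _evaluate(self, w, out, *args, **kwargs):
        p = premiums(w, X, s)
        acc = np.mean((p - true_cost) ** 2)              # accuracy: mean squared error
        grp = abs(p[s == 0].mean() - p[s == 1].mean())   # group fairness: mean-premium gap
        # individual fairness (toy proxy): premium spread between neighbours in feature space
        order = np.argsort(X[:, 0])
        ind = np.mean(np.abs(np.diff(p[order])))
        # counterfactual fairness (toy): premium change if s were flipped;
        # for this linear rule it reduces to |w[4]|, the protected-attribute loading
        cf = np.mean(np.abs(premiums(w, X, 1 - s) - p))
        out["F"] = [acc, grp, ind, cf]

res = minimize(FairPricingProblem(), NSGA2(pop_size=80), ("n_gen", 60), seed=1, verbose=False)
```

Each row of `res.F` is one Pareto-optimal trade-off among the four objectives; the paper's selection mechanism (not reproduced here) would then pick a single premium vector from the corresponding rows of `res.X`.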
Related papers
- Fairness Aware Reward Optimization [78.85867531002346]
We introduce Fairness Aware Reward Optimization (Faro), an in-processing framework that trains reward models under demographic parity, equalized odds, or counterfactual fairness constraints. We provide the first theoretical analysis of reward-level fairness in LLM alignment. Faro significantly reduces bias and harmful generations while maintaining or improving model quality.
arXiv Detail & Related papers (2026-02-08T03:35:49Z)
- FairRF: Multi-Objective Search for Single and Intersectional Software Fairness [6.155605380087007]
We introduce FairRF, a novel approach based on multi-objective evolutionary search to optimise fairness and effectiveness in classification tasks. We conduct an extensive empirical evaluation of FairRF against 26 different baselines in 11 different scenarios using five effectiveness and three fairness metrics.
arXiv Detail & Related papers (2026-01-12T13:42:45Z)
- APFEx: Adaptive Pareto Front Explorer for Intersectional Fairness [16.993547305381327]
We introduce APFEx, the first framework to explicitly model intersectional fairness as a joint optimization problem. APFEx combines adaptive multi-objectives, gradient weighting, and exploration strategies to navigate fairness-accuracy trade-offs. Experiments on four real-world datasets demonstrate APFEx's superiority, reducing fairness violations while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-09-17T11:13:22Z)
- ConfPO: Exploiting Policy Model Confidence for Critical Token Selection in Preference Optimization [48.50761200321113]
We introduce ConfPO, a method for preference learning in Large Language Models (LLMs). It identifies and optimizes preference-critical tokens based solely on the training policy's confidence, without requiring any auxiliary models or compute. Experimental results on challenging alignment benchmarks, including AlpacaEval 2 and Arena-Hard, demonstrate that ConfPO consistently outperforms uniform DAAs.
arXiv Detail & Related papers (2025-06-10T11:54:22Z)
- FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [23.38141950440522]
We propose a controllable federated group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. We show that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z)
- Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We demonstrate empirical effects across various fairness objectives on six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z)
- Arbitrariness Lies Beyond the Fairness-Accuracy Frontier [3.383670923637875]
We show that state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics.
We propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions.
arXiv Detail & Related papers (2023-06-15T18:15:46Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- FACT: A Diagnostic for Group Fairness Trade-offs [23.358566041117083]
Group fairness is a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes.
We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness.
arXiv Detail & Related papers (2020-04-07T14:15:51Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)