Concolic Testing on Individual Fairness of Neural Network Models
- URL: http://arxiv.org/abs/2509.06864v1
- Date: Mon, 08 Sep 2025 16:31:14 GMT
- Title: Concolic Testing on Individual Fairness of Neural Network Models
- Authors: Ming-I Huang, Chih-Duo Hong, Fang Yu
- Abstract summary: PyFair is a formal framework for evaluating and verifying individual fairness of Deep Neural Networks (DNNs). Our key innovation is a dual network architecture that enables comprehensive fairness assessments and provides completeness guarantees for certain network types.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces PyFair, a formal framework for evaluating and verifying individual fairness of Deep Neural Networks (DNNs). By adapting the concolic testing tool PyCT, we generate fairness-specific path constraints to systematically explore DNN behaviors. Our key innovation is a dual network architecture that enables comprehensive fairness assessments and provides completeness guarantees for certain network types. We evaluate PyFair on 25 benchmark models, including those enhanced by existing bias mitigation techniques. Results demonstrate PyFair's efficacy in detecting discriminatory instances and verifying fairness, while also revealing scalability challenges for complex models. This work advances algorithmic fairness in critical domains by offering a rigorous, systematic method for fairness testing and verification of pre-trained DNNs.
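The dual-network check can be pictured with a small, self-contained sketch: run two copies of the model on inputs that agree on every non-protected feature and differ only in the protected one, and report the pair as a discriminatory instance when the predictions disagree. The sketch below is a simplification under stated assumptions: plain NumPy and random search stand in for PyCT's concolic path-constraint solving, and `find_discriminatory_pair` is a hypothetical helper rather than PyFair's API.

```python
# Illustrative dual-network fairness check (not PyFair's implementation):
# random search stands in for concolic exploration of path constraints.
import numpy as np

def predict(model, x):
    """Forward pass of a ReLU network given as a list of (W, b) layers."""
    *hidden, (W_out, b_out) = model
    for W, b in hidden:
        x = np.maximum(W @ x + b, 0.0)
    return int(np.argmax(W_out @ x + b_out))

def find_discriminatory_pair(model, n_features, protected_idx,
                             protected_values=(0.0, 1.0),
                             trials=10_000, seed=0):
    """Search for x, x' that agree outside the protected feature but are
    classified differently -- a witness of an individual-fairness violation."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0, size=n_features)
        x_a, x_b = x.copy(), x.copy()
        x_a[protected_idx], x_b[protected_idx] = protected_values
        if predict(model, x_a) != predict(model, x_b):
            return x_a, x_b   # concrete discriminatory instance
    return None               # none found (not a proof of fairness)
```

Unlike this random search, concolic execution collects the path constraints of a concrete run and solves mutations of them to steer inputs down unexplored branches, which is what makes the completeness guarantees for certain network types possible.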
Related papers
- Fake it till You Make it: Reward Modeling as Discriminative Prediction [49.31309674007382]
GAN-RM is an efficient reward modeling framework that eliminates manual preference annotation and explicit quality dimension engineering. Our method trains the reward model through discrimination between a small set of representative, unpaired target samples. Experiments demonstrate our GAN-RM's effectiveness across multiple key applications.
arXiv Detail & Related papers (2025-06-16T17:59:40Z)
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset, the Fair Forgery Detection (FairFD) dataset, on which we demonstrate the racial bias of public state-of-the-art (SOTA) methods. We design novel metrics, including the Approach Averaged Metric and the Utility Regularized Metric, which avoid deceptive results. We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- Learning Fairer Representations with FairVIC [0.0]
Mitigating bias in automated decision-making systems is a critical challenge due to nuanced definitions of fairness and dataset-specific biases. We introduce FairVIC, an approach that enhances fairness in neural networks by integrating variance, invariance, and covariance terms into the loss function during training (a rough loss sketch follows this entry). We evaluate FairVIC against comparable bias mitigation techniques on benchmark datasets, considering both group and individual fairness, and conduct an ablation study on the accuracy-fairness trade-off.
arXiv Detail & Related papers (2024-04-28T10:10:21Z)
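As a rough illustration of how variance, invariance, and covariance terms might enter a training loss, here is a hypothetical, VICReg-style formulation; the term definitions and weights below are assumptions for exposition, not FairVIC's actual equations. `z_prot_flipped` stands for representations of the same batch with the protected attribute counterfactually flipped.

```python
# Hypothetical variance/invariance/covariance-regularized loss in the
# spirit of FairVIC; the exact terms are assumptions, not the paper's.
import torch
import torch.nn.functional as F

def vic_style_loss(logits, labels, z, z_prot_flipped,
                   lam_v=1.0, lam_i=1.0, lam_c=1.0):
    task = F.cross_entropy(logits, labels)
    # Invariance: the representation should not move when only the
    # protected attribute changes.
    inv = F.mse_loss(z, z_prot_flipped)
    # Variance: keep every representation dimension informative
    # (hinge loss on the per-dimension standard deviation).
    std = torch.sqrt(z.var(dim=0) + 1e-4)
    var = torch.relu(1.0 - std).mean()
    # Covariance: decorrelate dimensions so protected information
    # cannot hide in cross-dimension correlations.
    zc = z - z.mean(dim=0)
    n, d = zc.shape
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_pen = (off_diag ** 2).sum() / d
    return task + lam_v * var + lam_i * inv + lam_c * cov_pen
```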
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.5438758943381854]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification. Our results show that MAPPING achieves better trade-offs between utility and fairness while limiting the privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairify: Fairness Verification of Neural Networks [7.673007415383724]
We propose Fairify, an approach to verify the individual fairness property of neural network (NN) models.
Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample (a toy version of the per-partition query is sketched after this entry).
We evaluated Fairify on 25 real-world neural networks collected from four different sources.
arXiv Detail & Related papers (2022-12-08T23:31:06Z)
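To make the per-partition check concrete, here is a toy SMT query in the same spirit (not Fairify's code): within box bounds standing in for one input partition, Z3 is asked whether two inputs that agree on everything except the protected feature can receive different decisions. The network weights, bounds, and protected index are all invented.

```python
# Toy SMT individual-fairness query in the spirit of Fairify.
# All weights, bounds, and the protected index are invented.
from z3 import Solver, Real, And, If, sat

def relu(e):
    return If(e > 0, e, 0)

# Tiny 3-input, 2-hidden-unit, single-score network with made-up weights.
W1 = [[0.5, -0.2, 0.8], [-0.3, 0.9, 0.1]]
b1 = [0.1, -0.2]
W2 = [1.0, -1.5]
b2 = 0.05

def score(x):
    h = [relu(sum(W1[j][i] * x[i] for i in range(3)) + b1[j]) for j in range(2)]
    return sum(W2[j] * h[j] for j in range(2)) + b2

x  = [Real(f"x{i}") for i in range(3)]
xp = [Real(f"xp{i}") for i in range(3)]
prot = 0  # index of the protected feature

s = Solver()
s.add(And(*[And(0 <= v, v <= 1) for v in x + xp]))    # one input partition
s.add(And(*[x[i] == xp[i] for i in range(3) if i != prot]))
s.add(x[prot] != xp[prot])
s.add(score(x) > 0, score(xp) <= 0)                   # different decisions?

if s.check() == sat:
    print("discriminatory pair:", s.model())  # counterexample in partition
else:
    print("partition certified fair")         # no such pair exists
```

An unsat answer certifies the partition; a sat answer yields a concrete counterexample pair, matching the certify-or-counterexample outcome described above.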
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691]
We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
arXiv Detail & Related papers (2022-06-01T15:06:11Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
In the first stage, sample weights are computed on a validation set with sensitive attributes via influence functions; in the second, the model is trained to minimize the loss over the reweighted data set (a generic two-stage sketch follows this entry).
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
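The two-stage shape can be sketched generically. In the hypothetical code below, per-sample weights simply balance (group, label) cells, standing in for FAIRIF's influence-function-based computation on the validation set; `balancing_weights` and `weighted_step` are illustrative names, not the paper's API.

```python
# Hypothetical two-stage reweighting sketch in the spirit of FAIRIF.
# Stage 1 uses simple (group, label) balancing as a stand-in for the
# influence-function-based weight computation.
import torch
import torch.nn.functional as F

def balancing_weights(groups, labels):
    """One weight per sample so each (group, label) cell contributes equally."""
    weights = torch.empty(len(labels), dtype=torch.float32)
    for g in groups.unique():
        for y in labels.unique():
            cell = (groups == g) & (labels == y)
            weights[cell] = 1.0 / cell.sum().clamp(min=1)
    return weights * len(labels) / weights.sum()  # normalize to mean 1

def weighted_step(model, opt, xb, yb, wb):
    """Stage 2: one SGD step on the weight-scaled cross-entropy."""
    opt.zero_grad()
    losses = F.cross_entropy(model(xb), yb, reduction="none")
    (wb * losses).mean().backward()
    opt.step()
```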
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks with respect to group fairness.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that, using our analysis results, the network weights can be optimized to improve fairness (a simplified sampling-based parity check is sketched after this entry).
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
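A minimal way to see what such a probabilistic check computes: estimate the demographic-parity gap by sampling and bound the estimation error. The Hoeffding-style sketch below is a stand-in for, not a rendering of, the paper's Markov-chain analysis; `sample_input` is an assumed user-supplied sampler returning a NumPy feature vector.

```python
# Monte Carlo sketch of a demographic-parity check with a Hoeffding
# error bound; the paper instead learns a Markov chain abstraction.
import math
import numpy as np

def parity_gap(predict, sample_input, protected_idx,
               n=20_000, delta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    rates = []
    for a in (0.0, 1.0):          # the two protected groups
        hits = 0
        for _ in range(n):
            x = sample_input(rng)
            x[protected_idx] = a
            hits += predict(x) == 1
        rates.append(hits / n)
    # Each rate is within eps of its true value w.p. >= 1 - delta/2,
    # so the gap is within 2*eps w.p. >= 1 - delta (union bound).
    eps = math.sqrt(math.log(4 / delta) / (2 * n))
    return abs(rates[0] - rates[1]), 2 * eps
```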
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can be trained to meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs tests until the soundness of a formal probabilistic property can be proven (a generic statistical sketch follows this entry).
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
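A generic flavor of test-until-proven checking is Wald's sequential probability ratio test: keep sampling executions until the hypothesis that the violation probability is below a threshold can be accepted or rejected at chosen error levels. The sketch below is standard statistical model checking, offered as an assumption about the general scheme rather than this paper's specific estimation procedure.

```python
# Sequential probability ratio test (SPRT): sample until the hypothesis
# "violation probability <= p0" is accepted or rejected. Generic
# statistical model checking, not this paper's exact procedure.
import math

def sprt(run_test, p0=0.01, p1=0.02, alpha=0.01, beta=0.01, max_n=1_000_000):
    """run_test() -> True iff one sampled execution violates the property."""
    accept = math.log(beta / (1 - alpha))   # accept H0: p <= p0
    reject = math.log((1 - beta) / alpha)   # accept H1: p >= p1
    llr, n = 0.0, 0
    while n < max_n:
        n += 1
        llr += math.log(p1 / p0) if run_test() \
               else math.log((1 - p1) / (1 - p0))
        if llr <= accept:
            return "property holds", n
        if llr >= reject:
            return "property violated", n
    return "undecided", n
```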
This list is automatically generated from the titles and abstracts of the papers on this site.