Explanation-Guided Fairness Testing through Genetic Algorithm
- URL: http://arxiv.org/abs/2205.08335v1
- Date: Mon, 16 May 2022 02:40:48 GMT
- Title: Explanation-Guided Fairness Testing through Genetic Algorithm
- Authors: Ming Fan, Wenying Wei, Wuxia Jin, Zijiang Yang, Ting Liu
- Abstract summary: This work proposes ExpGA, an explanation-guided fairness testing approach through a genetic algorithm (GA).
ExpGA employs the explanation results generated by interpretable methods to collect high-quality initial seeds.
It then adopts GA to search discriminatory sample candidates by optimizing a fitness value.
- Score: 18.642243829461158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fairness characteristic is a critical attribute of trusted AI systems. A
plethora of research has proposed diverse methods for individual fairness
testing. However, they suffer from three major limitations: low
efficiency, low effectiveness, and model specificity. This work proposes ExpGA,
an explanation-guided fairness testing approach through a genetic algorithm
(GA). ExpGA employs the explanation results generated by interpretable methods
to collect high-quality initial seeds, which are prone to derive discriminatory
samples by slightly modifying feature values. ExpGA then adopts GA to search
discriminatory sample candidates by optimizing a fitness value. Benefiting from
this combination of explanation results and GA, ExpGA is both efficient and
effective in detecting discriminatory individuals. Moreover, ExpGA only requires
prediction probabilities of the tested model, resulting in a better
generalization capability to various models. Experiments on multiple real-world
benchmarks, including tabular and text datasets, show that ExpGA presents
higher efficiency and effectiveness than four state-of-the-art approaches.
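The search described in the abstract — explanation-derived seeds, then a GA maximizing a fitness value that measures the prediction-probability gap when only a protected attribute is flipped — can be sketched roughly as follows. The toy model, feature layout, and all parameters here are illustrative assumptions, not ExpGA's actual implementation:

```python
import random

# Toy black-box "model": only prediction probabilities are needed, as in ExpGA.
# Hypothetical bias: the protected attribute (index 0) interacts with feature 1.
def predict_proba(x):
    return 0.4 * x[1] + 0.2 * x[2] + 0.3 * x[0] * x[1]

def fitness(x, protected_idx=0):
    """Prediction-probability gap when only the protected attribute is flipped.
    A sample whose gap exceeds a threshold counts as a discriminatory instance."""
    twin = list(x)
    twin[protected_idx] = 1 - twin[protected_idx]
    return abs(predict_proba(x) - predict_proba(twin))

def ga_search(seeds, generations=50, pop_size=20, mut_rate=0.3, threshold=0.25):
    """GA loop: select by fitness, cross over, mutate non-protected features only."""
    rng = random.Random(0)
    pop = [list(s) for s in seeds]
    found = set()
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size]
        found.update(tuple(x) for x in pop if fitness(x) > threshold)
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            for i in range(1, len(child)):  # never perturb the protected attribute
                if rng.random() < mut_rate:
                    child[i] = round(rng.random(), 2)
            children.append(child)
        pop += children
    return found
```

In ExpGA the seeds would come from an interpretable method (instances whose explanations weight the protected attribute heavily); in this sketch any starting points work, e.g. `ga_search([[0, 0.1, 0.5], [1, 0.2, 0.3]])`.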
Related papers
- Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples [53.95282502030541]
Neural Network-based active learning (NAL) is a cost-effective data selection technique that utilizes neural networks to select and train on a small subset of samples.
We move one step forward by offering, from a feature-learning view, a unified explanation for the success of both query-criteria-based NAL methods.
arXiv Detail & Related papers (2024-06-06T10:38:01Z) - Feature Selection via Robust Weighted Score for High Dimensional Binary Class-Imbalanced Gene Expression Data [1.2891210250935148]
A robust weighted score for unbalanced data (ROWSU) is proposed for selecting the most discriminative feature for high dimensional gene expression binary classification with class-imbalance problem.
The performance of the proposed ROWSU method is evaluated on 6 gene expression datasets.
arXiv Detail & Related papers (2024-01-23T11:22:03Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A performance-promising fair algorithm with better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z) - A GA-like Dynamic Probability Method With Mutual Information for Feature Selection [1.290382979353427]
We propose a GA-like dynamic probability (GADP) method with mutual information.
As each gene's probability is independent, the chromosome variety in GADP is more notable than in traditional GA.
To verify our method's superiority, we evaluate our method under multiple conditions on 15 datasets.
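The "independent per-gene probability" idea can be illustrated with a toy sketch (the fitness function, update rule, and all parameters below are invented for the example and are not the paper's GADP algorithm): each gene keeps its own inclusion probability, nudged toward the best chromosome of each generation.

```python
import random

# Toy setup: 10 genes; genes 0-2 are "informative" under a hypothetical fitness.
N_GENES = 10

def fitness(mask):
    """Hypothetical subset score: reward informative genes, penalise subset size."""
    informative = sum(mask[i] for i in range(3))
    return informative - 0.1 * sum(mask)

def gadp_select(generations=40, pop_size=30, lr=0.2, seed=0):
    """GA-like dynamic probability: each gene holds an independent inclusion
    probability, moved toward the best chromosome found in each generation."""
    rng = random.Random(seed)
    probs = [0.5] * N_GENES  # could instead be seeded from mutual information
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)
        # update every gene's probability independently, clipped for exploration
        probs = [min(max(p + lr * (g - p), 0.05), 0.95)
                 for p, g in zip(probs, best)]
    return probs
```

Because each gene's probability evolves independently rather than through whole-chromosome crossover, the sampled population stays more varied than in a traditional GA, which is the property the summary above highlights.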
arXiv Detail & Related papers (2022-10-21T13:30:01Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - ProBoost: a Boosting Method for Probabilistic Classifiers [55.970609838687864]
ProBoost is a new boosting algorithm for probabilistic classifiers.
It uses the uncertainty of each training sample to determine the most challenging/uncertain ones.
It produces a sequence that progressively focuses on the samples found to have the highest uncertainty.
arXiv Detail & Related papers (2022-09-04T12:49:20Z) - Efficient and accurate group testing via Belief Propagation: an empirical study [5.706360286474043]
The group testing problem asks for efficient pooling schemes and algorithms.
The goal is to accurately identify the infected samples while conducting the least possible number of tests.
We suggest a new test design that significantly increases the accuracy of the results.
arXiv Detail & Related papers (2021-05-13T10:52:46Z) - GA for feature selection of EEG heterogeneous data [0.0]
We propose a genetic algorithm (GA) for feature selection that can be used with a supervised or unsupervised approach.
Our proposal considers three different fitness functions without relying on expert knowledge.
The proposed GA, based on a novel fitness function here presented, outperforms the benchmark when the two different datasets considered are merged together.
arXiv Detail & Related papers (2021-03-12T07:27:42Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - On the Performance of Metaheuristics: A Different Perspective [0.0]
We study some basic evolutionary and swarm-intelligence metaheuristics, i.e. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO) and Cuckoo Optimization Algorithm (COA).
A large number of experiments have been conducted on 20 different optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions as well as a ranking order among these metaheuristics.
arXiv Detail & Related papers (2020-01-24T09:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.