fairlib: A Unified Framework for Assessing and Improving Classification
Fairness
- URL: http://arxiv.org/abs/2205.01876v1
- Date: Wed, 4 May 2022 03:50:23 GMT
- Title: fairlib: A Unified Framework for Assessing and Improving Classification
Fairness
- Authors: Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin,
Trevor Cohn
- Abstract summary: fairlib is an open-source framework for assessing and improving classification fairness.
We implement 14 debiasing methods, including pre-processing, at-training-time, and post-processing approaches.
The built-in metrics cover the most commonly used fairness criteria and can be further generalized and customized for fairness evaluation.
- Score: 66.27822109651757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents fairlib, an open-source framework for assessing and
improving classification fairness. It provides a systematic framework for
quickly reproducing existing baseline models, developing new methods,
evaluating models with different metrics, and visualizing their results. Its
modularity and extensibility enable the framework to be used for diverse types
of inputs, including natural language, images, and audio. In detail, we
implement 14 debiasing methods, including pre-processing, at-training-time, and
post-processing approaches. The built-in metrics cover the most commonly used
fairness criteria and can be further generalized and customized for fairness
evaluation.
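To make the metrics claim concrete, below is a minimal, hedged sketch of two of the most commonly used group fairness criteria (demographic parity gap and equal opportunity gap) in plain NumPy. This is not fairlib's actual API; the function names and the max-minus-min gap aggregation are assumptions made for this illustration.

```python
# Minimal sketch of two standard group fairness metrics (demographic parity
# gap and equal opportunity gap). Illustrative only: this is not fairlib's
# API, and the max-minus-min aggregation across groups is an assumption.
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive prediction rate across protected groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true positive rate (recall) across protected groups."""
    tprs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.any():
            tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    groups = rng.integers(0, 2, size=1000)                        # binary protected attribute
    y_pred = (rng.random(1000) < 0.5 + 0.2 * groups).astype(int)  # deliberately biased predictor
    print("demographic parity gap:", demographic_parity_gap(y_pred, groups))
    print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, groups))
```

In practice, gaps like these are reported alongside overall accuracy so the accuracy-fairness trade-off of each debiasing method can be compared.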
Related papers
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and
Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z) - A Similarity-based Framework for Classification Task [21.182406977328267]
The similarity-based method gives rise to a new class of methods for multi-label learning and also achieves promising performance.
We unite similarity-based learning and generalized linear models to achieve the best of both worlds.
arXiv Detail & Related papers (2022-03-05T06:39:50Z) - Set-valued classification -- overview via a unified framework [15.109906768606644]
Multi-class datasets can be extremely ambiguous and single-output predictions fail to deliver satisfactory performance.
By allowing predictors to predict a set of label candidates, set-valued classification offers a natural way to deal with this ambiguity.
We provide infinite-sample optimal set-valued classification strategies and review a general plug-in principle to construct data-driven algorithms.
arXiv Detail & Related papers (2021-02-24T14:54:07Z) - Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z) - Unbiased Subdata Selection for Fair Classification: A Unified Framework
and Scalable Algorithms [0.8376091455761261]
We show that many classification models within this framework can be recast as mixed-integer convex programs.
We then show that in the proposed problem, when the classification outcomes are known, the resulting subproblem, termed "unbiased subdata selection," is strongly solvable.
This motivates us to develop an iterative refining strategy (IRS) to solve the classification instances.
arXiv Detail & Related papers (2020-12-22T21:09:38Z) - Visual-Semantic Embedding Model Informed by Structured Knowledge [3.2734466030053175]
We propose a novel approach to improve a visual-semantic embedding model by incorporating concept representations captured from an external structured knowledge base.
We investigate its performance on image classification under both standard and zero-shot settings.
arXiv Detail & Related papers (2020-09-21T17:04:32Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
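Several entries above, and the post-processing category of debiasing methods in the fairlib abstract, concern post-hoc adjustment of a trained model's scores. As a hedged illustration of that general family (not the specific algorithm of any paper listed here), the sketch below picks group-specific decision thresholds that move each group's selection rate toward a common target; all names and the equal-selection-rate target are assumptions made for this example.

```python
# Illustrative sketch of a generic post-processing debiasing step: choose
# group-specific thresholds that move each group's selection rate toward a
# common target. This is NOT the algorithm of any paper above; it only
# illustrates the post-processing family of methods. Names are hypothetical.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Per group, pick the observed score whose induced selection rate is closest to target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        rates = [(s >= t).mean() for t in s]   # candidate thresholds = observed scores
        best = int(np.argmin([abs(r - target_rate) for r in rates]))
        thresholds[g] = s[best]
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    """Binarize scores using each instance's group-specific threshold."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    groups = rng.integers(0, 2, size=2000)
    scores = rng.random(2000) + 0.15 * groups  # group 1 gets systematically higher scores
    thr = group_thresholds(scores, groups, target_rate=0.5)
    y_hat = apply_thresholds(scores, groups, thr)
    for g in (0, 1):
        print(f"group {g}: selection rate = {y_hat[groups == g].mean():.3f}")
```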
This list is automatically generated from the titles and abstracts of the papers on this site.