Provably Training Neural Network Classifiers under Fairness Constraints
- URL: http://arxiv.org/abs/2012.15274v1
- Date: Wed, 30 Dec 2020 18:46:50 GMT
- Title: Provably Training Neural Network Classifiers under Fairness Constraints
- Authors: You-Lin Chen, Zhaoran Wang, Mladen Kolar
- Abstract summary: We show that overparametrized neural networks can meet the fairness constraints.
The key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
- Score: 70.64045590577318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a classifier under fairness constraints has received increasing
attention in the machine learning community for moral, legal, and business reasons.
However, several recent works on algorithmic fairness have focused only on simple
models such as logistic regression or support vector machines, because the fairness
criteria across protected groups, such as race or gender, are non-convex and
non-differentiable. Neural networks, the most widely used classification models
today, are thus precluded and lack theoretical guarantees. This paper aims to fill
this missing but crucial part of the literature on algorithmic fairness for neural
networks. In particular, we show that overparametrized neural networks can meet the
fairness constraints. The key ingredient in building a fair neural network classifier
is establishing a no-regret analysis for neural networks in the overparameterization
regime, which may be of independent interest in the online learning of neural
networks and related applications.
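The abstract describes the high-level recipe: train an overparametrized network while enforcing a fairness criterion across protected groups, with the guarantee resting on a no-regret, primal-dual style analysis. Below is a minimal, hedged sketch of that general recipe, not the paper's actual algorithm: it uses a smooth demographic-parity surrogate (difference in mean predicted scores between two protected groups), a Lagrange multiplier updated by dual ascent, and synthetic data with illustrative hyperparameters (width, eps, step sizes) that are assumptions made only for demonstration.

```python
# Illustrative sketch only (not the paper's algorithm): fairness-constrained
# training of a wide two-layer ReLU network via a Lagrangian, alternating a
# primal gradient step on the weights with a dual ascent step on the multiplier.
import torch

torch.manual_seed(0)
n, d, width = 2000, 10, 4096
eps = 0.02                                   # allowed demographic-parity gap (assumed)

# Synthetic data: features x, binary labels y, binary protected attribute a.
x = torch.randn(n, d)
a = (torch.rand(n) < 0.5).float()
y = ((x[:, 0] + 0.5 * a + 0.1 * torch.randn(n)) > 0).float()

model = torch.nn.Sequential(                 # overparametrized two-layer network
    torch.nn.Linear(d, width), torch.nn.ReLU(), torch.nn.Linear(width, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()
lam = 0.0                                    # Lagrange multiplier (dual variable)

for step in range(500):
    logits = model(x).squeeze(1)
    scores = torch.sigmoid(logits)
    # Smooth surrogate for the demographic-parity gap between the two groups.
    gap = (scores[a == 1].mean() - scores[a == 0].mean()).abs()
    loss = bce(logits, y) + lam * (gap - eps)
    opt.zero_grad()
    loss.backward()
    opt.step()                               # primal step on the network weights
    lam = max(0.0, lam + 0.5 * (gap.item() - eps))   # dual ascent, kept nonnegative
```

The alternating primal/dual updates mirror the online, no-regret viewpoint mentioned in the abstract, but the paper's guarantee is specific to its own algorithm and to the overparameterization regime; the surrogate constraint and update rules above are only stand-ins.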
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (a minimal Sinkhorn sketch appears after this list).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks [36.182644157139144]
We develop a framework to train fair neural networks in an efficient and inexpensive way.
Last-layer fine-tuning alone can effectively promote fairness in deep neural networks (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-04-08T06:49:15Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z)
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks against group fairness properties.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
- SOCRATES: Towards a Unified Platform for Neural Network Analysis [7.318255652722096]
We aim to build a unified framework for developing techniques to analyze neural networks.
We develop a platform called SOCRATES which supports a standardized format for a variety of neural network models.
Experimental results show that our platform can handle a wide range of network models and properties.
arXiv Detail & Related papers (2020-07-22T05:18:57Z)
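For the LinSATNet entry above, the following is a minimal sketch of the classic Sinkhorn normalization that the paper extends into a differentiable constraint layer. Only the vanilla doubly-stochastic projection is shown; the extension to general positive linear constraints and multiple marginal distributions is not reproduced, and the temperature tau and iteration count are illustrative assumptions.

```python
# Classic Sinkhorn normalization (vanilla version only): alternately rescale rows
# and columns so the output is approximately doubly stochastic; every step is
# differentiable, so gradients flow back to the raw scores.
import torch

def sinkhorn(scores: torch.Tensor, n_iters: int = 20, tau: float = 0.1) -> torch.Tensor:
    s = torch.exp(scores / tau)              # strictly positive entries
    for _ in range(n_iters):
        s = s / s.sum(dim=1, keepdim=True)   # rows sum to 1
        s = s / s.sum(dim=0, keepdim=True)   # columns sum to 1
    return s

scores = torch.randn(5, 5, requires_grad=True)
p = sinkhorn(scores)
p.diagonal().sum().backward()                # gradients reach the input scores
print(p.sum(dim=0), p.sum(dim=1))            # both close to all-ones
```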
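For the last-layer fairness fine-tuning entry above, here is a sketch of the general idea of freezing a pretrained backbone and retraining only the final linear layer with a fairness penalty. The penalty form, synthetic data, and hyperparameters are assumptions made for illustration, not the cited paper's procedure.

```python
# Last-layer fine-tuning sketch: freeze the (pretrained) backbone and retrain
# only the classification head with a fairness-regularized loss.
import torch

torch.manual_seed(0)
d, width = 10, 256
backbone = torch.nn.Sequential(torch.nn.Linear(d, width), torch.nn.ReLU())
head = torch.nn.Linear(width, 1)
for p in backbone.parameters():
    p.requires_grad_(False)                  # only the head is trained

x = torch.randn(1000, d)                     # stand-in for a small tuning set
a = (torch.rand(1000) < 0.5).float()         # protected attribute
y = ((x[:, 0] + 0.5 * a) > 0).float()

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()
for step in range(200):
    logits = head(backbone(x)).squeeze(1)    # backbone features stay fixed
    scores = torch.sigmoid(logits)
    gap = (scores[a == 1].mean() - scores[a == 0].mean()).abs()
    loss = bce(logits, y) + 1.0 * gap        # cheap fairness-regularized objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```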