How to address monotonicity for model risk management?
- URL: http://arxiv.org/abs/2305.00799v2
- Date: Sun, 24 Sep 2023 05:35:33 GMT
- Title: How to address monotonicity for model risk management?
- Authors: Dangxing Chen, Weicheng Ye
- Abstract summary: This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity.
As a means of achieving monotonicity while maintaining transparency, we propose the monotonic groves of neural additive models.
- Score: 1.0878040851638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the problem of establishing the accountability and
fairness of transparent machine learning models through monotonicity. Although
there have been numerous studies on individual monotonicity, pairwise
monotonicity is often overlooked in the existing literature. This paper studies
transparent neural networks in the presence of three types of monotonicity:
individual monotonicity, weak pairwise monotonicity, and strong pairwise
monotonicity. As a means of achieving monotonicity while maintaining
transparency, we propose the monotonic groves of neural additive models. As a
result of empirical examples, we demonstrate that monotonicity is often
violated in practice and that monotonic groves of neural additive models are
transparent, accountable, and fair.
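The abstract does not spell out how the monotonic groves of neural additive models are constructed. As a minimal, hypothetical numpy sketch of the underlying idea, one standard way to build individual monotonicity into an additive model's shape functions is a positivity reparameterization (all names below are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

class MonotoneShapeFunction:
    """One additive shape function f_i(x) = sum_j a_j * sigmoid(w_j*x + b_j).

    With a_j > 0 and w_j > 0 (enforced via exp of free parameters),
    f_i is nondecreasing, so F(x) = sum_i f_i(x_i) is individually
    monotone in every feature.
    """

    def __init__(self, hidden=8):
        self.log_a = rng.normal(size=hidden)  # free params; exp() makes them positive
        self.log_w = rng.normal(size=hidden)
        self.b = rng.normal(size=hidden)

    def __call__(self, x):
        a, w = np.exp(self.log_a), np.exp(self.log_w)
        z = np.outer(x, w) + self.b            # (n, hidden) pre-activations
        return (1.0 / (1.0 + np.exp(-z))) @ a  # sigmoid, then positive mixture

def additive_model(shape_fns, X):
    # Transparent by construction: one univariate curve per feature.
    return sum(f(X[:, i]) for i, f in enumerate(shape_fns))

fns = [MonotoneShapeFunction() for _ in range(3)]
X = rng.normal(size=(5, 3))
X_up = X.copy()
X_up[:, 0] += 0.5  # bump feature 0 only; the output can only increase
assert np.all(additive_model(fns, X_up) >= additive_model(fns, X))
```

Note that this per-feature constraint covers individual monotonicity only; the weak and strong pairwise monotonicity studied in the paper (relating the model's sensitivity to one feature versus another) requires additional structure.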
Related papers
- Transferring Annotator- and Instance-dependent Transition Matrix for Learning from Crowds [88.06545572893455]
In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.
We first model the mixture of noise patterns by all annotators, and then transfer this modeling to individual annotators.
Experiments confirm the superiority of the proposed approach on synthetic and real-world crowd-sourcing data.
arXiv Detail & Related papers (2023-06-05T13:43:29Z)
- Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention [85.5379146125199]
Powerful Transformer architectures have proven superior in generating high-quality sentences.
In this work, we find that sparser attention values in Transformer could improve diversity.
We introduce a novel attention regularization loss to control the sharpness of the attention distribution.
arXiv Detail & Related papers (2022-11-14T07:53:16Z)
- On Biasing Transformer Attention Towards Monotonicity [20.205388243570003]
We introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks.
Experiments show that we can achieve largely monotonic behavior.
General monotonicity does not benefit transformer multi-head attention; however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior.
arXiv Detail & Related papers (2021-04-08T17:42:05Z)
- A study of latent monotonic attention variants [65.73442960456013]
End-to-end models reach state-of-the-art performance for speech recognition, but global soft attention is not monotonic.
We present a mathematically clean solution to introduce monotonicity, by introducing a new latent variable.
We show that our monotonic models perform as well as the global soft attention model.
arXiv Detail & Related papers (2021-03-30T22:35:56Z)
- Certified Monotonic Neural Networks [15.537695725617576]
We propose to certify the monotonicity of general piecewise-linear neural networks by solving a mixed-integer linear programming problem.
Our approach does not require human-designed constraints on the weight space and also yields a more accurate approximation.
arXiv Detail & Related papers (2020-11-20T04:58:13Z)
- Contextuality scenarios arising from networks of stochastic processes [68.8204255655161]
An empirical model is said to be contextual if its distributions cannot be obtained by marginalizing a joint distribution over X.
We present a different and classical source of contextual empirical models: the interaction among many processes.
The statistical behavior of the network in the long run makes the empirical model generically contextual and even strongly contextual.
arXiv Detail & Related papers (2020-06-22T16:57:52Z)
- Counterexample-Guided Learning of Monotonic Neural Networks [32.73558242733049]
We focus on monotonicity constraints, which are common and require that the function's output increases with increasing values of specific input features.
We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time.
We also propose a technique to use monotonicity as an inductive bias for deep learning.
arXiv Detail & Related papers (2020-06-16T01:04:26Z)
- Monotone operator equilibrium networks [97.86610752856987]
We develop a new class of implicit-depth models based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).
We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem.
We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point.
arXiv Detail & Related papers (2020-06-15T17:57:31Z)
- Exact Hard Monotonic Attention for Character-Level Transduction [76.66797368985453]
We show that neural sequence-to-sequence models that use non-monotonic soft attention often outperform popular monotonic models.
We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns a latent alignment jointly while learning to transduce.
arXiv Detail & Related papers (2019-05-15T17:51:09Z)
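Several abstracts above note that monotonicity is often violated in practice and that only certification methods (such as the mixed-integer programming approach) can prove it. As a hedged illustration of the weaker, empirical side of this (the helper and toy model below are hypothetical, not taken from any listed paper), a finite-difference probe can flag, but never certify, violations of individual monotonicity:

```python
import numpy as np

def check_individual_monotonicity(model, X, feature, eps=1e-3, increasing=True):
    """Fraction of samples where bumping one feature by eps moves the
    output in the expected direction. A fraction below 1.0 flags a
    violation; a fraction of 1.0 does not prove monotonicity (only
    certification methods give guarantees)."""
    X_up = X.copy()
    X_up[:, feature] += eps
    delta = model(X_up) - model(X)
    ok = delta >= 0 if increasing else delta <= 0
    return ok.mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))

# A quadratic model is not monotone in x0 around zero: samples with
# x0 < 0 should fail the check, so the fraction falls well below 1.0.
quadratic = lambda X: X[:, 0] ** 2 + X[:, 1]
frac = check_individual_monotonicity(quadratic, X, feature=0)
assert 0.0 < frac < 1.0
```

A model that is monotone in the probed feature (e.g. linear with a positive coefficient) passes with a fraction of exactly 1.0 on any sample.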
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.