Explanations for Monotonic Classifiers
- URL: http://arxiv.org/abs/2106.00154v1
- Date: Tue, 1 Jun 2021 00:14:12 GMT
- Title: Explanations for Monotonic Classifiers
- Authors: Joao Marques-Silva, Thomas Gerspacher, Martin Cooper, Alexey Ignatiev,
Nina Narodytska
- Abstract summary: In many classification tasks there is a requirement of monotonicity.
Despite comprehensive efforts on learning monotonic classifiers, dedicated approaches for explaining monotonic classifiers are scarce.
This paper describes novel algorithms for the computation of one formal explanation of a black-box monotonic classifier.
The paper presents a practically efficient model-agnostic algorithm for enumerating formal explanations.
- Score: 26.044285532808075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many classification tasks there is a requirement of monotonicity.
Concretely, if all else remains constant, increasing (resp. decreasing) the
value of one or more features must not decrease (resp. increase) the value of
the prediction. Despite comprehensive efforts on learning monotonic
classifiers, dedicated approaches for explaining monotonic classifiers are
scarce and classifier-specific. This paper describes novel algorithms for the
computation of one formal explanation of a (black-box) monotonic classifier.
These novel algorithms are polynomial in the run time complexity of the
classifier and the number of features. Furthermore, the paper presents a
practically efficient model-agnostic algorithm for enumerating formal
explanations.
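The abstract's definition can be made concrete. For a monotonic classifier, an abductive explanation of a positive prediction can be found with a single pass over the features: tentatively drop each feature to its lower bound and keep it in the explanation only if the prediction changes. The sketch below is illustrative, not the paper's algorithm; the classifier, weights, threshold, and feature bounds are all hypothetical assumptions.

```python
def predict(x):
    # Hypothetical monotonic classifier: nonnegative weights guarantee that
    # increasing any feature value never decreases the score (monotonicity).
    weights = [0.5, 0.3, 0.2]
    return 1 if sum(w * v for w, v in zip(weights, x)) >= 0.4 else 0

LOWER = [0.0, 0.0, 0.0]  # assumed lower bound of each feature's domain

def one_explanation(x):
    """Greedy explanation for a prediction of class 1.

    Because the classifier is monotonic, setting a free feature to its
    lower bound is the worst case for class 1, so one membership test
    per feature suffices: if dropping feature i leaves the prediction
    unchanged, feature i is irrelevant and stays free.
    """
    assert predict(x) == 1
    probe = list(x)           # features still fixed to their values in x
    explanation = []
    for i in range(len(x)):
        saved = probe[i]
        probe[i] = LOWER[i]   # tentatively free feature i
        if predict(probe) != 1:
            probe[i] = saved  # needed: keep feature i in the explanation
            explanation.append(i)
    return explanation

x = [1.0, 1.0, 0.0]
print(one_explanation(x))  # -> [0]: fixing feature 0 alone entails class 1
```

Note that the loop calls the classifier only as a black box, once per feature, which is the sense in which such procedures are polynomial in the classifier's run time and the number of features.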
Related papers
- One-for-many Counterfactual Explanations by Column Generation [10.722820966396192]
We consider the problem of generating a set of counterfactual explanations for a group of instances.
For the first time, we solve the problem of minimizing the number of explanations needed to explain all the instances.
A novel column generation framework is developed to efficiently search for the explanations.
arXiv Detail & Related papers (2024-02-12T10:03:31Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- A Multi-Class SWAP-Test Classifier [0.0]
This work presents the first multi-class SWAP-Test classifier inspired by its binary predecessor and the use of label states in recent work.
In contrast to previous work, the number of qubits required, the measurement strategy, and the topology of the circuits used are invariant to the number of classes.
Both analytical results and numerical simulations show that this classifier is not only effective when applied to diverse classification problems but also robust to certain conditions of noise.
arXiv Detail & Related papers (2023-02-06T18:31:43Z)
- An Upper Bound for the Distribution Overlap Index and Its Applications [18.481370450591317]
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions.
The proposed bound shows its value in one-class classification and domain shift analysis.
Our work shows significant promise toward broadening the applications of overlap-based metrics.
arXiv Detail & Related papers (2022-12-16T20:02:03Z)
- Class-Specific Explainability for Deep Time Series Classifiers [6.566615606042994]
We study the open problem of class-specific explainability for deep time series classifiers.
We design a novel explainability method, DEMUX, which learns saliency maps for explaining deep multi-class time series classifiers.
Our experimental study demonstrates that DEMUX outperforms nine state-of-the-art alternatives on five popular datasets.
arXiv Detail & Related papers (2022-10-11T12:37:15Z)
- Soft-margin classification of object manifolds [0.0]
A neural population responding to multiple appearances of a single object defines a manifold in the neural response space.
The ability to classify such manifolds is of interest, as object recognition and other computational tasks require a response that is insensitive to variability within a manifold.
Soft-margin classifiers are a larger class of algorithms and provide an additional regularization parameter used in applications to optimize performance outside the training set.
arXiv Detail & Related papers (2022-03-14T12:23:36Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
- Exact Hard Monotonic Attention for Character-Level Transduction [76.66797368985453]
We show that neural sequence-to-sequence models that use non-monotonic soft attention often outperform popular monotonic models.
We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns a latent alignment jointly while learning to transduce.
arXiv Detail & Related papers (2019-05-15T17:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.