TIGTEC : Token Importance Guided TExt Counterfactuals
- URL: http://arxiv.org/abs/2304.12425v1
- Date: Mon, 24 Apr 2023 20:11:58 GMT
- Title: TIGTEC : Token Importance Guided TExt Counterfactuals
- Authors: Milan Bhan and Jean-Noel Vittaut and Nicolas Chesneau and Marie-Jeanne
Lesot
- Abstract summary: This paper proposes TIGTEC, an efficient and modular method for generating sparse, plausible and diverse counterfactual explanations.
A new attention-based local feature importance is proposed.
Experiments show the relevance of TIGTEC in terms of success rate, sparsity, diversity and plausibility.
- Score: 1.1642121991499805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual examples explain a prediction by highlighting changes
to an instance that flip the outcome of a classifier. This paper proposes TIGTEC, an
efficient and modular method for generating sparse, plausible and diverse
counterfactual explanations for textual data. TIGTEC is a text editing
heuristic that targets and modifies words with high contribution using local
feature importance. A new attention-based local feature importance is proposed.
Counterfactual candidates are generated and assessed with a cost function
integrating semantic distance, while the solution space is efficiently explored
in a beam search fashion. The conducted experiments show the relevance of
TIGTEC in terms of success rate, sparsity, diversity and plausibility. This
method can be used in either a model-specific or a model-agnostic way, which makes
it very convenient for generating counterfactual explanations.
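The abstract gives enough of the procedure to sketch informally: rank tokens with an attention-based local importance, then beam-search over token substitutions, scoring candidates by semantic distance until the classifier's prediction flips. The Python below is a minimal sketch under assumed interfaces; the classify, importance, substitutes and distance arguments are hypothetical stand-ins, not TIGTEC's actual components.

```python
# Minimal sketch of the search loop described in the abstract. The callables passed in
# (classify, substitutes, distance) and the importance scores are hypothetical stand-ins,
# not TIGTEC's implementation.

def beam_search_counterfactual(tokens, classify, importance, substitutes,
                               distance, beam_width=4, max_steps=5, top_k=3):
    """Edit high-importance tokens until the (binary) prediction flips."""
    target = 1 - classify(tokens)                      # the label we want to reach
    beam = [tokens]
    for _ in range(max_steps):
        expansions = []
        for toks in beam:
            # try the top_k most important positions first
            positions = sorted(range(len(toks)), key=lambda i: -importance[i])[:top_k]
            for pos in positions:
                for sub in substitutes(toks[pos]):
                    cand = toks[:pos] + [sub] + toks[pos + 1:]
                    cost = distance(tokens, cand)      # semantic distance to the original
                    if classify(cand) == target:
                        return cand, cost              # sparse counterfactual found
                    expansions.append((cost, cand))
        # keep the beam_width cheapest candidates and keep editing
        beam = [cand for _, cand in sorted(expansions, key=lambda x: x[0])[:beam_width]]
    return None, None

# Toy usage: a classifier keyed on the word "good", a hand-set importance profile and a
# tiny substitution table -- just enough to exercise the search.
cf, cost = beam_search_counterfactual(
    ["the", "movie", "was", "good"],
    classify=lambda t: int("good" in t),
    importance=[0.1, 0.2, 0.1, 0.9],
    substitutes=lambda w: ["dull", "bad"] if w == "good" else [],
    distance=lambda a, b: sum(x != y for x, y in zip(a, b)),
)
print(cf, cost)   # e.g. ['the', 'movie', 'was', 'dull'] 1
```

In this toy run the attention-based importance is simply given as a fixed score vector; in TIGTEC it would come from the model's own attention weights.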
Related papers
- Visual Prompting for Generalized Few-shot Segmentation: A Multi-scale Approach [29.735863112700358]
We study the effectiveness of prompting a transformer-decoder with learned visual prompts for the generalized few-shot segmentation (GFSS) task.
Our goal is to achieve strong performance not only on novel categories with limited examples, but also to retain performance on base categories.
We introduce a unidirectional causal attention mechanism between the novel prompts, learned with limited examples, and the base prompts, learned with abundant data.
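As an illustration of that last point, the sketch below builds a boolean attention mask over two prompt groups; the assumed direction (novel prompts may attend to base prompts, base prompts are blocked from novel ones) is a guess for illustration, not necessarily the paper's formulation.

```python
# Illustrative mask for unidirectional attention between two prompt groups. The chosen
# direction (novel -> base allowed, base -> novel blocked) is an assumption for the sketch.
import torch

def unidirectional_prompt_mask(n_base: int, n_novel: int) -> torch.Tensor:
    n = n_base + n_novel
    mask = torch.zeros(n, n, dtype=torch.bool)   # True = attention blocked
    # base-prompt queries (first n_base rows) may not attend to novel-prompt keys
    mask[:n_base, n_base:] = True
    return mask

mask = unidirectional_prompt_mask(n_base=4, n_novel=2)
# Can be passed as attn_mask to torch.nn.MultiheadAttention, where True marks
# positions that are not allowed to attend.
```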
arXiv Detail & Related papers (2024-04-17T20:35:00Z)
- Gradable ChatGPT Translation Evaluation [7.697698018200632]
ChatGPT, as a language model based on large-scale pre-training, has a profound influence on the domain of machine translation.
The design of the translation prompt emerges as a key aspect that can wield influence over factors such as the style, precision and accuracy of the translation to a certain extent.
This paper proposes a generic taxonomy, which defines gradable translation prompts in terms of expression type, translation style, POS information and explicit statement.
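As a purely illustrative example of such a taxonomy in use, the snippet below assembles a translation prompt from the four dimensions the summary lists; all template wording and parameter names are invented, not taken from the paper.

```python
# Hypothetical prompt builder over the four dimensions named in the summary
# (expression type, translation style, POS information, explicit statement).
# All template wording is illustrative, not the paper's prompts.

def build_translation_prompt(text, expression="statement", style="formal",
                             pos_hint=None, explicit=True):
    instruction = ("Please translate the following sentence into English."
                   if expression == "statement"
                   else "Could you translate the following sentence into English?")
    parts = [instruction] if explicit else []
    parts.append(f"Target style: {style}.")
    if pos_hint:
        parts.append(f"POS hint: {pos_hint}.")
    parts.append(f"Sentence: {text}")
    return "\n".join(parts)

print(build_translation_prompt("Bonjour tout le monde", style="casual", expression="question"))
```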
arXiv Detail & Related papers (2024-01-18T13:58:10Z)
- Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors [0.3913403111891026]
The goal of knowledge graph completion (KGC) is to predict missing links in a KG using trained facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
arXiv Detail & Related papers (2023-11-07T11:17:55Z)
- ChatGraph: Interpretable Text Classification by Converting ChatGPT Knowledge to Graphs [54.48467003509595]
ChatGPT has shown superior performance in various natural language processing (NLP) tasks.
We propose a novel framework that leverages the power of ChatGPT for specific tasks, such as text classification.
Our method provides a more transparent decision-making process compared with previous text classification methods.
arXiv Detail & Related papers (2023-05-03T19:57:43Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
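One generic way to realize a context-aware classifier, sketched below under assumptions that need not match the paper's design, is to perturb static class weights with a projection of a pooled per-image context feature.

```python
# Rough sketch of conditioning a segmentation classifier on image context: the static
# class weights are shifted by a projection of a globally pooled context feature.
# This is a generic construction, not the paper's exact method.
import torch
import torch.nn as nn

class ContextAwareClassifier(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.static_weight = nn.Parameter(torch.randn(num_classes, dim) * 0.02)
        self.context_proj = nn.Linear(dim, num_classes * dim)
        self.dim, self.num_classes = dim, num_classes

    def forward(self, feats):                        # feats: [B, dim, H, W]
        context = feats.mean(dim=(2, 3))             # global average pool -> [B, dim]
        delta = self.context_proj(context).view(-1, self.num_classes, self.dim)
        weight = self.static_weight.unsqueeze(0) + delta        # per-image classifier
        # per-pixel logits: contract the feature dimension -> [B, num_classes, H, W]
        return torch.einsum("bcd,bdhw->bchw", weight, feats)

logits = ContextAwareClassifier(dim=16, num_classes=5)(torch.randn(2, 16, 8, 8))
print(logits.shape)   # torch.Size([2, 5, 8, 8])
```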
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- AugGPT: Leveraging ChatGPT for Text Data Augmentation [59.76140039943385]
We propose a text data augmentation approach based on ChatGPT (named AugGPT).
AugGPT rephrases each sentence in the training samples into multiple conceptually similar but semantically different samples.
Experiment results on few-shot learning text classification tasks show the superior performance of the proposed AugGPT approach.
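A minimal sketch of that augmentation loop, assuming a hypothetical paraphrase hook rather than any specific chat API:

```python
# Sketch of the augmentation loop the summary describes: each training sentence is
# rewritten several times and the rewrites inherit the original label. `paraphrase`
# is a hypothetical hook for whatever chat model is used; no specific API is assumed.

def augment_dataset(samples, paraphrase, n_rewrites=4):
    """samples: list of (text, label) pairs; returns originals plus rewrites."""
    augmented = list(samples)
    for text, label in samples:
        prompt = f"Rewrite the following sentence in {n_rewrites} different ways: {text}"
        for rewrite in paraphrase(prompt, n=n_rewrites):
            augmented.append((rewrite, label))       # rewrites keep the source label
    return augmented

# Toy usage with a stub paraphraser:
data = [("the battery lasts all day", "positive")]
stub = lambda prompt, n: [f"rewrite {i}" for i in range(n)]
print(augment_dataset(data, paraphrase=stub))
```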
arXiv Detail & Related papers (2023-02-25T06:58:16Z)
- Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs [18.56742938427262]
Multi-hop reasoning over real-life knowledge graphs (KGs) is a highly challenging problem.
To address this problem, a promising approach based on jointly embedding logical queries and KGs has recently been introduced.
We propose a novel TypE-aware Message Passing (TEMP) model, which enhances the entity and relation representations in queries.
arXiv Detail & Related papers (2022-05-02T10:05:13Z)
- Fine-Grained Visual Entailment [51.66881737644983]
We propose an extension of the visual entailment task, where the goal is to predict the logical relationship of fine-grained knowledge elements within a piece of text to an image.
Unlike prior work, our method is inherently explainable and makes logical predictions at different levels of granularity.
We evaluate our method on a new dataset of manually annotated knowledge elements and show that our method achieves 68.18% accuracy at this challenging task.
arXiv Detail & Related papers (2022-03-29T16:09:38Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
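Schematically, and under assumed interfaces that are not the AIP implementation, the loop could look like this: sample from a generator conditioned on the desired attribute until the classifier assigns the target label.

```python
# Schematic of an attribute-conditioned counterfactual loop. `generator` and `classifier`
# are hypothetical callables standing in for the conditioned generative model and the
# model being explained; this is not the AIP implementation.

def attribute_informed_counterfactual(instance, target_label, generator, classifier,
                                      max_tries=32):
    for _ in range(max_tries):
        candidate = generator(instance, attribute=target_label)   # conditioned generation
        if classifier(candidate) == target_label:
            return candidate                                      # desired label reached
    return None                                                   # no counterfactual found
```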
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on text classification, a canonical NLP task.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
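A compressed sketch of such two-level hypergraph attention, with simplified shapes and scoring that are assumptions rather than the paper's exact formulation:

```python
# Compressed sketch of two-level hypergraph attention: nodes are aggregated into
# hyperedges, then hyperedges are aggregated back into node representations, with
# attention weights at both steps. Shapes and scoring are simplified assumptions.
import torch
import torch.nn.functional as F

def hypergraph_attention_layer(x, incidence, w_node, w_edge, a_node, a_edge):
    # x: [N, d] node features; incidence: [E, N] 0/1 hyperedge membership
    neg_inf = torch.finfo(x.dtype).min
    # 1) node -> hyperedge aggregation with attention over member nodes
    h = x @ w_node                                              # [N, d_hidden]
    node_scores = (h @ a_node).squeeze(-1)                      # [N]
    alpha = node_scores.unsqueeze(0).expand_as(incidence).masked_fill(incidence == 0, neg_inf)
    alpha = torch.softmax(alpha, dim=1)                         # per-edge weights over nodes
    edge_repr = alpha @ h                                       # [E, d_hidden]
    # 2) hyperedge -> node aggregation with attention over incident hyperedges
    g = edge_repr @ w_edge                                      # [E, d_out]
    edge_scores = (g @ a_edge).squeeze(-1)                      # [E]
    beta = edge_scores.unsqueeze(1).expand_as(incidence).masked_fill(incidence == 0, neg_inf)
    beta = torch.softmax(beta, dim=0).t()                       # [N, E] weights over edges
    return F.elu(beta @ g)                                      # [N, d_out]

# Toy shapes: 5 tokens, 2 hyperedges, hidden/output width 8
x = torch.randn(5, 4)
inc = torch.tensor([[1, 1, 1, 0, 0], [0, 0, 1, 1, 1]])
out = hypergraph_attention_layer(
    x, inc,
    w_node=torch.randn(4, 8), w_edge=torch.randn(8, 8),
    a_node=torch.randn(8, 1), a_edge=torch.randn(8, 1),
)
print(out.shape)   # torch.Size([5, 8])
```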
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.