Analyzing Semantic Faithfulness of Language Models via Input
Intervention on Question Answering
- URL: http://arxiv.org/abs/2212.10696v2
- Date: Thu, 30 Nov 2023 10:42:26 GMT
- Title: Analyzing Semantic Faithfulness of Language Models via Input
Intervention on Question Answering
- Authors: Akshay Chaturvedi, Swarnadeep Bhar, Soumadeep Saha, Utpal Garain,
Nicholas Asher
- Abstract summary: We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering.
We show that transformer models fail to be semantically faithful once we perform two semantic interventions: deletion intervention and negation intervention.
We propose an intervention-based training regime that can mitigate the undesirable effects for deletion intervention by a significant margin.
- Score: 4.799822253865053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer-based language models have been shown to be highly effective for
several NLP tasks. In this paper, we consider three transformer models, BERT,
RoBERTa, and XLNet, in both small and large versions, and investigate how
faithful their representations are with respect to the semantic content of
texts. We formalize a notion of semantic faithfulness, in which the semantic
content of a text should causally figure in a model's inferences in question
answering. We then test this notion by observing a model's behavior on
answering questions about a story after performing two novel semantic
interventions: deletion intervention and negation intervention. While
transformer models achieve high performance on standard question answering
tasks, we show that they fail to be semantically faithful in a significant
number of cases once we perform these interventions (~50% of cases for deletion
intervention, and a ~20% drop in accuracy for negation intervention). We then
propose an intervention-based training regime that mitigates the undesirable
effects of deletion intervention by a significant margin (from ~50% to ~6%).
We analyze the inner workings of the models to better understand the
effectiveness of intervention-based training for deletion intervention. But we
show that this training does not attenuate other aspects of semantic
unfaithfulness such as the models' inability to deal with negation intervention
or to capture the predicate-argument structure of texts. We also test
InstructGPT, via prompting, for its ability to handle the two interventions and
to capture predicate-argument structure. While InstructGPT models achieve very
high performance on the predicate-argument structure task, they fail to
respond adequately to our deletion and negation interventions.
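
The two interventions lend themselves to a small illustration. The sketch below is not the authors' code: the story, question, supporting sentence, and the off-the-shelf deepset/roberta-base-squad2 checkpoint are illustrative assumptions rather than the paper's experimental setup. It deletes or negates the sentence that supports the answer and checks whether an extractive QA model revises its prediction; a semantically faithful model should stop returning the original answer after either intervention.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup) of the
# deletion and negation interventions on an off-the-shelf extractive QA model.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

story = (
    "Anna walked to the library in the morning. "
    "She borrowed a book about astronomy. "
    "In the evening she returned home by bus."
)
question = "What did Anna borrow?"
# Sentence that supports the gold answer (hypothetical example).
support = "She borrowed a book about astronomy. "

# Prediction on the unmodified story.
original = qa(question=question, context=story)

# Deletion intervention: remove the supporting sentence. A semantically
# faithful model should no longer return the original answer.
after_deletion = qa(question=question, context=story.replace(support, ""))

# Negation intervention: negate the supporting sentence instead of deleting it.
negated = story.replace(support, "She did not borrow a book about astronomy. ")
after_negation = qa(question=question, context=negated)

for name, pred in [("original", original),
                   ("deletion", after_deletion),
                   ("negation", after_negation)]:
    print(f"{name:>9}: {pred['answer']!r} (score {pred['score']:.3f})")
```

In the paper's terms, a case counts against semantic faithfulness when the model keeps producing the original answer even though the intervened story no longer supports it.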
Related papers
- Counterfactual Generation from Language Models [64.55296662926919]
We show that counterfactual reasoning is conceptually distinct from interventions.
We propose a framework for generating true string counterfactuals.
Our experiments demonstrate that the approach produces meaningful counterfactuals.
arXiv Detail & Related papers (2024-11-11T17:57:30Z)
- Towards Unifying Interpretability and Control: Evaluation via Intervention [25.4582941170387]
We propose intervention as a fundamental goal of interpretability and introduce success criteria to evaluate how well methods are able to control model behavior through interventions.
We extend four popular interpretability methods--sparse autoencoders, logit lens, tuned lens, and probing--into an abstract encoder-decoder framework.
We introduce two new evaluation metrics: intervention success rate and the coherence-intervention tradeoff, designed to measure the accuracy of explanations and their utility in controlling model behavior.
arXiv Detail & Related papers (2024-11-07T04:52:18Z)
- Composable Interventions for Language Models [60.32695044723103]
Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining.
But despite a flood of new methods, different types of interventions are largely developing independently.
We introduce composable interventions, a framework to study the effects of using multiple interventions on the same language models.
arXiv Detail & Related papers (2024-07-09T01:17:44Z)
- Rethinking harmless refusals when fine-tuning foundation models [0.8571111167616167]
We investigate the degree to which fine-tuning in Large Language Models (LLMs) effectively mitigates versus merely conceals undesirable behavior.
We identify a pervasive phenomenon we term reason-based deception, where models either stop producing reasoning traces or produce seemingly ethical reasoning traces that belie the unethical nature of their final outputs.
arXiv Detail & Related papers (2024-06-27T22:08:22Z)
- Estimating the Causal Effects of Natural Logic Features in Transformer-Based NLI Models [16.328341121232484]
We apply causal effect estimation strategies to measure the effect of context interventions.
We investigate Transformers' robustness to irrelevant changes and their sensitivity to impactful changes.
arXiv Detail & Related papers (2024-04-03T10:22:35Z)
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning [57.4036085386653]
We show that prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inferences based on lexical overlap.
We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning.
arXiv Detail & Related papers (2021-09-09T10:10:29Z)
- Paired Examples as Indirect Supervision in Latent Decision Models [109.76417071249945]
We introduce a way to leverage paired examples that provide stronger cues for learning latent decisions.
We apply our method to improve compositional question answering using neural module networks on the DROP dataset.
arXiv Detail & Related papers (2021-04-05T03:58:30Z)