EVADE: LLM-Based Explanation Generation and Validation for Error Detection in NLI
- URL: http://arxiv.org/abs/2511.08949v1
- Date: Thu, 13 Nov 2025 01:20:37 GMT
- Title: EVADE: LLM-Based Explanation Generation and Validation for Error Detection in NLI
- Authors: Longfei Zuo, Barbara Plank, Siyao Peng
- Abstract summary: EVADE is a framework for generating and validating explanations to detect annotation errors using large language models. Human label variation (HLV) arises when multiple labels are valid for the same instance, making it difficult to separate annotation errors from plausible variation.
- Score: 36.91800117379075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-quality datasets are critical for training and evaluating reliable NLP models. In tasks like natural language inference (NLI), human label variation (HLV) arises when multiple labels are valid for the same instance, making it difficult to separate annotation errors from plausible variation. An earlier framework, VARIERR (Weber-Genzel et al., 2024), asks multiple annotators to explain their label decisions in the first round and flag errors via validity judgments in the second round. However, conducting two rounds of manual annotation is costly and may limit the coverage of plausible labels or explanations. Our study proposes a new framework, EVADE, for generating and validating explanations to detect errors using large language models (LLMs). We perform a comprehensive analysis comparing human- and LLM-detected errors for NLI across distribution comparison, validation overlap, and impact on model fine-tuning. Our experiments demonstrate that LLM validation refines generated explanation distributions to more closely align with human annotations, and that removing LLM-detected errors from training data yields greater improvements in fine-tuning performance than removing errors identified by human annotators. This highlights the potential to scale error detection, reducing human effort while improving dataset quality under label variation.
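The abstract's key operational step is filtering flagged instances out of the training set before fine-tuning. The sketch below illustrates that filtering step in minimal form; the data layout, the `filter_detected_errors` helper, and the flagged ids are illustrative assumptions, not EVADE's actual implementation.

```python
# Illustrative sketch: drop NLI training instances whose labels were
# flagged as annotation errors before fine-tuning. The schema and ids
# below are hypothetical examples.

def filter_detected_errors(dataset, error_ids):
    """Return the dataset with flagged instances removed."""
    flagged = set(error_ids)
    return [ex for ex in dataset if ex["id"] not in flagged]

if __name__ == "__main__":
    train = [
        {"id": 1, "premise": "A man plays guitar.",
         "hypothesis": "A person makes music.", "label": "entailment"},
        {"id": 2, "premise": "A dog sleeps on the porch.",
         "hypothesis": "The dog is running.", "label": "entailment"},  # implausible label
        {"id": 3, "premise": "A child reads a book.",
         "hypothesis": "Someone is outside.", "label": "neutral"},
    ]
    # Suppose LLM validation flagged instance 2 as an annotation error.
    cleaned = filter_detected_errors(train, error_ids=[2])
    print([ex["id"] for ex in cleaned])  # [1, 3]
```

The same filter applies regardless of whether the error ids come from human validity judgments or from LLM validation, which is what makes the two error sources directly comparable in fine-tuning experiments.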
Related papers
- Human-Corrected Labels Learning: Enhancing Labels Quality via Human Correction of VLMs Discrepancies [6.58446551781724]
We propose Human-Corrected Labels (HCLs), a novel setting that enables efficient human correction of VLM-generated noisy labels. HCL deploys human correction only for instances with VLM discrepancies, achieving both higher-quality annotations and reduced labor costs. Our approach achieves superior classification performance and is robust to label noise, validating the effectiveness of HCL in practical weak supervision scenarios.
arXiv Detail & Related papers (2025-11-12T07:38:19Z) - Towards Automated Error Discovery: A Study in Conversational AI [48.735443116662026]
We introduce Automated Error Discovery, a framework for detecting and defining errors in conversational AI. We also propose SEEED (Soft Clustering Extended-Based Error Detection), an encoder-based approach to its implementation.
arXiv Detail & Related papers (2025-09-13T14:53:22Z) - ZeroED: Hybrid Zero-shot Error Detection through Large Language Model Reasoning [45.352592886478774]
We propose ZeroED, a novel hybrid zero-shot error detection framework. ZeroED operates in four steps: feature representation, error labeling, training data construction, and detector training. Experiments show ZeroED substantially outperforms state-of-the-art methods, with up to a 30% improvement in F1 score and up to a 90% reduction in token cost.
arXiv Detail & Related papers (2025-04-06T10:28:41Z) - TGEA: An error-annotated dataset and benchmark tasks for text generation from pretrained language models [57.758735361535486]
TGEA is an error-annotated dataset for text generation from pretrained language models (PLMs). We create an error taxonomy covering 24 types of errors occurring in PLM-generated sentences. This is the first dataset with comprehensive annotations for PLM-generated texts.
arXiv Detail & Related papers (2025-03-06T09:14:02Z) - Error Classification of Large Language Models on Math Word Problems: A Dynamically Adaptive Framework [79.40678802098026]
Math Word Problems serve as a crucial benchmark for evaluating Large Language Models' reasoning abilities. Current error classification methods rely on static and predefined categories. We propose Error-Aware Prompting (EAP), which incorporates common error patterns as explicit guidance.
arXiv Detail & Related papers (2025-01-26T16:17:57Z) - To Err Is Human; To Annotate, SILICON? Reducing Measurement Error in LLM Annotation [11.470318058523466]
Large Language Models (LLMs) promise a cost-effective, scalable alternative to human annotation. We develop the SILICON methodology to systematically reduce measurement error in LLM annotation. Our evidence indicates that reducing each error source is necessary, and that SILICON supports rigorous annotation in management research.
arXiv Detail & Related papers (2024-12-19T02:21:41Z) - Are LLMs Better than Reported? Detecting Label Errors and Mitigating Their Effect on Model Performance [28.524573212179124]
Large language models (LLMs) offer new opportunities to enhance the annotation process. We compare expert, crowd-sourced, and LLM-based annotations in terms of agreement, label quality, and efficiency. Our findings reveal a substantial number of label errors, which, when corrected, induce a significant upward shift in reported model performance.
arXiv Detail & Related papers (2024-10-24T16:27:03Z) - Subtle Errors in Reasoning: Preference Learning via Error-injected Self-editing [59.405145971637204]
We propose a novel preference learning framework called eRror-Injected Self-Editing (RISE). RISE injects predefined subtle errors into pivotal tokens in reasoning steps to construct hard pairs for error mitigation. Experiments validate the effectiveness of RISE, with preference learning on Qwen2-7B-Instruct yielding notable improvements of 3.0% on GSM8K and 7.9% on MATH with only 4.5K training samples.
arXiv Detail & Related papers (2024-10-09T07:43:38Z) - Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z) - Towards Fine-Grained Information: Identifying the Type and Location of Translation Errors [80.22825549235556]
Existing approaches can not synchronously consider error position and type.
We build an FG-TED model to predict addition and omission errors.
Experiments show that our model can identify both error type and position concurrently, and gives state-of-the-art results.
arXiv Detail & Related papers (2023-02-17T16:20:33Z) - Contrastive Error Attribution for Finetuned Language Models [35.80256755393739]
Noisy and misannotated data are a core cause of hallucinations and unfaithful outputs in Natural Language Generation (NLG) tasks.
We introduce a framework to identify and remove low-quality training instances that lead to undesirable outputs.
We show that existing approaches for error tracing, such as gradient-based influence measures, do not perform reliably for detecting faithfulness errors.
arXiv Detail & Related papers (2022-12-21T02:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.