Weakly Supervised Detection of Hallucinations in LLM Activations
- URL: http://arxiv.org/abs/2312.02798v1
- Date: Tue, 5 Dec 2023 14:35:11 GMT
- Title: Weakly Supervised Detection of Hallucinations in LLM Activations
- Authors: Miriam Rateike, Celia Cintas, John Wamburu, Tanya Akumu, Skyler
Speakman
- Abstract summary: We propose an auditing method to identify whether a large language model encodes hallucinations in its internal states.
We introduce a weakly supervised auditing technique using a subset scanning approach to detect anomalous patterns.
Our results confirm prior findings of BERT's limited internal capacity for encoding hallucinations, while OPT appears capable of encoding hallucination information internally.
- Score: 4.017261947780098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an auditing method to identify whether a large language model
(LLM) encodes patterns such as hallucinations in its internal states, which may
propagate to downstream tasks. We introduce a weakly supervised auditing
technique using a subset scanning approach to detect anomalous patterns in LLM
activations from pre-trained models. Importantly, our method does not need
knowledge of the type of patterns a priori. Instead, it relies on a reference
dataset devoid of anomalies during testing. Further, our approach enables the
identification of pivotal nodes responsible for encoding these patterns, which
may offer crucial insights for fine-tuning specific sub-networks for bias
mitigation. We introduce two new scanning methods to handle LLM activations for
anomalous sentences that may deviate from the expected distribution in either
direction. Our results confirm prior findings of BERT's limited internal
capacity for encoding hallucinations, while OPT appears capable of encoding
hallucination information internally. Importantly, our scanning approach,
without prior exposure to false statements, performs comparably to a fully
supervised out-of-distribution classifier.
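The abstract sketches a concrete recipe: compare each audited sentence's activations against a reference set known to be anomaly-free, convert them to empirical p-values, and search for the subset of nodes whose p-values are jointly most extreme. Below is a minimal, illustrative sketch of that idea using a Berk-Jones scan statistic; the array shapes, the greedy node search, the one-sided p-values, and all function names are assumptions made for illustration, not the authors' implementation (which, per the abstract, also handles deviations in either direction).
```python
import numpy as np

def empirical_pvalues(reference, test):
    """One-sided empirical p-values of audited activations against a clean reference.

    reference: (n_ref, n_nodes)  activations from anomaly-free sentences
    test:      (n_test, n_nodes) activations from sentences under audit
    """
    n_ref = reference.shape[0]
    # P(reference >= observed value): unusually large activations get small p-values.
    greater = (reference[None, :, :] >= test[:, None, :]).sum(axis=1)
    return (greater + 1) / (n_ref + 1)

def berk_jones(n_alpha, n, alpha):
    """Berk-Jones scan statistic for n_alpha out of n p-values at or below alpha."""
    if n_alpha <= n * alpha:
        return 0.0
    p_hat = n_alpha / n
    if p_hat >= 1.0:
        return n * np.log(1.0 / alpha)
    return n * (p_hat * np.log(p_hat / alpha)
                + (1.0 - p_hat) * np.log((1.0 - p_hat) / (1.0 - alpha)))

def scan_nodes(pvalues, alphas=(0.01, 0.05, 0.1, 0.25, 0.5)):
    """Greedy search for the subset of nodes that maximizes the scan statistic.

    pvalues: (n_test, n_nodes) matrix from empirical_pvalues.
    Returns (best score, best alpha, indices of the pivotal nodes).
    """
    n_test = pvalues.shape[0]
    best = (0.0, None, [])
    for alpha in alphas:
        below = (pvalues <= alpha).sum(axis=0)   # per-node count of small p-values
        order = np.argsort(-below)               # most anomalous nodes first
        n_alpha, n = 0, 0
        for k, node in enumerate(order, start=1):
            n_alpha += int(below[node])
            n += n_test
            score = berk_jones(n_alpha, n, alpha)
            if score > best[0]:
                best = (score, alpha, order[:k].tolist())
    return best
```
In this reading, the highest-scoring subset plays the role of the "pivotal nodes" the abstract mentions, and the maximized score serves as the anomaly signal for the audited set of sentences.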
Related papers
- Feeding LLM Annotations to BERT Classifiers at Your Own Risk [14.533304890042361]
Using LLM-generated labels to fine-tune smaller encoder-only models for text classification has gained popularity in various settings.
We demonstrate how the perennial curse of training on synthetic data manifests itself in this specific setup.
Compared to models trained on gold labels, we observe not only the expected performance degradation in accuracy and F1 score, but also increased instability across training runs and premature performance plateaus.
arXiv Detail & Related papers (2025-04-21T20:54:55Z)
- Learning on LLM Output Signatures for gray-box LLM Behavior Analysis [52.81120759532526]
Large Language Models (LLMs) have achieved widespread adoption, yet our understanding of their behavior remains limited.
We develop a transformer-based approach to process LLM output signatures that theoretically guarantees approximation of existing techniques.
Our approach achieves superior performance on hallucination and data contamination detection in gray-box settings.
arXiv Detail & Related papers (2025-03-18T09:04:37Z)
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Large Language Models for Anomaly Detection in Computational Workflows: from Supervised Fine-Tuning to In-Context Learning [9.601067780210006]
This paper leverages large language models (LLMs) for workflow anomaly detection by exploiting their ability to learn complex data patterns.
Two approaches are investigated: 1) supervised fine-tuning (SFT), where pre-trained LLMs are fine-tuned on labeled data for sentence classification to identify anomalies, and 2) in-context learning (ICL), where prompts containing task descriptions and examples guide LLMs in few-shot anomaly detection without fine-tuning.
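As a rough illustration of the ICL variant described in this entry, the snippet below assembles a few-shot prompt from a task description and labeled demonstrations; the log lines, labels, and the `call_llm` placeholder are invented for illustration and are not from the cited paper.
```python
# Illustrative few-shot (ICL) prompt construction for workflow anomaly detection.
# `call_llm` is a hypothetical stand-in for whatever completion API is used.
TASK = ("You are auditing computational workflow logs. "
        "Label each entry as NORMAL or ANOMALOUS.")

# Invented demonstrations used as in-context examples.
EXAMPLES = [
    ("job_42 finished in 118s, exit code 0", "NORMAL"),
    ("job_43 finished in 9641s, exit code 137, retried 5 times", "ANOMALOUS"),
]

def build_prompt(entry: str) -> str:
    shots = "\n".join(f"Entry: {x}\nLabel: {y}" for x, y in EXAMPLES)
    return f"{TASK}\n\n{shots}\n\nEntry: {entry}\nLabel:"

def classify(entry: str, call_llm) -> str:
    # No fine-tuning: the frozen LLM is guided entirely by the prompt.
    return call_llm(build_prompt(entry)).strip().split()[0]
```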
arXiv Detail & Related papers (2024-07-24T16:33:04Z)
- Anomaly Detection of Tabular Data Using LLMs [54.470648484612866]
We show that pre-trained large language models (LLMs) are zero-shot batch-level anomaly detectors.
We propose an end-to-end fine-tuning strategy to bring out the potential of LLMs in detecting real anomalies.
arXiv Detail & Related papers (2024-06-24T04:17:03Z)
- LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop [7.77005079649294]
An effective method is to probe Large Language Models using different versions of the same question.
To operationalize this auditing method at scale, we need an approach to create those probes reliably and automatically.
We propose the LLMAuditor framework, where a different LLM is used along with a human-in-the-loop (HIL) to create these probes.
This approach offers verifiability and transparency, while avoiding circular reliance on the same LLM.
arXiv Detail & Related papers (2024-02-14T17:49:31Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
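The cited approach combines perplexity measures with contextual information; the sketch below covers only the perplexity side, flagging tokens whose surprisal under a generic causal LM exceeds a threshold. The model choice (GPT-2) and the threshold value are assumptions, not the paper's settings.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_surprisal(text: str):
    """Per-token negative log-likelihood under the causal LM."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # Position i predicts token i+1.
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -logp.gather(1, ids[0, 1:, None]).squeeze(1)
    tokens = tok.convert_ids_to_tokens(ids[0, 1:].tolist())
    return list(zip(tokens, nll.tolist()))

def flag_suspicious(text: str, threshold: float = 10.0):
    # Tokens with surprisal above an assumed threshold are flagged as candidates.
    return [t for t, s in token_surprisal(text) if s > threshold]
```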
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection [63.56136319976554]
Large Language Models (LLMs) generate hallucinations, which can cause significant damage when deployed for mission-critical tasks.
We propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion.
We empirically evaluate our method and existing zero-resource detection methods on two datasets.
arXiv Detail & Related papers (2023-10-10T10:14:59Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
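For background on the autoencoder side of this entry, the sketch below shows the standard reconstruction-error anomaly score such methods build on; it does not reproduce the paper's self-supervised training regime or its manifold-focused objective.
```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Toy autoencoder; in practice the encoder/decoder would be convolutional."""
    def __init__(self, dim: int = 784, hidden: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model: TinyAutoencoder, x: torch.Tensor) -> torch.Tensor:
    # Trained on normal samples only, the model reconstructs them well;
    # inputs off the normal-data manifold yield a large per-sample error.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```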
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Detecting Hallucinated Content in Conditional Neural Sequence Generation [165.68948078624499]
We propose a task to predict whether each token in the output sequence is hallucinated (not contained in the input).
We also introduce a method for learning to detect hallucinations using pretrained language models fine-tuned on synthetic data.
arXiv Detail & Related papers (2020-11-05T00:18:53Z)
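One way to realize the token-level formulation in this last entry is to treat hallucination detection as binary token classification over a (source, output) pair, with an encoder fine-tuned on synthetically corrupted outputs. The sketch below assumes an XLM-RoBERTa backbone and a two-label scheme purely for illustration; the classification head is meaningless until it has been fine-tuned on such synthetic data.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Backbone and label scheme are illustrative; 0 = faithful, 1 = hallucinated.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

def hallucinated_tokens(source: str, output: str):
    """Return tokens predicted as hallucinated.

    Note: predictions are only meaningful after fine-tuning on synthetically
    hallucinated (source, output) pairs; in practice one would also restrict
    the flags to the output segment of the pair.
    """
    # Encode the pair so the model can compare output tokens against the input.
    enc = tok(source, output, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**enc).logits.argmax(dim=-1)[0]
    tokens = tok.convert_ids_to_tokens(enc.input_ids[0].tolist())
    return [t for t, y in zip(tokens, pred.tolist()) if y == 1]
```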