A Graph Signal Processing Framework for Hallucination Detection in Large Language Models
- URL: http://arxiv.org/abs/2510.19117v1
- Date: Tue, 21 Oct 2025 22:35:48 GMT
- Title: A Graph Signal Processing Framework for Hallucination Detection in Large Language Models
- Authors: Valentin Noël
- Abstract summary: We show that factual statements exhibit consistent "energy mountain" behavior with low-frequency convergence, while different hallucination types show distinct signatures. A simple detector using spectral signatures achieves 88.75% accuracy versus 75% for perplexity-based baselines. These findings indicate that spectral geometry may capture reasoning patterns and error behaviors, potentially offering a framework for hallucination detection in large language models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models achieve impressive results but distinguishing factual reasoning from hallucinations remains challenging. We propose a spectral analysis framework that models transformer layers as dynamic graphs induced by attention, with token embeddings as signals on these graphs. Through graph signal processing, we define diagnostics including Dirichlet energy, spectral entropy, and high-frequency energy ratios, with theoretical connections to computational stability. Experiments across GPT architectures suggest universal spectral patterns: factual statements exhibit consistent "energy mountain" behavior with low-frequency convergence, while different hallucination types show distinct signatures. Logical contradictions destabilize spectra with large effect sizes ($g>1.0$), semantic errors remain stable but show connectivity drift, and substitution hallucinations display intermediate perturbations. A simple detector using spectral signatures achieves 88.75% accuracy versus 75% for perplexity-based baselines, demonstrating practical utility. These findings indicate that spectral geometry may capture reasoning patterns and error behaviors, potentially offering a framework for hallucination detection in large language models.
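The abstract names the diagnostics (Dirichlet energy, spectral entropy, high-frequency energy ratio) but not their exact normalizations. The following minimal numpy sketch shows how such quantities are commonly computed in graph signal processing; the attention symmetrization, the median frequency split, and the function name `spectral_diagnostics` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_diagnostics(attn, X):
    """Illustrative sketch (not the paper's code): graph-signal diagnostics
    for one layer, treating token embeddings as signals on the attention graph.

    attn : (T, T) row-stochastic attention matrix
    X    : (T, d) token embeddings at the same layer
    """
    # Symmetrize attention into an undirected weighted adjacency (assumption).
    W = 0.5 * (attn + attn.T)
    np.fill_diagonal(W, 0.0)

    # Combinatorial graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Dirichlet energy tr(X^T L X) = 1/2 * sum_ij W_ij ||x_i - x_j||^2:
    # low values mean the embeddings vary smoothly over the attention graph.
    dirichlet = float(np.trace(X.T @ L @ X))

    # Graph Fourier transform: project the signal onto Laplacian eigenvectors.
    eigvals, U = np.linalg.eigh(L)
    energy = ((U.T @ X) ** 2).sum(axis=1)   # signal energy per graph frequency
    p = energy / (energy.sum() + 1e-12)

    # Spectral entropy of the energy distribution across frequencies.
    entropy = float(-(p * np.log(p + 1e-12)).sum())

    # High-frequency energy ratio; the median split is an assumption.
    hf_ratio = float(p[eigvals > np.median(eigvals)].sum())

    return dirichlet, entropy, hf_ratio
```

A detector in the spirit of the paper's "simple detector" would then threshold such features per statement, e.g. flagging text whose high-frequency ratio exceeds a value calibrated on held-out data; the reported 88.75% accuracy refers to the authors' detector, not to this sketch.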
Related papers
- HalluZig: Hallucination Detection using Zigzag Persistence
We introduce a new paradigm for hallucination detection by analyzing the dynamic topology of a model's layer-wise attention. Our core hypothesis is that factual and hallucinated generations exhibit distinct topological signatures. We validate our framework, HalluZig, on multiple benchmarks, demonstrating that it outperforms strong baselines.
arXiv Detail & Related papers (2026-01-04T14:55:43Z)
- Geometry of Reason: Spectral Signatures of Valid Mathematical Reasoning
We present a training-free method for detecting valid mathematical reasoning in large language models through spectral analysis of attention patterns. The method requires no training data, fine-tuning, or learned classifiers: a single threshold on a spectral metric suffices for high accuracy. These findings establish spectral graph analysis as a principled framework for reasoning verification with immediate applications to hallucination detection and AI safety monitoring.
arXiv Detail & Related papers (2026-01-02T18:49:37Z)
- Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
We provide theory for a Laplacian positional encoding that is invariant to eigenvector sign flips and to basis rotations within eigenspaces. We prove that this encoding yields node identifiability from a constant number of observations and establish a sample-complexity separation from architectures constrained by the Weisfeiler-Lehman test.
arXiv Detail & Related papers (2025-11-24T12:20:36Z)
- Unified Generative Latent Representation for Functional Brain Graphs
Functional brain graphs are often characterized by separate graph-theoretic or spectral descriptors. We estimate a unified graph representation through a graph transformer autoencoder with latent diffusion. From the diffusion-modeled distribution, we were able to sample biologically plausible and structurally grounded synthetic dense graphs.
arXiv Detail & Related papers (2025-11-06T16:52:49Z)
- Training-Free Spectral Fingerprints of Voice Processing in Transformers
We show that different transformer architectures implement identical linguistic computations via distinct connectivity patterns. Using graph signal processing on attention-induced token graphs, we track changes in connectivity across 20 languages and three model families.
arXiv Detail & Related papers (2025-10-21T23:33:43Z)
- Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection
We propose a Gradient-based Influence-Aware Constrained Decoding (GACD) method to address text-visual bias and co-occurrence bias. GACD effectively reduces hallucinations and improves the visual grounding of MLLM outputs.
arXiv Detail & Related papers (2025-09-03T08:13:52Z)
- Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
This paper introduces a subsequence association framework to systematically trace and understand hallucinations. The key insight is that hallucinations arise when dominant hallucinatory associations outweigh faithful ones. We propose a tracing algorithm that identifies causal subsequences by analyzing hallucination probabilities across randomized input contexts.
arXiv Detail & Related papers (2025-04-17T06:34:45Z)
- Hallucination Detection in LLMs with Topological Divergence on Attention Graphs
Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models. We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting.
arXiv Detail & Related papers (2025-04-14T10:06:27Z)
- HoloNets: Spectral Convolutions do extend to Directed Graphs
Conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs.
Here we show this traditional reliance on the graph Fourier transform to be superfluous.
We provide a frequency-response interpretation of newly developed filters, investigate the influence of the basis used to express filters and discuss the interplay with characteristic operators on which networks are based.
arXiv Detail & Related papers (2023-10-03T17:42:09Z)
- A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity
ViTs with self-attention modules have recently achieved great empirical success in many tasks.
However, theoretical analysis of their learning and generalization remains elusive.
This paper provides the first theoretical analysis of a shallow ViT for a classification task.
arXiv Detail & Related papers (2023-02-12T22:12:35Z)
- Stable and Transferable Hyper-Graph Neural Networks
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- Gaussian Processes on Graphs via Spectral Kernel Learning
We propose a graph spectrum-based Gaussian process for the prediction of signals defined on the nodes of a graph (a minimal kernel sketch follows this list). We demonstrate the interpretability of the model in synthetic experiments, showing that various ground-truth spectral filters can be accurately recovered.
arXiv Detail & Related papers (2020-06-12T17:51:22Z)
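The spectral-kernel construction behind that last entry is standard enough to sketch: applying a filter g to the graph Laplacian's eigenvalues yields a positive semi-definite kernel over nodes, K = U diag(g(lambda)) U^T. Below is a minimal numpy sketch that assumes a fixed diffusion filter g(lambda) = exp(-beta * lambda); the cited paper instead learns the spectral filter from data, and the helper names here are hypothetical.

```python
import numpy as np

def graph_spectral_kernel(W, beta=1.0):
    """Hypothetical sketch: a diffusion-style GP kernel on graph nodes.

    W    : (N, N) symmetric weighted adjacency matrix
    beta : diffusion scale; a fixed filter is an assumption, since the
           cited paper learns the spectral filter from data.
    """
    L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)         # graph frequencies and Fourier basis
    g = np.exp(-beta * lam)            # spectral filter on the eigenvalues
    return (U * g) @ U.T               # K = U diag(g) U^T, a valid PSD kernel

def gp_posterior_mean(K, train_idx, y_train, noise=1e-2):
    """Closed-form GP regression of a node signal from observed nodes."""
    K_tr = K[np.ix_(train_idx, train_idx)] + noise * np.eye(len(train_idx))
    return K[:, train_idx] @ np.linalg.solve(K_tr, y_train)
```

Observing a signal on a subset of nodes then gives the usual closed-form Gaussian-process posterior mean for the remaining nodes.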