Geometry of Reason: Spectral Signatures of Valid Mathematical Reasoning
- URL: http://arxiv.org/abs/2601.00791v1
- Date: Fri, 02 Jan 2026 18:49:37 GMT
- Title: Geometry of Reason: Spectral Signatures of Valid Mathematical Reasoning
- Authors: Valentin Noël
- Abstract summary: We present a training-free method for detecting valid mathematical reasoning in large language models through spectral analysis of attention patterns. The method requires no training data, fine-tuning, or learned classifiers: a single threshold on a spectral metric suffices for high accuracy. These findings establish spectral graph analysis as a principled framework for reasoning verification with immediate applications to hallucination detection and AI safety monitoring.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a training-free method for detecting valid mathematical reasoning in large language models through spectral analysis of attention patterns. By treating attention matrices as adjacency matrices of dynamic graphs over tokens, we extract four interpretable spectral diagnostics: the Fiedler value (algebraic connectivity), the high-frequency energy ratio (HFER), graph signal smoothness, and spectral entropy. These diagnostics exhibit statistically significant differences between valid and invalid mathematical proofs. Experiments across seven transformer models from four independent architectural families (Meta Llama, Alibaba Qwen, Microsoft Phi, and Mistral AI) demonstrate that this spectral signature produces effect sizes up to Cohen's $d = 3.30$ ($p < 10^{-116}$), enabling 85.0--95.6\% classification accuracy under rigorous evaluation, with calibrated thresholds reaching 93--95\% on the full dataset. The method requires no training data, fine-tuning, or learned classifiers: a single threshold on a spectral metric suffices for high accuracy. Through systematic label correction, we discover that the spectral method detects logical coherence rather than compiler acceptance, identifying mathematically valid proofs that formal verifiers reject due to technical failures. We further identify an architectural dependency: Mistral-7B's Sliding Window Attention shifts the discriminative signal from HFER to late-layer Smoothness ($d = 2.09$, $p_{\text{MW}} = 1.16 \times 10^{-48}$), revealing that attention mechanism design affects which spectral features capture reasoning validity. These findings establish spectral graph analysis as a principled framework for reasoning verification with immediate applications to hallucination detection and AI safety monitoring.
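The four diagnostics named in the abstract all follow from the eigendecomposition of a graph Laplacian built from an attention matrix. The sketch below shows one way to compute them under standard graph-signal-processing definitions; the paper's exact normalization, layer/head aggregation, frequency cutoff, and choice of token-level signal are assumptions here, not taken from the source.

```python
import numpy as np

def spectral_diagnostics(attn, signal, high_freq_frac=0.5):
    """Spectral diagnostics of one attention map, treated as a weighted
    token graph. Definitions follow standard graph signal processing;
    the paper's precise normalization and cutoff choices may differ."""
    # Symmetrize the (row-stochastic) attention matrix into an undirected
    # weighted adjacency matrix and drop self-loops.
    W = 0.5 * (attn + attn.T)
    np.fill_diagonal(W, 0.0)

    # Combinatorial graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Eigendecomposition: eigenvalues act as graph frequencies,
    # eigenvectors form the graph Fourier basis.
    eigvals, eigvecs = np.linalg.eigh(L)

    # Fiedler value: second-smallest eigenvalue (algebraic connectivity).
    fiedler = float(eigvals[1])

    # Graph Fourier transform of a token-level signal (e.g. hidden
    # states reduced to one scalar per token -- an assumed choice).
    x_hat = eigvecs.T @ signal
    energy = x_hat ** 2
    total = float(energy.sum()) + 1e-12

    # HFER: fraction of signal energy in the top `high_freq_frac`
    # of the spectrum (the cutoff fraction is an assumption).
    cut = int((1.0 - high_freq_frac) * len(eigvals))
    hfer = float(energy[cut:].sum()) / total

    # Smoothness: Laplacian quadratic form, normalized by signal energy.
    smoothness = float(signal @ L @ signal) / (float(signal @ signal) + 1e-12)

    # Spectral entropy of the normalized energy distribution.
    p = energy / total
    entropy = float(-(p * np.log(p + 1e-12)).sum())

    return fiedler, hfer, smoothness, entropy
```

Because attention weights are strictly positive, the symmetrized graph is connected and the Fiedler value is positive; the Laplacian is positive semidefinite, so smoothness is nonnegative, and HFER lies in [0, 1] by construction.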
Related papers
- On the Spectral Flattening of Quantized Embeddings [25.64641307046705]
Training Large Language Models at ultra-low precision is critically impeded by instability rooted in the conflict between discrete quantization constraints and the intrinsic heavy-tailed spectral nature of linguistic data. This work not only quantifies the spectral sensitivity of LLMs but also establishes spectral fidelity as a necessary condition for stable low-bit optimization.
arXiv Detail & Related papers (2026-02-01T02:21:53Z) - Spectral Geometry for Deep Learning: Compression and Hallucination Detection via Random Matrix Theory [0.0]
This thesis proposes a unified framework based on spectral geometry and random matrix theory to address both problems. The first contribution, EigenTrack, is a real-time method for detecting hallucinations and out-of-distribution behavior in language and vision-language models. The second contribution, RMT-KD, is a principled compression method that identifies informative spectral components.
arXiv Detail & Related papers (2026-01-24T08:07:22Z) - Spectral Archaeology: The Causal Topology of Model Evolution [0.0]
Behavioral benchmarks tell us what a model does, but not how. We introduce a training-free mechanistic probe using attention-graph spectra. Across 12 models and 10 languages, these measures yield stable "fingerprints" that expose discontinuities missed by standard evaluation.
arXiv Detail & Related papers (2026-01-06T21:26:54Z) - SIGMA: Scalable Spectral Insights for LLM Collapse [51.863164847253366]
We introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework for model collapse. By deriving deterministic bounds on the Gram matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. We demonstrate that SIGMA effectively captures the transition toward collapsed states, offering theoretical insights into the mechanics of collapse.
arXiv Detail & Related papers (2026-01-06T19:47:11Z) - A Graph Signal Processing Framework for Hallucination Detection in Large Language Models [0.0]
We show that factual statements exhibit consistent "energy mountain" behavior with low-frequency convergence, while different hallucination types show distinct signatures. A simple detector using spectral signatures achieves 88.75% accuracy versus 75% for perplexity-based baselines. These findings indicate that spectral geometry may capture reasoning patterns and error behaviors, potentially offering a framework for detection in large language models.
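Both this paper and the main abstract describe detectors built from a single cutoff on one spectral score. A minimal sketch of such a one-threshold classifier is below; the helper name and the convention that higher scores indicate invalid/hallucinated samples are assumptions for illustration.

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the single cutoff on a spectral score (e.g. per-sample HFER)
    that maximizes accuracy on labeled examples. Hypothetical helper:
    label 1 = invalid/hallucinated, predicted when score exceeds cutoff."""
    order = np.argsort(scores)
    s = np.asarray(scores, dtype=float)[order]
    y = np.asarray(labels)[order]
    best_acc, best_t = 0.0, s[0] - 1.0
    # Try a cutoff midway between each pair of consecutive sorted scores.
    for i in range(len(s) - 1):
        t = 0.5 * (s[i] + s[i + 1])
        pred = (s > t).astype(int)
        acc = float((pred == y).mean())
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_t, best_acc
```

With well-separated score distributions, a single cutoff like this is all the "training" such a detector needs, which is what makes the approach classifier-free.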
arXiv Detail & Related papers (2025-10-21T22:35:48Z) - From Eigenmodes to Proofs: Integrating Graph Spectral Operators with Symbolic Interpretable Reasoning [0.0]
We introduce Spectral NSR, a fully spectral neuro-symbolic reasoning framework. It embeds logical rules as spectral templates and performs inference directly in the graph spectral domain. We show that Spectral NSR achieves superior accuracy, faster inference, improved robustness to adversarial perturbations, and higher interpretability compared to leading baselines.
arXiv Detail & Related papers (2025-09-07T01:12:20Z) - SpectrumFM: Redefining Spectrum Cognition via Foundation Modeling [65.65474629224558]
We propose a spectrum foundation model, termed SpectrumFM, which provides a new paradigm for spectrum cognition. An innovative spectrum encoder that exploits convolutional neural networks is proposed to effectively capture both fine-grained local signal structures and high-level global dependencies in the spectrum data. Two novel self-supervised learning tasks, namely masked reconstruction and next-slot signal prediction, are developed for pre-training SpectrumFM, enabling the model to learn rich and transferable representations.
arXiv Detail & Related papers (2025-08-02T14:40:50Z) - Rethinking Contrastive Learning in Graph Anomaly Detection: A Clean-View Perspective [54.605073936695575]
Graph anomaly detection aims to identify unusual patterns in graph-based data, with wide applications in fields such as web security and financial fraud detection. Existing methods rely on contrastive learning, assuming that a lower similarity between a node and its local subgraph indicates abnormality. The presence of interfering edges invalidates this assumption, since it introduces disruptive noise that compromises the contrastive learning process. We propose a Clean-View Enhanced Graph Anomaly Detection framework (CVGAD), which includes a multi-scale anomaly awareness module to identify key sources of interference in the contrastive learning process.
arXiv Detail & Related papers (2025-05-23T15:05:56Z) - A Self-supervised Learning Method for Raman Spectroscopy based on Masked Autoencoders [3.9517125314802306]
We propose a self-supervised learning paradigm for Raman spectroscopy based on a Masked AutoEncoder, termed SMAE. SMAE does not require any spectral annotations during pre-training. By randomly masking and then reconstructing the spectral information, the model learns essential spectral features.
arXiv Detail & Related papers (2025-04-21T10:44:06Z) - Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection [59.042018542376596]
Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet remains challenging due to two key factors. Anomaly-Aware Pre-Training and Fine-Tuning (APF) is a framework proposed to mitigate these challenges. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF in comparison to state-of-the-art baselines.
arXiv Detail & Related papers (2025-04-19T09:57:35Z) - Hallucination Detection in LLMs with Topological Divergence on Attention Graphs [60.83579255387347]
Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models. We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting.
arXiv Detail & Related papers (2025-04-14T10:06:27Z) - Graph Structural Attack by Spectral Distance [35.998704625736394]
Graph Convolutional Networks (GCNs) have fueled a surge of interest due to their superior performance on graph learning tasks.
In this paper, an effective graph structural attack is investigated to disrupt graph spectral filters in the Fourier domain.
arXiv Detail & Related papers (2021-11-01T04:02:34Z) - Offline detection of change-points in the mean for stationary graph signals [55.98760097296213]
We propose an offline method that relies on the concept of graph signal stationarity.
Our detector comes with a proof of a non-asymptotic oracle inequality.
arXiv Detail & Related papers (2020-06-18T15:51:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.