Towards Interpretable Hallucination Analysis and Mitigation in LVLMs via Contrastive Neuron Steering
- URL: http://arxiv.org/abs/2602.00621v1
- Date: Sat, 31 Jan 2026 09:21:04 GMT
- Title: Towards Interpretable Hallucination Analysis and Mitigation in LVLMs via Contrastive Neuron Steering
- Authors: Guangtao Lyu, Xinyi Cheng, Qi Liu, Chenghao Xu, Jiexi Yan, Muli Yang, Fen Fang, Cheng Deng
- Abstract summary: Existing mitigation methods predominantly focus on output-level adjustments, leaving internal mechanisms that give rise to hallucinations largely unexplored. We propose Contrastive Neuron Steering (CNS), which identifies image-specific neurons via contrastive analysis between clean and noisy inputs. CNS selectively amplifies informative neurons while suppressing perturbation-induced activations, producing more robust and semantically grounded visual representations.
- Score: 60.23509717784518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LVLMs achieve remarkable multimodal understanding and generation but remain susceptible to hallucinations. Existing mitigation methods predominantly focus on output-level adjustments, leaving the internal mechanisms that give rise to these hallucinations largely unexplored. To gain a deeper understanding, we adopt a representation-level perspective by introducing sparse autoencoders (SAEs) to decompose dense visual embeddings into sparse, interpretable neurons. Through neuron-level analysis, we identify distinct neuron types, including always-on neurons and image-specific neurons. Our findings reveal that hallucinations often result from disruptions or spurious activations of image-specific neurons, while always-on neurons remain largely stable. Moreover, selectively enhancing or suppressing image-specific neurons enables controllable intervention in LVLM outputs, improving visual grounding and reducing hallucinations. Building on these insights, we propose Contrastive Neuron Steering (CNS), which identifies image-specific neurons via contrastive analysis between clean and noisy inputs. CNS selectively amplifies informative neurons while suppressing perturbation-induced activations, producing more robust and semantically grounded visual representations. This not only enhances visual understanding but also effectively mitigates hallucinations. By operating at the prefilling stage, CNS is fully compatible with existing decoding-stage methods. Extensive experiments on both hallucination-focused and general multimodal benchmarks demonstrate that CNS consistently reduces hallucinations while preserving overall multimodal understanding.
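The pipeline the abstract describes (SAE decomposition of visual embeddings, contrastive comparison of clean vs. noisy activations, then selective amplification/suppression) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the `sae_encode` helper, the top-k selection rule, and the `boost`/`damp` parameters are all assumptions, and the weights are random stand-ins for a trained sparse autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def sae_encode(z, W_enc, b_enc):
    """Sparse-autoencoder encoder: ReLU(W_enc @ z + b_enc) maps a dense
    visual embedding to sparse, per-neuron activations."""
    return np.maximum(W_enc @ z + b_enc, 0.0)

def contrastive_steer(z_clean, z_noisy, W_enc, b_enc, W_dec,
                      top_k=8, boost=1.5, damp=0.0):
    """Toy contrastive neuron steering (illustrative only).

    Neurons whose activation drops most under input noise are treated as
    image-specific and amplified; neurons that fire only under noise are
    treated as perturbation-induced and suppressed.
    """
    a_clean = sae_encode(z_clean, W_enc, b_enc)
    a_noisy = sae_encode(z_noisy, W_enc, b_enc)
    delta = a_clean - a_noisy              # contrastive activation gap

    steered = a_clean.copy()
    informative = np.argsort(delta)[-top_k:]  # largest clean-vs-noisy gap
    spurious = np.where(delta < 0)[0]         # stronger under noise
    steered[informative] *= boost
    steered[spurious] *= damp
    return W_dec @ steered                 # decode back to a dense embedding

# Tiny demo with random weights standing in for a trained SAE.
d, m = 16, 64                              # embedding dim, SAE neuron count
W_enc = rng.normal(size=(m, d))
b_enc = rng.normal(size=m)
W_dec = rng.normal(size=(d, m)) / m
z_clean = rng.normal(size=d)
z_noisy = z_clean + 0.5 * rng.normal(size=d)
out = contrastive_steer(z_clean, z_noisy, W_enc, b_enc, W_dec)
print(out.shape)  # (16,)
```

Because the steering happens on the visual embedding before generation, it corresponds to the prefilling-stage intervention the abstract mentions, which is why it can compose with decoding-stage mitigation methods.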
Related papers
- H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs [56.31565301428888]
We identify hallucination-associated neurons (H-Neurons) in large language models (LLMs). In terms of identification, we demonstrate that a remarkably sparse subset of neurons can reliably predict hallucination occurrences. In terms of behavioral impact, controlled interventions reveal that these neurons are causally linked to over-compliance behaviors.
arXiv Detail & Related papers (2025-12-01T15:32:14Z) - Know Thyself by Knowing Others: Learning Neuron Identity from Population Context [9.798773806523114]
We present the first systematic scaling analysis for neuron-level representation learning. We show that increasing the number of animals used during pretraining consistently improves downstream performance. The results highlight how large, diverse neural datasets enable models to recover information about neuron identity that generalizes across animals.
arXiv Detail & Related papers (2025-12-01T02:28:04Z) - Neuronal Group Communication for Efficient Neural representation [85.36421257648294]
This paper addresses the question of how to build large neural systems that learn efficient, modular, and interpretable representations. We propose Neuronal Group Communication (NGC), a theory-driven framework that reimagines a neural network as a dynamical system of interacting neuronal groups. NGC treats weights as transient interactions between embedding-like neuronal states, with neural computation unfolding through iterative communication among groups of neurons.
arXiv Detail & Related papers (2025-10-19T14:23:35Z) - BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings [19.761793010311614]
We introduce BrainFLORA, a unified framework for integrating cross-modal neuroimaging data to construct a shared neural representation. Our approach leverages multimodal large language models (MLLMs) augmented with modality-specific adapters and task decoders, achieving state-of-the-art performance in joint-subject visual retrieval. BrainFLORA offers novel implications for cognitive neuroscience and brain-computer interfaces (BCIs).
arXiv Detail & Related papers (2025-07-13T18:56:17Z) - Spatiotemporal Learning of Brain Dynamics from fMRI Using Frequency-Specific Multi-Band Attention for Cognitive and Psychiatric Applications [5.199807441687141]
Multi-Band Brain Net (MBBN) is the first transformer-based framework to explicitly model frequency-specific brain dynamics. Trained on 49,673 individuals across three large-scale cohorts, MBBN sets a new state of the art in predicting psychiatric and cognitive outcomes.
arXiv Detail & Related papers (2025-03-30T10:56:50Z) - Deciphering Functions of Neurons in Vision-Language Models [38.978287253624565]
This study delves into the internals of vision-language models (VLMs) to interpret the functions of individual neurons. We observe the activations of neurons with respect to the input visual tokens and text tokens, and reveal some interesting findings. We build a framework that automates the explanation of neurons with the assistance of GPT-4o. For visual neurons, we propose an activation simulator to assess the reliability of their explanations.
arXiv Detail & Related papers (2025-02-10T10:00:06Z) - Neurons Speak in Ranges: Breaking Free from Discrete Neuronal Attribution [16.460751105639623]
We show that even highly salient neurons consistently exhibit polysemantic behavior. This observation motivates a shift from neuron attribution to range-based interpretation. We introduce NeuronLens, a novel range-based interpretation and manipulation framework.
arXiv Detail & Related papers (2025-02-04T03:33:55Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network. We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms. We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - ConceptLens: from Pixels to Understanding [1.3466710708566176]
ConceptLens is an innovative tool designed to illuminate the intricate workings of deep neural networks (DNNs) by visualizing hidden neuron activations.
By integrating deep learning with symbolic methods, ConceptLens offers users a unique way to understand what triggers neuron activations.
arXiv Detail & Related papers (2024-10-04T20:49:12Z) - Interpreting the Second-Order Effects of Neurons in CLIP [73.54377859089801]
We interpret the function of individual neurons in CLIP by automatically describing them using text.<n>We present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output.<n>Our results indicate that an automated interpretation of neurons can be used for model deception and for introducing new model capabilities.
arXiv Detail & Related papers (2024-06-06T17:59:52Z) - Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z) - Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.