The Deleuzian Representation Hypothesis
- URL: http://arxiv.org/abs/2512.19734v1
- Date: Wed, 17 Dec 2025 11:51:25 GMT
- Title: The Deleuzian Representation Hypothesis
- Authors: Clément Cornet, Romaric Besançon, Hervé Le Borgne
- Abstract summary: We propose an alternative to sparse autoencoders (SAEs) as a simple and effective unsupervised method for extracting interpretable concepts from neural networks. The core idea is to cluster differences in activations, which we formally justify within a discriminant analysis framework. We evaluate the approach across five models and three modalities, measuring concept quality, diversity, and consistency.
- Score: 5.593753320304013
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose an alternative to sparse autoencoders (SAEs) as a simple and effective unsupervised method for extracting interpretable concepts from neural networks. The core idea is to cluster differences in activations, which we formally justify within a discriminant analysis framework. To enhance the diversity of extracted concepts, we refine the approach by weighting the clustering using the skewness of activations. The method aligns with Deleuze's modern view of concepts as differences. We evaluate the approach across five models and three modalities (vision, language, and audio), measuring concept quality, diversity, and consistency. Our results show that the proposed method achieves concept quality surpassing prior unsupervised SAE variants while approaching supervised baselines, and that the extracted concepts enable steering of a model's inner representations, demonstrating their causal influence on downstream behavior.
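As a rough illustration of the abstract's core idea (a hypothetical sketch, not the paper's exact procedure: the pairing scheme, the `extract_concepts` helper, and the skewness weighting are all assumptions), one could cluster differences between pairs of activations and weight dimensions by activation skewness:

```python
import numpy as np
from scipy.stats import skew
from sklearn.cluster import KMeans

def extract_concepts(acts, n_concepts=8, seed=0):
    """Cluster differences of activations, weighting dimensions by skewness.

    acts: (n_samples, d) activation matrix from some layer.
    Returns cluster centroids interpreted as concept directions.
    """
    rng = np.random.default_rng(seed)
    # Form differences between random pairs of activations.
    i = rng.integers(0, len(acts), size=4 * len(acts))
    j = rng.integers(0, len(acts), size=4 * len(acts))
    diffs = acts[i] - acts[j]
    # Weight each dimension by the skewness of its activation distribution.
    w = np.abs(skew(acts, axis=0)) + 1e-8
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=seed)
    km.fit(diffs * w)
    # Undo the weighting to recover concept directions in activation space.
    return km.cluster_centers_ / w

acts = np.random.default_rng(1).normal(size=(200, 16))
concepts = extract_concepts(acts, n_concepts=4)
print(concepts.shape)  # (4, 16)
```

The returned centroids would then be candidate concept directions that can be used for steering a model's inner representations, as the abstract describes.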
Related papers
- Differential Vector Erasure: Unified Training-Free Concept Erasure for Flow Matching Models [49.10620605347065]
We propose Differential Vector Erasure (DVE), a training-free concept erasure method specifically designed for flow matching models. Our key insight is that semantic concepts are implicitly encoded in the directional structure of the velocity field governing the generative flow. During inference, DVE selectively removes concept-specific components by projecting the velocity field onto the differential direction, enabling precise concept suppression without affecting irrelevant semantics.
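The projection step described above can be sketched as a simple orthogonal projection (hypothetical code; the function name and the treatment of the velocity field as a plain matrix are assumptions, not DVE's actual implementation):

```python
import numpy as np

def erase_concept(velocity, concept_dir):
    """Remove the component of a velocity field along a concept direction.

    velocity: (n, d) velocity vectors from a flow-matching model.
    concept_dir: (d,) differential direction encoding the concept.
    """
    u = concept_dir / np.linalg.norm(concept_dir)
    # Project out the concept-specific component: v - (v . u) u
    return velocity - np.outer(velocity @ u, u)

v = np.array([[3.0, 4.0], [1.0, 0.0]])
u = np.array([1.0, 0.0])
print(erase_concept(v, u))  # [[0., 4.], [0., 0.]]
```

Components orthogonal to the concept direction pass through unchanged, which matches the claim that irrelevant semantics are unaffected.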
arXiv Detail & Related papers (2026-02-01T08:05:45Z) - Clarity: The Flexibility-Interpretability Trade-Off in Sparsity-aware Concept Bottleneck Models [12.322360020814516]
Vision-Language Models (VLMs) are often treated as black boxes, with limited or non-existent investigation of their decision-making process. We introduce the notion of clarity, a measure capturing the interplay between downstream performance and the sparsity and precision of the concept representation. Our experiments reveal a critical trade-off between flexibility and interpretability, under which a given method can exhibit markedly different behaviors even at comparable performance levels.
arXiv Detail & Related papers (2026-01-29T16:28:55Z) - Towards more holistic interpretability: A lightweight disentangled Concept Bottleneck Model [5.700536552863068]
Concept Bottleneck Models (CBMs) enhance interpretability by predicting human-understandable concepts as intermediate representations. We propose a lightweight Disentangled Concept Bottleneck Model (LDCBM) that automatically groups visual features into semantically meaningful components. Experiments on three diverse datasets demonstrate that LDCBM achieves higher concept and class accuracy, outperforming previous CBMs in both interpretability and classification performance.
arXiv Detail & Related papers (2025-10-17T15:59:30Z) - FACE: Faithful Automatic Concept Extraction [4.417419748257645]
FACE (Faithful Automatic Concept Extraction) is a novel framework that augments Non-negative Matrix Factorization (NMF) with a Kullback-Leibler (KL) divergence regularization term to ensure alignment between the model's original and concept-based predictions. We provide theoretical guarantees showing that minimizing the KL divergence bounds the deviation in predictive distributions, thereby promoting faithful local linearity in the learned concept space.
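A FACE-style objective could be sketched as an NMF reconstruction loss plus a KL penalty between the original and concept-based predictive distributions (hypothetical code; the exact loss form, the `face_objective` name, and the weighting `lam` are assumptions, not the paper's formulation):

```python
import numpy as np

def face_objective(A, W, H, p_orig, p_concept, lam=0.1):
    """NMF reconstruction regularized by a KL term (illustrative only).

    A ≈ W @ H is the standard non-negative factorization of activations A;
    the KL term penalizes divergence between the model's original
    predictive distribution p_orig and the concept-based one p_concept.
    """
    recon = np.linalg.norm(A - W @ H, "fro") ** 2
    kl = np.sum(p_orig * np.log(p_orig / p_concept))
    return recon + lam * kl

A = np.abs(np.random.default_rng(0).normal(size=(6, 4)))
W = np.abs(np.random.default_rng(1).normal(size=(6, 2)))
H = np.abs(np.random.default_rng(2).normal(size=(2, 4)))
p = np.array([0.6, 0.4])
q = np.array([0.5, 0.5])
print(face_objective(A, W, H, p, q) > 0)  # True
```

Minimizing the KL term is what, per the abstract's theoretical guarantees, bounds the deviation between the original and concept-based predictions.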
arXiv Detail & Related papers (2025-10-13T17:44:45Z) - Interpretable Reward Modeling with Active Concept Bottlenecks [54.00085739303773]
We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. We formalize an active learning strategy that dynamically acquires the most informative concept labels.
arXiv Detail & Related papers (2025-07-07T06:26:04Z) - Quantifying Structure in CLIP Embeddings: A Statistical Framework for Concept Interpretation [13.206499575700219]
Concept-based approaches aim to identify human-understandable concepts within a model's internal representations. Current methods lack statistical rigor, making it challenging to validate identified concepts and compare different techniques. We propose a hypothesis testing framework that quantifies rotation-sensitive structures within the CLIP embedding space. Unlike existing approaches, it offers theoretical guarantees that discovered concepts represent robust, reproducible patterns.
arXiv Detail & Related papers (2025-06-16T02:43:11Z) - Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts [79.18608192761512]
Self-Explainable Models (SEMs) rely on Prototypical Concept Learning (PCL) to make their visual recognition processes more interpretable. We propose a Few-Shot Prototypical Concept Classification framework that mitigates two key challenges under low-data regimes: parametric imbalance and representation misalignment. Our approach consistently outperforms existing SEMs by a notable margin, with 4.2%-8.7% relative gains in 5-way 5-shot classification.
arXiv Detail & Related papers (2025-06-05T06:39:43Z) - Understanding Distributed Representations of Concepts in Deep Neural Networks without Supervision [25.449397570387802]
We propose an unsupervised method for discovering distributed representations of concepts by selecting a principal subset of neurons.
Our empirical findings demonstrate that instances with similar neuron activation states tend to share coherent concepts.
It can be utilized to identify unlabeled subclasses within data and to detect the causes of misclassifications.
arXiv Detail & Related papers (2023-12-28T07:33:51Z) - Deep Concept Removal [29.65899467379793]
We address the problem of concept removal in deep neural networks.
We propose a novel method based on adversarial linear classifiers trained on a concept dataset.
We also introduce an implicit gradient-based technique to tackle the challenges associated with adversarial training.
arXiv Detail & Related papers (2023-10-09T14:31:03Z) - Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations [44.033369364364084]
This work attempts to explain CNN image classifier predictions and intermediate layer representations in terms of human-understandable concepts. We take a bottom-up approach, identifying the directions from the structure of the feature space, collectively, without relying on supervision from concept labels. We make extensions to existing basis interpretability metrics and show that intermediate layer representations become more interpretable when transformed with the extracted bases.
arXiv Detail & Related papers (2023-03-19T00:37:19Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling. This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data. We prove that the core reason for its limitations is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Evaluation of Self-taught Learning-based Representations for Facial Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z) - Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.