Identifying Sub-networks in Neural Networks via Functionally Similar Representations
- URL: http://arxiv.org/abs/2410.16484v2
- Date: Sat, 01 Feb 2025 11:45:33 GMT
- Title: Identifying Sub-networks in Neural Networks via Functionally Similar Representations
- Authors: Tian Gao, Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Dennis Wei
- Abstract summary: We take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks.
Specifically, we explore a novel automated and task-agnostic approach based on the notion of functionally similar representations within neural networks.
We show the proposed approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
- Score: 41.028797971427124
- Abstract: Providing human-understandable insights into the inner workings of neural networks is an important step toward achieving more explainable and trustworthy AI. Existing approaches to such mechanistic interpretability typically require substantial prior knowledge and manual effort, with strategies tailored to specific tasks. In this work, we take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks. Specifically, we explore a novel automated and task-agnostic approach based on the notion of functionally similar representations within neural networks to identify similar and dissimilar layers, revealing potential sub-networks. We achieve this by proposing, for the first time to our knowledge, the use of Gromov-Wasserstein distance, which overcomes challenges posed by varying distributions and dimensionalities across intermediate representations, issues that complicate direct layer-to-layer comparisons. On algebraic, language, and vision tasks, we observe the emergence of sub-groups within neural network layers corresponding to functional abstractions. Through downstream applications of model compression and fine-tuning, we show the proposed approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
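The layer-comparison idea above is straightforward to prototype. Below is a minimal sketch (not the authors' released code) of comparing two layers of different dimensionality via the Gromov-Wasserstein distance, using the POT library (`pip install pot`); the activations are random stand-ins for real intermediate representations. Computing this distance for every pair of layers yields a layer-by-layer dissimilarity matrix whose block structure can hint at sub-networks.

```python
# Minimal sketch (not the authors' code): comparing two layers of a network
# via Gromov-Wasserstein distance, using the POT library (pip install pot).
# GW compares the *geometry* of each representation, so the two layers may
# have different dimensionalities.
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n = 200                           # the same n inputs pushed through both layers
X_a = rng.normal(size=(n, 512))   # stand-in for layer A activations (d=512)
X_b = rng.normal(size=(n, 64))    # stand-in for layer B activations (d=64)

# Intra-layer distance matrices: GW only needs these, never a cross-layer metric.
C_a = cdist(X_a, X_a)
C_a /= C_a.max()
C_b = cdist(X_b, X_b)
C_b /= C_b.max()

# Uniform weights over the n samples in each space.
p = np.full(n, 1.0 / n)
q = np.full(n, 1.0 / n)

# gromov_wasserstein2 returns the GW discrepancy (a scalar); small values
# suggest the two layers embed the inputs with similar relational geometry.
gw = ot.gromov.gromov_wasserstein2(C_a, C_b, p, q, loss_fun='square_loss')
print(f"GW distance between layer A and layer B: {gw:.4f}")
```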
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
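As a rough illustration of the chunking idea (with simplifying assumptions: an untrained GRU and a synthetic sequence with an imposed motif, where the paper trains RNNs on such sequences), recurring embedding states can be surfaced by clustering the hidden states:

```python
# Rough sketch of the chunking idea (not the paper's code): run an RNN over
# a sequence with imposed regularities and look for recurring hidden states.
import numpy as np
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)
vocab, hidden = 4, 16
# Synthetic sequence built from a repeated motif ("chunk") plus noise tokens.
motif = [0, 1, 2]
rng = np.random.default_rng(0)
seq = []
for _ in range(200):
    seq += motif if rng.random() < 0.7 else [int(rng.integers(vocab))]
x = torch.nn.functional.one_hot(torch.tensor(seq), vocab).float().unsqueeze(0)

gru = torch.nn.GRU(vocab, hidden, batch_first=True)
with torch.no_grad():
    states, _ = gru(x)            # (1, T, hidden): hidden state at every step
H = states.squeeze(0).numpy()

# Recurring embedding states show up as tight clusters in hidden-state space.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(H)
for c in range(5):
    print(f"cluster {c}: {np.sum(km.labels_ == c)} time steps")
```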
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks [5.791414814676125]
This paper develops an innovative method that enables neural networks to generate and utilize knowledge graphs.
Our approach eschews traditional dependencies on word embedding models, mining concepts from neural networks and directly aligning them with human knowledge.
Experiments show that our method consistently captures network-generated concepts that align closely with human knowledge and can even uncover new, useful concepts not previously identified by humans.
arXiv Detail & Related papers (2024-04-23T20:33:17Z)
- Finding Concept Representations in Neural Networks with Self-Organizing Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how the activations of a neural network's layers correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
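A hedged sketch of that methodology, assuming the `minisom` package and synthetic activations in place of a real layer:

```python
# Illustrative sketch (not the paper's code): map layer activations onto a
# self-organizing map, then score a concept by the relative entropy (KL
# divergence) between its hit map and the overall hit map.
# Assumes the minisom package; activations here are synthetic stand-ins.
import numpy as np
from minisom import MiniSom
from scipy.stats import entropy

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 32))          # activations of one layer
concept_mask = rng.random(1000) < 0.1       # inputs exhibiting the concept
acts[concept_mask] += 1.5                   # give the concept some structure

som = MiniSom(8, 8, 32, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(acts, 2000)

def hit_map(samples):
    """Distribution of best-matching SOM units over a set of samples."""
    h = np.zeros((8, 8))
    for s in samples:
        h[som.winner(s)] += 1
    return (h / h.sum()).ravel() + 1e-12    # smooth to keep KL finite

# High relative entropy => the concept occupies a distinctive SOM region,
# i.e. the layer carries a localized representation of it.
kl = entropy(hit_map(acts[concept_mask]), hit_map(acts))
print(f"relative entropy of concept activation map: {kl:.3f}")
```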
arXiv Detail & Related papers (2023-12-10T12:10:34Z)
- DISCOVER: Making Vision Networks Interpretable via Competition and Dissection [11.028520416752325]
This work contributes to post-hoc interpretability, and specifically Network Dissection.
Our goal is to present a framework that makes it easier to discover the individual functionality of each neuron in a network trained on a vision task.
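The dissection step itself is easy to illustrate. A minimal sketch of the classic Network Dissection scoring (not the DISCOVER framework's code), with synthetic activations and concept masks standing in for real data:

```python
# Minimal sketch of the Network Dissection scoring step: threshold a unit's
# activation maps and measure IoU against ground-truth concept masks.
# Activations and masks here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, h, w = 50, 16, 16
acts = rng.normal(size=(n, h, w))        # one conv unit's maps over n images
concept = rng.random((n, h, w)) < 0.2    # binary concept segmentation masks

# Per-unit threshold: top-0.5% activation quantile over the whole dataset.
tau = np.quantile(acts, 0.995)
unit_mask = acts > tau

inter = np.logical_and(unit_mask, concept).sum()
union = np.logical_or(unit_mask, concept).sum()
print(f"unit-concept IoU: {inter / union:.4f}")  # high IoU => unit 'detects' concept
```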
arXiv Detail & Related papers (2023-10-07T21:57:23Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as a network's performance discriminants.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
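A small sketch of how such measures can be computed, with the caveat that the estimators here (mean absolute cosine similarity for quasi-orthogonality, participation ratio for intrinsic dimension) are common choices rather than necessarily the paper's exact ones:

```python
# Hedged sketch (not the paper's code): quasi-orthogonality as the mean
# absolute cosine similarity between feature vectors, and intrinsic dimension
# via the participation ratio of the feature covariance spectrum.
# Features here are random stand-ins for a layer's feature space.
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(500, 128))          # feature vectors from some layer

# Quasi-orthogonality: near-zero mean |cosine| means the vectors are
# almost mutually orthogonal, as expected in high dimensions.
Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
cos = Fn @ Fn.T
off_diag = cos[~np.eye(len(F), dtype=bool)]
print(f"mean |cosine| between features: {np.abs(off_diag).mean():.4f}")

# Intrinsic dimension (participation ratio): (sum lambda_i)^2 / sum lambda_i^2
# over covariance eigenvalues; one of several common estimators.
lam = np.linalg.eigvalsh(np.cov(F.T))
pr = lam.sum() ** 2 / (lam ** 2).sum()
print(f"participation-ratio intrinsic dimension: {pr:.1f}")
```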
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
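A toy sketch of this scheme (illustrative only; the module and parameter names below are made up): nodes arranged as a complete DAG, with each directed edge carrying a learnable scalar weight trained by ordinary backpropagation alongside the node operations.

```python
# Toy sketch of learnable connectivity (not the paper's code): a complete DAG
# of nodes whose edges carry learnable scalar weights, so the connection
# pattern is learned differentiably along with the node operations.
import torch
import torch.nn as nn

class CompleteGraphNet(nn.Module):
    def __init__(self, dim: int, n_nodes: int = 4):
        super().__init__()
        self.nodes = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_nodes)
        )
        # One learnable logit per directed edge (source row, target column);
        # row 0 is the input node. A sigmoid squashes each logit into a soft
        # connection strength in (0, 1).
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes + 1, n_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = [x]  # outs[0] is the input node's output
        for j, node in enumerate(self.nodes):
            # Weighted sum over all earlier nodes: the edge weights decide
            # how strongly each predecessor feeds node j.
            w = torch.sigmoid(self.edge_logits[: j + 1, j])
            agg = sum(wi * oi for wi, oi in zip(w, outs))
            outs.append(node(agg))
        return outs[-1]

net = CompleteGraphNet(dim=8)
y = net(torch.randn(2, 8))
print(y.shape)  # torch.Size([2, 8])
```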
arXiv Detail & Related papers (2020-08-19T04:53:31Z)