Representation Learning on a Random Lattice
- URL: http://arxiv.org/abs/2504.20197v1
- Date: Mon, 28 Apr 2025 19:01:36 GMT
- Title: Representation Learning on a Random Lattice
- Authors: Aryeh Brill
- Abstract summary: Decomposing a deep neural network's learned representations into interpretable features could greatly enhance its safety and reliability. We motivate a model of a generic data distribution as a random lattice and analyze its properties using percolation theory. Learned features are categorized into context, component, and surface features.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decomposing a deep neural network's learned representations into interpretable features could greatly enhance its safety and reliability. To better understand features, we adopt a geometric perspective, viewing them as a learned coordinate system for mapping an embedded data distribution. We motivate a model of a generic data distribution as a random lattice and analyze its properties using percolation theory. Learned features are categorized into context, component, and surface features. The model is qualitatively consistent with recent findings in mechanistic interpretability and suggests directions for future research.
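The abstract ships no code, but the percolation framing is easy to experiment with. Below is a minimal sketch, assuming a random geometric graph over uniformly scattered points as a stand-in for the paper's random lattice (the paper's actual construction may differ): once the connection radius passes a critical value, a giant connected cluster appears, the threshold behavior percolation theory characterizes.

```python
# Minimal sketch (assumption: a random geometric graph stands in for the
# paper's random lattice). Points play the role of embedded data; two points
# are linked if they lie within `radius` of each other, and we track the
# largest connected cluster as the radius grows.
import numpy as np
from scipy.spatial import cKDTree

def largest_cluster_fraction(points: np.ndarray, radius: float) -> float:
    """Fraction of points in the largest cluster, via plain union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # cKDTree.query_pairs yields every pair closer than `radius`.
    for i, j in cKDTree(points).query_pairs(radius):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    roots = [find(i) for i in range(n)]
    return max(np.bincount(roots)) / n

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))  # uniformly scattered "lattice" sites
for r in (0.01, 0.02, 0.03, 0.05):
    print(f"r={r:.2f}  largest cluster fraction: {largest_cluster_fraction(pts, r):.2f}")
```

In this 2D setting the giant cluster is known to emerge when the mean degree nπr² reaches roughly 4.5, i.e. near r ≈ 0.027 for n = 2000, which the printed radii bracket.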
Related papers
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Geometric Prior Guided Feature Representation Learning for Long-Tailed Classification [47.09355487357069]
We propose to leverage the geometric information of the feature distribution of the well-represented head class to guide the model to learn the underlying distribution of the tail class.
It aims to make the perturbed features cover the underlying distribution of the tail class as much as possible, thus improving the model's generalization performance in the test domain.
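As a rough illustration of that idea (not the paper's exact construction; the function and parameter names below are hypothetical), one can take the head class's feature covariance as the geometric prior and sample perturbations from it around the few available tail features:

```python
# Hedged sketch: borrow the head class's covariance as a geometric prior and
# sample perturbations from it around scarce tail features.
import numpy as np

def perturb_tail_features(tail_feats, head_feats, n_new, rng):
    """Synthesize tail-class features around real ones, spread according to
    the head class's covariance."""
    cov = np.cov(head_feats, rowvar=False)             # head-class geometry
    anchors = tail_feats[rng.integers(len(tail_feats), size=n_new)]
    noise = rng.multivariate_normal(np.zeros(tail_feats.shape[1]), cov, size=n_new)
    return anchors + noise

rng = np.random.default_rng(0)
head = rng.normal(size=(500, 8))       # abundant head-class features
tail = rng.normal(size=(5, 8)) + 3.0   # only five tail-class features
augmented = perturb_tail_features(tail, head, n_new=100, rng=rng)
print(augmented.shape)                 # (100, 8)
```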
arXiv Detail & Related papers (2024-01-21T09:16:29Z)
- Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency [2.2237337682863125]
This study examines the generalization of feature attributions across various deep learning architectures.
We aim to develop a more coherent and consistent understanding of feature attributions.
Our findings highlight the potential for harmonized feature attribution methods to improve interpretability and foster trust in machine learning applications.
arXiv Detail & Related papers (2023-07-05T09:46:41Z)
- On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
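To make the inducing-point mechanism concrete, here is a generic sparse-GP regression sketch (a DTC/SoR predictive mean, not IGN itself): a small set of m inducing points stands in for the n training points, so the linear solve is only m × m. IGN would additionally learn the inducing points and the feature map by gradient descent; this sketch fixes Z on a grid and uses the raw inputs as features.

```python
# Generic sparse-GP sketch (DTC/SoR predictive mean), not IGN itself: m
# inducing points Z summarize n training points.
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between row sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sparse_gp_mean(X, y, Z, Xq, noise=0.1):
    """Predictive mean at query points Xq using inducing points Z."""
    Kzx, Kzz, Kqz = rbf(Z, X), rbf(Z, Z), rbf(Xq, Z)
    A = noise**2 * Kzz + Kzx @ Kzx.T      # m x m system instead of n x n
    return Kqz @ np.linalg.solve(A, Kzx @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Z = np.linspace(-3, 3, 10)[:, None]       # 10 inducing points on a grid
Xq = np.array([[0.0], [1.5]])
print(sparse_gp_mean(X, y, Z, Xq))        # close to sin(0) and sin(1.5)
```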
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Bending Graphs: Hierarchical Shape Matching using Gated Optimal Transport [80.64516377977183]
Shape matching has been a long-studied problem for the computer graphics and vision community.
We investigate a hierarchical learning design, into which we incorporate local patch-level information and global shape-level structures.
We propose a novel optimal transport solver by recurrently updating features on non-confident nodes to learn globally consistent correspondences between the shapes.
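The summary doesn't spell out the solver, but its matching core is entropic optimal transport. A generic Sinkhorn sketch (not the paper's recurrent, gated variant) recovering soft correspondences between two sets of node features:

```python
# Generic entropic-OT (Sinkhorn) sketch: the matching core such solvers build
# on, without the paper's recurrent feature updates or gating.
import numpy as np

def sinkhorn(cost, reg=0.05, iters=200):
    """Soft correspondence matrix for a cost matrix, uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / (reg * cost.max()))   # rescale to avoid underflow
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        u = (1.0 / n) / (K @ v)
        v = (1.0 / m) / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(30, 16))          # node features of shape A
perm = rng.permutation(30)                   # ground-truth correspondence
feats_b = feats_a[perm] + 0.01 * rng.normal(size=(30, 16))
cost = ((feats_a[:, None] - feats_b[None]) ** 2).sum(-1)
P = sinkhorn(cost)
print((P.argmax(1) == np.argsort(perm)).mean())  # fraction recovered, ~1.0
```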
arXiv Detail & Related papers (2022-02-03T11:41:46Z)
- Convolutional Motif Kernel Networks [1.104960878651584]
We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
Our proposed method can be utilized on DNA and protein sequences.
arXiv Detail & Related papers (2021-11-03T15:06:09Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., a feedforward neural net) as the lower model, which takes features as input and outputs predicted labels; and 2) a graph neural network as the upper model, which learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
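A minimal sketch of the upper model's computation, under the simplifying assumption of a single untrained mean-aggregation step over the feature-data bipartite graph (the paper trains a full GNN; the names below are hypothetical):

```python
# Skeleton of one message-passing step over a feature-data bipartite graph.
# A feature column unseen at training time still gets an embedding, because
# the aggregation only needs the graph structure.
import numpy as np

def feature_embeddings(X, instance_emb):
    """Each feature node aggregates embeddings of instances where it is active."""
    n, d = X.shape
    embs = np.zeros((d, instance_emb.shape[1]))
    for j in range(d):
        active = X[:, j] != 0
        if active.any():
            embs[j] = instance_emb[active].mean(axis=0)
    return embs

rng = np.random.default_rng(0)
X_train = (rng.random((100, 6)) < 0.3).astype(float)  # 6 features seen in training
inst_emb = rng.normal(size=(100, 16))                 # lower-model instance states
new_col = (rng.random((100, 1)) < 0.3).astype(float)  # feature unseen until test time
emb_all = feature_embeddings(np.hstack([X_train, new_col]), inst_emb)
print(emb_all.shape)  # (7, 16): row 6 is the extrapolated new-feature embedding
```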
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning [43.504548777955854]
We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning with ReLU networks learns the desired sparse features if proper augmentations are adopted.
arXiv Detail & Related papers (2021-05-31T16:42:09Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Probabilistic Graph Attention Network with Conditional Kernels for Pixel-Wise Prediction [158.88345945211185]
We present a novel approach that advances the state of the art on pixel-level prediction in a fundamental aspect, namely structured multi-scale feature learning and fusion.
We propose a probabilistic graph attention network structure based on a novel Attention-Gated Conditional Random Fields (AG-CRFs) model for learning and fusing multi-scale representations in a principled manner.
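As a toy illustration of the gating component only (the CRF coupling between scales and all learned AG-CRF parameters are omitted; names are hypothetical), each scale's feature map can be scored per pixel by a sigmoid gate and the maps fused by normalized gate weight:

```python
# Toy sketch of attention-gated multi-scale fusion: each scale's map gets a
# per-pixel sigmoid gate; the fused map is the gate-weighted sum.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(maps, gate_weights):
    """maps: list of (H, W, C) arrays at a shared resolution;
    gate_weights: one (C,) scoring vector per scale."""
    gates = [sigmoid(m @ w) for m, w in zip(maps, gate_weights)]  # (H, W) each
    total = np.sum(gates, axis=0) + 1e-8                          # normalizer
    return sum(g[..., None] * m for g, m in zip(gates, maps)) / total[..., None]

rng = np.random.default_rng(0)
scales = [rng.normal(size=(32, 32, 8)) for _ in range(3)]  # 3 pre-resized scales
weights = [rng.normal(size=8) for _ in range(3)]
fused = gated_fusion(scales, weights)
print(fused.shape)  # (32, 32, 8)
```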
arXiv Detail & Related papers (2021-01-08T04:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.