Internal Representations of Vision Models Through the Lens of Frames on
Data Manifolds
- URL: http://arxiv.org/abs/2211.10558v2
- Date: Wed, 6 Dec 2023 17:36:25 GMT
- Title: Internal Representations of Vision Models Through the Lens of Frames on
Data Manifolds
- Authors: Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey, Tegan
Emerson
- Abstract summary: We present a new approach to studying such representations inspired by the idea of a frame on the tangent bundle of a manifold.
Our construction, which we call a neural frame, is formed by assembling a set of vectors representing specific types of perturbations of a data point.
Using neural frames, we make observations about the way that models process, layer-by-layer, specific modes of variation within a small neighborhood of a datapoint.
- Score: 8.67467876089153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the last five years have seen considerable progress in understanding
the internal representations of deep learning models, many questions remain.
This is especially true when trying to understand the impact of model design
choices, such as model architecture or training algorithm, on hidden
representation geometry and dynamics. In this work we present a new approach to
studying such representations inspired by the idea of a frame on the tangent
bundle of a manifold. Our construction, which we call a neural frame, is formed
by assembling a set of vectors representing specific types of perturbations of
a data point, for example infinitesimal augmentations, noise perturbations, or
perturbations produced by a generative model, and studying how these change as
they pass through a network. Using neural frames, we make observations about
the way that models process, layer-by-layer, specific modes of variation within
a small neighborhood of a datapoint. Our results provide new perspectives on a
number of phenomena, such as the manner in which training with augmentation
produces model invariance or the proposed trade-off between adversarial
training and model generalization.
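The construction the abstract describes can be sketched concretely: assemble a small set of perturbation directions at a data point, push the point and its perturbed copies through the network layer by layer, and record the finite-difference vectors at each layer. The tiny ReLU network, Gaussian-noise direction, and shift-based "augmentation" direction below are illustrative assumptions, not the paper's actual models or perturbation choices.

```python
# Illustrative sketch of a "neural frame": a set of perturbation vectors
# around a data point, tracked through a network layer by layer.
import numpy as np

rng = np.random.default_rng(0)

def layer(W, b, h):
    """One ReLU layer; stands in for any hidden layer of a vision model."""
    return np.maximum(W @ h + b, 0.0)

def neural_frame(x, perturbations, layers, eps=1e-3):
    """At each layer, collect the finite-difference vectors between the
    hidden representation of x and of each perturbed copy x + eps*v."""
    frames = []
    h, hs = x, [x + eps * v for v in perturbations]
    for W, b in layers:
        h = layer(W, b, h)
        hs = [layer(W, b, hp) for hp in hs]
        frames.append(np.stack([(hp - h) / eps for hp in hs]))
    return frames

d = 8
x = rng.normal(size=d)
# Two frame directions: a noise perturbation and a toy "augmentation" direction.
perturbs = [rng.normal(size=d), np.roll(x, 1) - x]
layers = [(rng.normal(size=(d, d)) / np.sqrt(d), np.zeros(d)) for _ in range(3)]

for i, F in enumerate(neural_frame(x, perturbs, layers)):
    print(f"layer {i}: frame shape {F.shape}")
```

Comparing how the frame vectors stretch, shrink, or align across layers is the kind of layer-by-layer observation the paper makes, e.g. an augmentation direction collapsing in deeper layers of an augmentation-trained model.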
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Masked Modeling for Self-supervised Representation Learning on Vision and Beyond [69.64364187449773]
Masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training.
We elaborate on the details of techniques within masked modeling, including diverse masking strategies, recovering targets, network architectures, and more.
We conclude by discussing the limitations of current techniques and point out several potential avenues for advancing masked modeling research.
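The core recipe this survey covers, masking a random fraction of the input and predicting the masked parts, can be sketched as below. The random zero-masking, the 0.75 ratio, and the trivial stand-in "prediction" are illustrative assumptions, not any specific method from the survey.

```python
# Illustrative sketch of masked modeling: hide random input entries and
# score a reconstruction only on the hidden (masked) positions.
import numpy as np

rng = np.random.default_rng(0)

def mask_input(x, ratio=0.75):
    """Zero out roughly `ratio` of the entries; return masked copy and mask."""
    mask = rng.random(x.shape) < ratio
    return np.where(mask, 0.0, x), mask

def reconstruction_loss(pred, x, mask):
    """Mean squared error computed only on the masked positions."""
    return float(np.mean((pred[mask] - x[mask]) ** 2))

x = rng.normal(size=16)
xm, mask = mask_input(x)
pred = xm  # a trivial "reconstruction" that leaves masked entries at zero
print(reconstruction_loss(pred, x, mask))
```

In practice `pred` would come from a trained encoder-decoder; scoring only the masked positions is what makes the prediction task non-trivial.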
arXiv Detail & Related papers (2023-12-31T12:03:21Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis (TDA) provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Unifying Generative Models with GFlowNets [85.38102320953551]
We present a short note on the connections between existing deep generative models and the GFlowNet framework, shedding light on their overlapping traits.
This provides a means for unifying training and inference algorithms, and provides a route to construct an agglomeration of generative models.
arXiv Detail & Related papers (2022-09-06T15:52:51Z)
- Geometric and Topological Inference for Deep Representations of Complex Networks [13.173307471333619]
We present a class of statistics that emphasize the topology as well as the geometry of representations.
We evaluate these statistics in terms of the sensitivity and specificity that they afford when used for model selection.
These new methods enable brain and computer scientists to visualize the dynamic representational transformations learned by brains and models.
arXiv Detail & Related papers (2022-03-10T17:14:14Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes [42.57604833160855]
We present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks.
Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks.
arXiv Detail & Related papers (2020-09-15T05:59:00Z)
- Network Bending: Expressive Manipulation of Deep Generative Models [0.2062593640149624]
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending.
We show how it allows for the direct manipulation of semantically meaningful aspects of the generative process as well as allowing for a broad range of expressive outcomes.
arXiv Detail & Related papers (2020-05-25T21:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.