Geometric and Topological Inference for Deep Representations of Complex Networks
- URL: http://arxiv.org/abs/2203.05488v2
- Date: Sat, 12 Mar 2022 19:25:34 GMT
- Title: Geometric and Topological Inference for Deep Representations of Complex Networks
- Authors: Baihan Lin
- Abstract summary: We present a class of statistics that emphasize the topology as well as the geometry of representations.
We evaluate these statistics in terms of the sensitivity and specificity that they afford when used for model selection.
These new methods enable brain and computer scientists to visualize the dynamic representational transformations learned by brains and models.
- Score: 13.173307471333619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the deep representations of complex networks is an important
step toward building interpretable and trustworthy machine learning applications in
the age of the internet. Global surrogate models that approximate the predictions
of a black-box model (e.g., an artificial or biological neural network) are commonly
used to provide theoretical insight into model interpretability.
In order to evaluate how well a surrogate model can account for the
representation in another model, we need to develop inference methods for model
comparison. Previous studies have compared models and brains in terms of their
representational geometries (characterized by the matrix of distances between
representations of the input patterns in a model layer or cortical area). In
this study, we propose to explore these summary statistical descriptions of
representations in models and brains as part of a broader class of statistics
that emphasize the topology as well as the geometry of representations. The
topological summary statistics build on topological data analysis (TDA) and
other graph-based methods. We evaluate these statistics in terms of the
sensitivity and specificity that they afford when used for model selection,
with the goal of relating different neural network models to each other and
making inferences about the computational mechanism that might best account for a
black-box representation. These new methods enable brain and computer
scientists to visualize the dynamic representational transformations learned by
brains and models, and to perform model-comparative statistical inference.
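To make the geometric and topological comparison concrete, the following is a minimal, hedged sketch (not the paper's code): a representational distance matrix (RDM) captures the geometry of a model's responses, a Spearman rank correlation between two models' RDMs gives a representational-similarity score, and the minimum-spanning-tree edge lengths of the distance graph serve as a simple graph-based topological summary (they coincide with the death times of 0-dimensional Vietoris-Rips persistence). All function names and the toy activations are placeholders introduced here for illustration.

```python
# Minimal sketch: compare two models by representational geometry and a
# simple graph-based topological summary. NOT the paper's implementation.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from scipy.sparse.csgraph import minimum_spanning_tree

def representational_distance_matrix(activations):
    """Geometry: pairwise correlation distances between stimulus representations."""
    return squareform(pdist(activations, metric="correlation"))

def geometry_similarity(rdm_a, rdm_b):
    """Representational similarity analysis: rank-correlate the RDM upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

def topological_summary(rdm, n_bins=20):
    """Topology: histogram of minimum-spanning-tree edge lengths of the distance
    graph (these equal the finite death times of 0-dimensional Rips persistence)."""
    mst = minimum_spanning_tree(rdm).toarray()
    edges = mst[mst > 0]
    hist, _ = np.histogram(edges, bins=n_bins, range=(0.0, 2.0), density=True)
    return hist

# Toy usage: two hypothetical models' activations for the same 50 stimuli.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(50, 128))            # model A: 50 stimuli x 128 units
acts_b = acts_a @ rng.normal(size=(128, 64))   # model B: a random linear readout of A

rdm_a = representational_distance_matrix(acts_a)
rdm_b = representational_distance_matrix(acts_b)
print("geometry (Spearman RSA):", round(float(geometry_similarity(rdm_a, rdm_b)), 3))
print("topology (L1 gap between MST-edge histograms):",
      round(float(np.abs(topological_summary(rdm_a) - topological_summary(rdm_b)).sum()), 3))
```

In the framework described above, richer TDA statistics (e.g., persistence diagrams in higher dimensions or other graph-based descriptors) would take the place of the MST histogram; the sketch only illustrates how geometric and topological summaries provide complementary statistics for model comparison.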
Related papers
- Consistent estimation of generative model representations in the data kernel perspective space [13.099029073152257]
Generative models, such as large language models and text-to-image diffusion models, produce relevant information when presented with a query.
Different models may produce different information when presented with the same query.
We present novel theoretical results for embedding-based representations of generative models in the context of a set of queries.
arXiv Detail & Related papers (2024-09-25T19:35:58Z)
- Towards Compositional Interpretability for XAI [3.3768167170511587]
We present an approach to defining AI models and their interpretability based on category theory.
We compare a wide range of AI models as compositional models.
We find that what makes the standard 'intrinsically interpretable' models so transparent is brought out most clearly diagrammatically.
arXiv Detail & Related papers (2024-06-25T14:27:03Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification (a minimal illustrative sketch of this kind of analysis appears after this list).
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and to identify which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Contrastive Topographic Models: Energy-based density models applied to the understanding of sensory coding and cortical topography [9.555150216958246]
We address the problem of building theoretical models that help elucidate the function of the visual brain at computational/algorithmic and structural/mechanistic levels.
arXiv Detail & Related papers (2020-11-05T16:36:43Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
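Following up on the forward reference in the entry on the topology of convolutional network activations, the sketch below is a rough, hedged illustration of that general technique rather than the cited paper's pipeline: it computes Vietoris-Rips persistence diagrams for a point cloud of layer activations with the ripser.py package and reduces each diagram to a scalar summary. The `activations` array is a hypothetical stand-in for features extracted from a trained CNN.

```python
# Hypothetical sketch: persistent homology of CNN layer activations.
# Requires: pip install ripser numpy
# `activations` stands in for real features (e.g., pooled conv-layer outputs
# for a batch of images); it is NOT data from the cited paper.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)
activations = rng.normal(size=(200, 64))   # 200 images x 64 activation units (placeholder)

# Vietoris-Rips persistence up to dimension 1 (connected components and loops)
dgms = ripser(activations, maxdim=1)["dgms"]

# A simple scalar summary per homology dimension: total finite persistence
for dim, dgm in enumerate(dgms):
    finite = dgm[np.isfinite(dgm[:, 1])]
    total_persistence = float((finite[:, 1] - finite[:, 0]).sum())
    print(f"H{dim}: {len(dgm)} features, total persistence = {total_persistence:.3f}")
```

In practice one would replace the placeholder array with real activations and compare the resulting summaries across layers or models.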
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.