Sheaves as a Framework for Understanding and Interpreting Model Fit
- URL: http://arxiv.org/abs/2105.10414v1
- Date: Fri, 21 May 2021 15:34:09 GMT
- Title: Sheaves as a Framework for Understanding and Interpreting Model Fit
- Authors: Henry Kvinge, Brett Jefferson, Cliff Joslyn, Emilie Purvine
- Abstract summary: We argue that sheaves can provide a natural framework to analyze how well a statistical model fits at the local level.
The sheaf-based approach is general enough to be useful in a range of applications.
- Score: 2.867517731896504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As data grows in size and complexity, finding frameworks which aid in
interpretation and analysis has become critical. This is particularly true when
data comes from complex systems where extensive structure is available, but
must be drawn from peripheral sources. In this paper we argue that in such
situations, sheaves can provide a natural framework to analyze how well a
statistical model fits at the local level (that is, on subsets of related
datapoints) vs the global level (on all the data). The sheaf-based approach
that we propose is general enough to be useful in a range of
applications, from analyzing sensor networks to understanding the feature space
of a deep learning model.
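The local-versus-global comparison the abstract describes can be illustrated with a toy calculation: fit one model to all the data, fit separate models to overlapping subsets of related datapoints, and compare fit quality on each subset. The data, subsets, and gap score below are illustrative inventions for the sketch, not the paper's sheaf construction:

```python
import random

random.seed(0)

# Toy data: most points follow y = 2x + noise, but the last subset
# follows y = -x + noise, so no single line fits everywhere.
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(20)]
data += [(x, -x + random.gauss(0, 0.1)) for x in range(20, 30)]

def fit_line(points):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

def mse(points, model):
    slope, intercept = model
    return sum((y - (slope * x + intercept)) ** 2 for x, y in points) / len(points)

global_model = fit_line(data)

# Overlapping "local" subsets of related datapoints.
subsets = {"A": data[:15], "B": data[10:25], "C": data[20:]}
gaps = {}
for name, subset in subsets.items():
    local_model = fit_line(subset)
    # Nonnegative by construction: the local fit minimizes MSE on its subset.
    gaps[name] = mse(subset, global_model) - mse(subset, local_model)
    print(f"subset {name}: local-vs-global fit gap = {gaps[name]:.3f}")
```

A large gap on a subset flags a region where the global model disagrees with the locally best model; the sheaf machinery in the paper organizes such local records and their consistency on overlaps.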
Related papers
- Logifold: A Geometrical Foundation of Ensemble Machine Learning [0.0]
We present a local-to-global and measure-theoretical approach to understanding datasets.
The core idea is to formulate a logifold structure and to interpret network models with restricted domains as local charts of datasets.
arXiv Detail & Related papers (2024-07-23T04:47:58Z)
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Surprisal Driven $k$-NN for Robust and Interpretable Nonparametric Learning [1.4293924404819704]
We shed new light on the traditional nearest neighbors algorithm from the perspective of information theory.
We propose a robust and interpretable framework for tasks such as classification, regression, density estimation, and anomaly detection using a single model.
Our work showcases the architecture's versatility by achieving state-of-the-art results in classification and anomaly detection.
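One way to read "nearest neighbors from the perspective of information theory" is to report a surprisal (-log p) alongside each k-NN prediction. The weighting scheme, data, and function below are illustrative, not the paper's algorithm:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Return (label, surprisal in nats) from the k nearest neighbours.

    train: list of ((x, y), label) pairs; query: an (x, y) point.
    """
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    label, count = votes.most_common(1)[0]
    p = count / k                 # empirical neighbour-vote probability
    return label, -math.log(p)    # surprisal is 0 when all neighbours agree

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]

print(knn_predict(train, (0.2, 0.2)))  # unanimous neighbourhood -> surprisal 0
print(knn_predict(train, (2.9, 2.9)))  # mixed neighbourhood -> surprisal > 0
```

High surprisal marks low-confidence or anomalous queries, which hints at how a single model could serve classification and anomaly detection at once.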
arXiv Detail & Related papers (2023-11-17T00:35:38Z)
- On the Generalization of Learned Structured Representations [5.1398743023989555]
We study methods that learn, with little or no supervision, representations of unstructured data that capture its hidden structure.
The second part of this thesis focuses on object-centric representations, which capture the compositional structure of the input in terms of symbol-like entities.
arXiv Detail & Related papers (2023-04-25T17:14:36Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Bending Graphs: Hierarchical Shape Matching using Gated Optimal Transport [80.64516377977183]
Shape matching has been a long-studied problem for the computer graphics and vision community.
We investigate a hierarchical learning design, to which we incorporate local patch-level information and global shape-level structures.
We propose a novel optimal transport solver by recurrently updating features on non-confident nodes to learn globally consistent correspondences between the shapes.
arXiv Detail & Related papers (2022-02-03T11:41:46Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Capturing Structural Locality in Non-parametric Language Models [85.94669097485992]
We propose a simple yet effective approach for adding locality information into non-parametric language models.
Experiments on two different domains, Java source code and Wikipedia text, demonstrate that locality features improve model efficacy.
arXiv Detail & Related papers (2021-10-06T15:53:38Z)
- A Topological-Framework to Improve Analysis of Machine Learning Model Performance [5.3893373617126565]
We propose a framework for evaluating machine learning models in which a dataset is treated as a "space" on which a model operates.
We describe a topological data structure, presheaves, which offer a convenient way to store and analyze model performance between different subpopulations.
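The presheaf idea can be sketched in a few lines: to each subpopulation of the data, assign the model's per-example correctness record, with restriction to a smaller subpopulation simply restricting that record. The data, subpopulations, and names below are made up for the sketch, not the paper's definitions:

```python
# Per-example correctness of some model on examples 0..7 (made-up values).
correct = {0: True, 1: True, 2: False, 3: True,
           4: False, 5: False, 6: True, 7: True}

def section(subpop):
    """The 'section' over a subpopulation: its correctness record."""
    return {i: correct[i] for i in subpop}

def accuracy(record):
    return sum(record.values()) / len(record)

U = {0, 1, 2, 3, 4, 5}   # one subpopulation
V = {3, 4, 5, 6, 7}      # an overlapping one

# Restrictions are consistent: restricting the section over U to the
# overlap agrees with restricting the section over V to the overlap.
overlap = U & V
assert {i: section(U)[i] for i in overlap} == {i: section(V)[i] for i in overlap}

print(f"accuracy on U: {accuracy(section(U)):.2f}")
print(f"accuracy on V: {accuracy(section(V)):.2f}")
print(f"accuracy on overlap: {accuracy(section(overlap)):.2f}")
```

Storing performance this way makes it easy to compare subpopulations and their intersections, which is the convenience the abstract claims for presheaves.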
arXiv Detail & Related papers (2021-07-09T23:11:13Z)
- Global Context Aware RCNN for Object Detection [1.1939762265857436]
We propose a novel end-to-end trainable framework, called Global Context Aware (GCA) RCNN.
The core component of GCA framework is a context aware mechanism, in which both global feature pyramid and attention strategies are used for feature extraction and feature refinement.
In the end, we also present a lightweight version of our method, which only slightly increases model complexity and computational burden.
arXiv Detail & Related papers (2020-12-04T14:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.