Abstraction Alignment: Comparing Model and Human Conceptual Relationships
- URL: http://arxiv.org/abs/2407.12543v1
- Date: Wed, 17 Jul 2024 13:27:26 GMT
- Title: Abstraction Alignment: Comparing Model and Human Conceptual Relationships
- Authors: Angie Boggust, Hyemin Bang, Hendrik Strobelt, Arvind Satyanarayan
- Abstract summary: We introduce abstraction alignment, a methodology to measure the agreement between a model's learned abstraction and the expected human abstraction.
In evaluation tasks, abstraction alignment provides a deeper understanding of model behavior and dataset content.
- Score: 26.503178592074757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstraction -- the process of generalizing specific examples into broad reusable patterns -- is central to how people efficiently process and store information and apply their knowledge to new data. Promisingly, research has shown that ML models learn representations that span levels of abstraction, from specific concepts like "bolo tie" and "car tire" to more general concepts like "CEO" and "model". However, existing techniques analyze these representations in isolation, treating learned concepts as independent artifacts rather than an interconnected web of abstraction. As a result, although we can identify the concepts a model uses to produce its output, it is difficult to assess if it has learned a human-aligned abstraction of the concepts that will generalize to new data. To address this gap, we introduce abstraction alignment, a methodology to measure the agreement between a model's learned abstraction and the expected human abstraction. We quantify abstraction alignment by comparing model outputs against a human abstraction graph, such as linguistic relationships or medical disease hierarchies. In evaluation tasks interpreting image models, benchmarking language models, and analyzing medical datasets, abstraction alignment provides a deeper understanding of model behavior and dataset content, differentiating errors based on their agreement with human knowledge, expanding the verbosity of current model quality metrics, and revealing ways to improve existing human abstractions.
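Although the abstract does not spell out the exact metric, its core idea of comparing model outputs against a human abstraction graph can be illustrated with a small sketch. The toy hierarchy, concept names, and the level-averaged agreement score below are assumptions made for illustration only, not the authors' formulation: the sketch simply propagates a model's predicted probabilities up a human concept hierarchy and measures how much probability mass stays under the true label's ancestors.

```python
# Hypothetical sketch of the idea behind abstraction alignment: propagate a model's
# output probabilities up a human concept hierarchy and check how much probability
# mass falls within the true label's subtree at each level of abstraction.
# The hierarchy, class names, and scoring rule are illustrative assumptions.

from collections import defaultdict

# Toy human abstraction graph: child concept -> parent concept.
PARENT = {
    "bolo tie": "clothing",
    "bow tie": "clothing",
    "car tire": "vehicle part",
    "hubcap": "vehicle part",
    "clothing": "object",
    "vehicle part": "object",
}

def ancestors(concept):
    """Return the chain of ancestors of a concept in the hierarchy."""
    chain = []
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def propagate(leaf_probs):
    """Sum leaf-level probabilities into every internal node of the hierarchy."""
    node_mass = defaultdict(float, leaf_probs)
    for leaf, p in leaf_probs.items():
        for anc in ancestors(leaf):
            node_mass[anc] += p
    return node_mass

def alignment_score(leaf_probs, true_label):
    """Fraction of probability mass that stays on the true label's path to the
    root, averaged over hierarchy levels (an illustrative agreement measure)."""
    node_mass = propagate(leaf_probs)
    path = [true_label] + ancestors(true_label)
    return sum(node_mass[n] for n in path) / len(path)

# Example: the model confuses "bolo tie" with "bow tie".
probs = {"bolo tie": 0.45, "bow tie": 0.40, "car tire": 0.10, "hubcap": 0.05}
print(alignment_score(probs, "bolo tie"))  # ~0.77
```

In this toy example, confusing "bolo tie" with "bow tie" keeps most probability mass inside the human abstraction "clothing", so the error scores as relatively aligned; spreading the same mass onto "car tire" would lower the score. This mirrors the abstract's claim that abstraction alignment differentiates errors by their agreement with human knowledge.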
Related papers
- Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [65.82564074712836]
We introduce DiffusionHOI, a new HOI detector shedding light on text-to-image diffusion models.
We first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space.
These learned relation embeddings then serve as textual prompts to steer diffusion models toward generating images that depict specific interactions.
arXiv Detail & Related papers (2024-10-26T12:00:33Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Emergence and Function of Abstract Representations in Self-Supervised Transformers [0.0]
We study the inner workings of small-scale transformers trained to reconstruct partially masked visual scenes.
We show that the network develops intermediate abstract representations, or abstractions, that encode all semantic features of the dataset.
Using precise manipulation experiments, we demonstrate that abstractions are central to the network's decision-making process.
arXiv Detail & Related papers (2023-12-08T20:47:15Z)
- The Relational Bottleneck as an Inductive Bias for Efficient Abstraction [3.19883356005403]
We show that neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs.
We review a family of models that employ this approach to induce abstractions in a data-efficient manner.
arXiv Detail & Related papers (2023-09-12T22:44:14Z)
- Systematic Visual Reasoning through Object-Centric Relational Abstraction [5.914610036560008]
We introduce OCRA, a model that extracts explicit representations of both objects and abstract relations.
It achieves strong systematic generalization in tasks involving complex visual displays.
arXiv Detail & Related papers (2023-06-04T22:47:17Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models: inducing abstract concepts from concrete instances and flexibly applying them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Towards Computing an Optimal Abstraction for Structural Causal Models [16.17846886492361]
We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
arXiv Detail & Related papers (2022-08-01T14:35:57Z)
- Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution [34.2658286826597]
We propose a two-step method to interpret summarization model decisions.
We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes.
After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.
arXiv Detail & Related papers (2021-06-03T00:54:16Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models spatio-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.