Automatic Concept Extraction for Concept Bottleneck-based Video
Classification
- URL: http://arxiv.org/abs/2206.10129v1
- Date: Tue, 21 Jun 2022 06:22:35 GMT
- Title: Automatic Concept Extraction for Concept Bottleneck-based Video
Classification
- Authors: Jeya Vikranth Jeyakumar, Luke Dickens, Luis Garcia, Yu-Hsi Cheng,
Diego Ramirez Echavarria, Joseph Noor, Alessandra Russo, Lance Kaplan, Erik
Blasch, Mani Srivastava
- Abstract summary: We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
- Score: 58.11884357803544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent efforts in interpretable deep learning models have shown that
concept-based explanation methods achieve competitive accuracy with standard
end-to-end models and enable reasoning and intervention about extracted
high-level visual concepts from images, e.g., identifying the wing color and
beak length for bird-species classification. However, these concept bottleneck
models rely on a necessary and sufficient set of predefined concepts, which is
intractable for complex tasks such as video classification. For complex tasks,
the labels and the relationship between visual elements span many frames, e.g.,
identifying a bird flying or catching prey, necessitating concepts with various
levels of abstraction. To this end, we present CoDEx, an automatic Concept
Discovery and Extraction module that rigorously composes a necessary and
sufficient set of concept abstractions for concept-based video classification.
CoDEx identifies a rich set of complex concept abstractions from natural
language explanations of videos, obviating the need to predefine the amorphous
set of concepts. To demonstrate our method's viability, we construct two new
public datasets that combine existing complex video classification datasets
with short, crowd-sourced natural language explanations for their labels. Our
method elicits inherent complex concept abstractions in natural language to
generalize concept-bottleneck methods to complex tasks.
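The abstract's core mechanism, common to all concept-bottleneck models, is a two-stage prediction: the input is first mapped to interpretable concept activations, and the label is predicted only from those concepts, which is what makes human inspection and intervention possible. The sketch below illustrates that idea in plain NumPy; it is a generic toy, not CoDEx itself, and all layer shapes, weights, and names are illustrative assumptions.

```python
import numpy as np

# Toy concept-bottleneck sketch (illustrative; not the CoDEx pipeline).
# Stage 1 maps input x to concept activations c; stage 2 predicts the
# label from c alone, so a human can inspect or override any concept.

rng = np.random.default_rng(0)

n_features, n_concepts, n_classes = 8, 3, 2
W_concept = rng.normal(size=(n_concepts, n_features))  # x -> concepts
W_label = rng.normal(size=(n_classes, n_concepts))     # concepts -> label


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def predict(x, intervention=None):
    """Return (concepts, class scores); optionally set concept k to value v."""
    c = sigmoid(W_concept @ x)  # concept activations, each in [0, 1]
    if intervention is not None:
        k, v = intervention
        c = c.copy()
        c[k] = v                # human intervention: overwrite concept k
    return c, W_label @ c


x = rng.normal(size=n_features)
concepts, scores = predict(x)
# Intervene: force concept 0 fully "on" and observe the label scores change.
_, scores_fixed = predict(x, intervention=(0, 1.0))
```

Because the label head sees only the concept vector, editing one concept propagates directly to the class scores; this is the intervention property the abstract refers to, and the property CoDEx preserves while discovering the concepts from natural-language explanations instead of a predefined list.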
Related papers
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Neural Concept Binder [22.074896812195437]
We introduce the Neural Concept Binder, a new framework for deriving discrete concept representations.
These encodings leverage both "soft binding" via object-centric block-slot encodings and "hard binding" via retrieval-based inference.
We demonstrate that incorporating the hard binding mechanism does not compromise performance; instead, it enables seamless integration into both neural and symbolic modules.
arXiv Detail & Related papers (2024-06-14T11:52:09Z)
- Coarse-to-Fine Concept Bottleneck Models [9.910980079138206]
This work targets ante hoc interpretability, and specifically Concept Bottleneck Models (CBMs).
Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on two levels of granularity.
Within this framework, concept information does not solely rely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene.
arXiv Detail & Related papers (2023-10-03T14:57:31Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework of Text-to-Image generation for Abstract Concepts (TIAC)
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure of exploring the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.