Automatic Concept Extraction for Concept Bottleneck-based Video
Classification
- URL: http://arxiv.org/abs/2206.10129v1
- Date: Tue, 21 Jun 2022 06:22:35 GMT
- Title: Automatic Concept Extraction for Concept Bottleneck-based Video
Classification
- Authors: Jeya Vikranth Jeyakumar, Luke Dickens, Luis Garcia, Yu-Hsi Cheng,
Diego Ramirez Echavarria, Joseph Noor, Alessandra Russo, Lance Kaplan, Erik
Blasch, Mani Srivastava
- Abstract summary: We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
- Score: 58.11884357803544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent efforts in interpretable deep learning models have shown that
concept-based explanation methods achieve competitive accuracy with standard
end-to-end models and enable reasoning about, and intervention on, high-level
visual concepts extracted from images, e.g., identifying the wing color and
beak length for bird-species classification. However, these concept bottleneck
models rely on a necessary and sufficient set of predefined concepts, which is
intractable for complex tasks such as video classification. For complex tasks,
the labels and the relationship between visual elements span many frames, e.g.,
identifying a bird flying or catching prey, necessitating concepts with various
levels of abstraction. To this end, we present CoDEx, an automatic Concept
Discovery and Extraction module that rigorously composes a necessary and
sufficient set of concept abstractions for concept-based video classification.
CoDEx identifies a rich set of complex concept abstractions from natural
language explanations of videos, obviating the need to predefine the amorphous
set of concepts. To demonstrate our method's viability, we construct two new
public datasets that combine existing complex video classification datasets
with short, crowd-sourced natural language explanations for their labels. Our
method elicits inherent complex concept abstractions in natural language to
generalize concept-bottleneck methods to complex tasks.
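
The abstract describes a two-stage pipeline: mine concept abstractions from free-text explanations of videos, then route video features through a concept layer before the class label. Below is a minimal sketch of that idea, not the authors' released implementation; the bigram-counting heuristic, the `VideoConceptBottleneck` module, and all feature dimensions are illustrative assumptions.

```python
# Minimal sketch of a concept-bottleneck pipeline in the spirit of CoDEx:
# (1) mine candidate concept phrases from natural-language explanations,
# (2) route video features through a concept layer before classification.
# The mining heuristic and module names are assumptions for illustration.
from collections import Counter

import torch
import torch.nn as nn


def mine_concepts(explanations, top_k=8):
    """Naive stand-in for concept discovery: keep the most frequent
    two-word phrases (bigrams) across crowd-sourced explanations."""
    counts = Counter()
    for text in explanations:
        tokens = text.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return [" ".join(bigram) for bigram, _ in counts.most_common(top_k)]


class VideoConceptBottleneck(nn.Module):
    """Video features -> per-concept scores -> class logits.

    The concept layer can be supervised with concept labels derived from
    the explanations, so predictions can be inspected and intervened on."""

    def __init__(self, feat_dim, num_concepts, num_classes):
        super().__init__()
        self.concept_head = nn.Linear(feat_dim, num_concepts)
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, video_feats):
        # video_feats: (batch, feat_dim), e.g. pooled frame embeddings
        concept_logits = self.concept_head(video_feats)
        class_logits = self.classifier(torch.sigmoid(concept_logits))
        return concept_logits, class_logits


# Toy usage with made-up explanations and random features.
explanations = [
    "the bird is flying over water",
    "the bird is catching prey mid flight",
]
concepts = mine_concepts(explanations)
model = VideoConceptBottleneck(feat_dim=512,
                               num_concepts=len(concepts),
                               num_classes=10)
concept_logits, class_logits = model(torch.randn(4, 512))
```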
Related papers
- From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning [3.645603633040378]
This paper introduces a multimodal generative approach to high-order abstract concept learning.
Our model initially grounds subordinate level concrete concepts, combines them to form basic level concepts, and finally abstracts to superordinate level concepts.
We evaluate the model's language-learning ability through language-to-visual and visual-to-language tests with high-order abstract concepts.
arXiv Detail & Related papers (2024-10-03T10:24:24Z)
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework for Text-to-Image generation of Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure to explore the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)