From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning
- URL: http://arxiv.org/abs/2410.02365v1
- Date: Thu, 3 Oct 2024 10:24:24 GMT
- Title: From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning
- Authors: Haodong Xie, Rahul Singh Maharjan, Federico Tavella, Angelo Cangelosi
- Abstract summary: This paper introduces a multimodal generative approach to high-order abstract concept learning.
Our model initially grounds subordinate-level concrete concepts, combines them to form basic-level concepts, and finally abstracts to superordinate-level concepts.
We evaluate the model's language learning ability through language-to-visual and visual-to-language tests with high-order abstract concepts.
- Score: 3.645603633040378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding and manipulating concrete and abstract concepts is fundamental to human intelligence, yet it remains challenging for artificial agents. This paper introduces a multimodal generative approach to high-order abstract concept learning that integrates visual and categorical linguistic information from concrete concepts. Our model initially grounds subordinate-level concrete concepts, combines them to form basic-level concepts, and finally abstracts to superordinate-level concepts via the grounding of basic-level concepts. We evaluate the model's language learning ability through language-to-visual and visual-to-language tests with high-order abstract concepts. Experimental results demonstrate the model's proficiency in both language understanding and language naming tasks.
Related papers
- A Survey on Compositional Learning of AI Models: Theoretical and Experimental Practices [15.92779896185647]
Compositional learning is crucial for human cognition, especially in human language comprehension and visual perception.
Despite its integral role in intelligence, there is a lack of systematic theoretical and experimental research methodologies.
This paper surveys the literature on compositional learning of AI models and the connections made to cognitive studies.
arXiv Detail & Related papers (2024-06-13T03:46:21Z) - Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z) - Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework of Text-to-Image generation for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z) - Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure of exploring the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.