Towards Visual Semantics
- URL: http://arxiv.org/abs/2104.12379v1
- Date: Mon, 26 Apr 2021 07:28:02 GMT
- Title: Towards Visual Semantics
- Authors: Fausto Giunchiglia and Luca Erculiani and Andrea Passerini
- Abstract summary: We study how humans build mental representations, i.e., concepts, of what they visually perceive.
In this paper we provide a theory and an algorithm which learns substance concepts corresponding to the concepts, called classification concepts, that in Lexical Semantics are used to encode word meanings.
The experiments, though preliminary, show that the algorithm manages to acquire the notions of Genus and Differentia with reasonable accuracy.
- Score: 17.1623244298824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Visual Semantics we study how humans build mental representations, i.e., concepts, of what they visually perceive. We call such concepts substance concepts. In this paper we provide a theory and an algorithm which learns substance concepts corresponding to the concepts, which we call classification concepts, that in Lexical Semantics are used to encode word meanings. The theory and algorithm are based on three main contributions: (i) substance concepts are modeled as visual objects, namely sequences of similar frames, as perceived in multiple encounters; (ii) substance concepts are organized into a visual subsumption hierarchy based on the notions of Genus and Differentia, which resemble the notions that, in Lexical Semantics, are used to construct hierarchies of classification concepts; (iii) human feedback is exploited not to name objects, as has been the case so far, but rather to align the hierarchy of substance concepts with that of classification concepts. The learning algorithm is implemented for the base case of a hierarchy of depth two. The experiments, though preliminary, show that the algorithm manages to acquire the notions of Genus and Differentia with reasonable accuracy, despite seeing a small number of examples and receiving supervision on only a fraction of them.
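As an illustration only (our sketch, not the authors' code; all names and the similarity threshold are hypothetical), the depth-two setting can be pictured with a small data structure: a substance concept is a set of frame embeddings gathered over multiple encounters, a Genus node groups sibling concepts, and the Differentia is what separates a child from the shared Genus prototype.

```python
import numpy as np

class SubstanceConcept:
    """A substance concept: frame embeddings collected across
    multiple encounters with (what is taken to be) the same object."""

    def __init__(self, name):
        self.name = name
        self.frames = []  # one embedding vector per perceived frame

    def add_encounter(self, frame_embeddings):
        self.frames.extend(frame_embeddings)

    def prototype(self):
        return np.mean(self.frames, axis=0)


class Genus:
    """Depth-two hierarchy node grouping sibling substance concepts."""

    def __init__(self, name, children):
        self.name = name
        self.children = children

    def prototype(self):
        return np.mean([c.prototype() for c in self.children], axis=0)

    def differentia(self, child):
        # What sets the child apart from what its siblings share.
        return child.prototype() - self.prototype()


def subsumes(genus, concept, threshold=0.8):
    """Toy visual subsumption test: the Genus subsumes a concept when
    their prototypes are similar enough (cosine similarity)."""
    a, b = genus.prototype(), concept.prototype()
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold
```

In this picture, the human feedback described in the paper would not rename individual frames but rather adjust which concepts sit under which Genus, aligning the learned visual hierarchy with a lexical one.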
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
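The formal definition is in the paper itself; purely as a toy illustration of the algorithmic-information-theory flavor (our assumption, not the paper's metric), compressed length can stand in for description complexity: a compositional set of descriptions adds little complexity beyond its reusable parts.

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length as a crude proxy for description complexity."""
    return len(zlib.compress(s))

colors = [b"red", b"green", b"blue"]
shapes = [b"cube", b"sphere", b"cone"]

# All nine color-shape composites, written out in full.
composites = b" ".join(col + b"-" + sh for col in colors for sh in shapes)

parts_cost = c(b" ".join(colors + shapes))  # describe the six parts once
joint_cost = c(composites)                  # describe all nine composites

# For a compositional code, the composites carry little information
# beyond the parts plus the pairing rule, so the gap stays small.
print(parts_cost, joint_cost, joint_cost - parts_cost)
```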
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Identifying and interpreting non-aligned human conceptual representations using language modeling [0.0]
We show that congenital blindness induces conceptual reorganization in both a-modal and sensory-related verbal domains.
We find that blind individuals more strongly associate social and cognitive meanings to verbs related to motion.
For some verbs, the representations of blind and sighted individuals are highly similar.
arXiv Detail & Related papers (2024-03-10T13:02:27Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concept variables help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Analyzing Encoded Concepts in Transformer Language Models [21.76062029833023]
ConceptX analyses how latent concepts are encoded in representations learned within pre-trained language models.
It uses clustering to discover the encoded concepts and explains them by aligning with a large set of human-defined concepts.
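As a hedged sketch of that general recipe (not the ConceptX implementation; the representations and gold labels below are random placeholders), one can cluster token representations and explain each cluster by the human-defined concept it overlaps most:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholders: one vector per token from a pre-trained LM, and a
# human-defined concept label per token (e.g. a POS tag).
rng = np.random.default_rng(0)
reps = rng.normal(size=(500, 64))
gold = rng.choice(["NOUN", "VERB", "ADJ"], size=500)

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(reps)

# Align each latent cluster with its best-overlapping human concept.
for k in range(10):
    members = gold[clusters == k]
    labels, counts = np.unique(members, return_counts=True)
    print(f"cluster {k}: {labels[counts.argmax()]} "
          f"(purity {counts.max() / counts.sum():.2f})")
```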
arXiv Detail & Related papers (2022-06-27T13:32:10Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure to explore the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Building a visual semantics aware object hierarchy [0.0]
We propose a novel unsupervised method to build a visual-semantics-aware object hierarchy.
Our intuition in this paper comes from real-world knowledge representation where concepts are hierarchically organized.
The evaluation consists of two parts: first, we apply the constructed hierarchy to the object recognition task; then we compare our visual hierarchy with existing lexical hierarchies to show the validity of our method.
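A minimal sketch of one way to obtain such a hierarchy without supervision (our illustration under assumed inputs, not the paper's method): agglomerative clustering over per-object visual embeddings yields a merge tree, and cutting it at different heights reads off coarser or finer categories.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder visual embeddings, one per object instance.
rng = np.random.default_rng(1)
embeddings = rng.normal(size=(20, 128))

# Bottom-up merging: visually similar objects join low in the tree,
# superordinate groupings emerge higher up.
tree = linkage(embeddings, method="average", metric="cosine")

# Cut the tree into two top-level categories.
coarse = fcluster(tree, t=2, criterion="maxclust")
print(coarse)
```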
arXiv Detail & Related papers (2022-02-26T00:10:21Z)
- Toward a Visual Concept Vocabulary for GAN Latent Space [74.12447538049537]
This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space.
Our approach is built from three components, including automatic identification of perceptually salient directions based on their layer selectivity, and human annotation of these directions with free-form, compositional natural language descriptions.
Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers.
arXiv Detail & Related papers (2021-10-08T17:58:19Z)
- Classifying concepts via visual properties [5.1652563977194434]
We introduce a general methodology for building lexico-semantic hierarchies of substance concepts.
The key novelty is that the hierarchy is built by exploiting the visual properties of substance concepts.
The validity of the approach is exemplified by providing some highlights of an ongoing project.
arXiv Detail & Related papers (2021-05-19T22:24:30Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
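The lattice-theoretic side can be made concrete with formal concept analysis; the self-contained sketch below (our illustration with a toy context, not the report's construction) enumerates the formal concepts of a small object × attribute table, each being a closed (extent, intent) pair.

```python
from itertools import combinations

# Toy context: which objects exhibit which binarized features.
context = {
    "robin":   {"flies", "feathered"},
    "penguin": {"feathered", "swims"},
    "trout":   {"swims"},
}
objects = list(context)
attributes = {a for attrs in context.values() for a in attrs}

def common_attrs(objs):
    """Attributes shared by every object in objs (the intent)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_with(attrs):
    """Objects possessing every attribute in attrs (the extent)."""
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (extent, intent) closed under both maps.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attrs(set(objs))
        extent = objects_with(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

Ordered by extent inclusion, these concepts form exactly the kind of lattice the report connects to representation learning.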
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Visual Concept-Metaconcept Learning [101.62725114966211]
We propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs.
Knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects.
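A hedged toy of the metaconcept idea (hand-set embeddings, not the VCML model): if the relation "describes the same property" is computed in embedding space rather than memorized per pair, evidence about color words transfers to shape words.

```python
import numpy as np

# Hand-set concept embeddings: the dominant axis marks which object
# property (axis 0 = color, axis 1 = shape) the concept describes.
emb = {
    "red":    np.array([0.9, 0.1]),
    "green":  np.array([0.8, 0.2]),
    "cube":   np.array([0.1, 0.9]),
    "sphere": np.array([0.2, 0.8]),
}

def same_property(a, b):
    """Metaconcept: do two concepts describe the same object property?
    Read off here as agreement of the dominant embedding axis."""
    return int(np.argmax(emb[a])) == int(np.argmax(emb[b]))

assert same_property("red", "green")      # known from supervision
assert same_property("cube", "sphere")    # generalizes for free
assert not same_property("red", "cube")
```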
arXiv Detail & Related papers (2020-02-04T18:42:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.