Classifying concepts via visual properties
- URL: http://arxiv.org/abs/2105.09422v1
- Date: Wed, 19 May 2021 22:24:30 GMT
- Title: Classifying concepts via visual properties
- Authors: Fausto Giunchiglia and Mayukh Bagchi
- Abstract summary: We introduce a general methodology for building lexico-semantic hierarchies of substance concepts.
The key novelty is that the hierarchy is built exploiting the visual properties of substance concepts.
The validity of the approach is exemplified by providing some highlights of an ongoing project.
- Score: 5.1652563977194434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We assume that substances in the world are represented by two types of
concepts, namely substance concepts and classification concepts, the former
instrumental to (visual) perception, the latter to (language based)
classification. Based on this distinction, we introduce a general methodology
for building lexico-semantic hierarchies of substance concepts, where nodes are
annotated with the media, e.g., videos or photos, from which substance concepts
are extracted, and are associated with the corresponding classification
concepts. The methodology is based on Ranganathan's original faceted approach,
contextualized to the problem of classifying substance concepts. The key
novelty is that the hierarchy is built exploiting the visual properties of
substance concepts, while the linguistically defined properties of
classification concepts are only used to describe substance concepts. The
validity of the approach is exemplified by providing some highlights of an
ongoing project whose goal is to build a large-scale multimedia multilingual
concept hierarchy.
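To make the proposed data model concrete, the following is a minimal Python sketch of how a node of such a hierarchy could be represented. The field names and the toy "animal"/"dog" fragment are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubstanceConceptNode:
    """One node of the lexico-semantic hierarchy.

    The node is placed in the hierarchy by the visual properties of its
    substance concept, is annotated with the media it was extracted from,
    and is described by linguistically defined classification concepts.
    """
    name: str
    visual_properties: List[str]  # facets used to place the node
    media: List[str] = field(default_factory=list)  # e.g. photo/video URLs
    classification_concepts: List[str] = field(default_factory=list)  # linguistic labels
    parent: Optional["SubstanceConceptNode"] = None
    children: List["SubstanceConceptNode"] = field(default_factory=list)

    def add_child(self, child: "SubstanceConceptNode") -> None:
        child.parent = self
        self.children.append(child)

# A toy fragment: 'dog' specializes 'animal' via additional visual properties.
animal = SubstanceConceptNode("animal", ["moves", "has-body"])
dog = SubstanceConceptNode(
    "dog",
    ["moves", "has-body", "has-fur", "has-tail"],
    media=["dog_0001.jpg"],
    classification_concepts=["domestic animal", "canine"],
)
animal.add_child(dog)
```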
Related papers
- Domain Embeddings for Generating Complex Descriptions of Concepts in Italian Language [65.268245109828]
We propose a Distributional Semantic resource enriched with linguistic and lexical information extracted from electronic dictionaries.
The resource comprises 21 domain-specific matrices, one comprehensive matrix, and a Graphical User Interface.
Our model facilitates the generation of reasoned semantic descriptions of concepts by selecting matrices directly associated with concrete conceptual knowledge.
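As a rough sketch of how one of the domain-specific matrices might be queried, assuming rows index words and columns distributional dimensions; the vocabulary, values, and `nearest` helper below are made up, not the released resource's API:

```python
import numpy as np

# Hypothetical stand-in for one domain-specific matrix:
# rows are target words, columns are distributional dimensions.
vocab = ["cane", "gatto", "albero"]  # Italian: dog, cat, tree
matrix = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.0, 0.9],
])

def nearest(word: str, k: int = 2):
    """Return the k nearest neighbours of `word` by cosine similarity."""
    i = vocab.index(word)
    v = matrix[i]
    sims = matrix @ v / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(v))
    order = np.argsort(-sims)
    return [(vocab[j], float(sims[j])) for j in order if j != i][:k]

print(nearest("cane"))  # -> [('gatto', ...), ('albero', ...)]
```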
arXiv Detail & Related papers (2024-02-26T15:04:35Z)
- What do Deck Chairs and Sun Hats Have in Common? Uncovering Shared Properties in Large Concept Vocabularies [33.879307754303746]
Concepts play a central role in many applications.
Previous work has focused on distilling decontextualised concept embeddings from language models.
We propose a strategy for identifying what different concepts, drawn from a potentially large concept vocabulary, have in common with others.
We then represent each concept in terms of the properties it shares with the other concepts.
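A minimal sketch of this idea, representing a concept by the properties it shares with the rest of the vocabulary; the toy property sets below are invented here, whereas the paper derives them from language models:

```python
# Toy property assignments; the paper extracts these from language models.
properties = {
    "deck chair": {"foldable", "used outdoors", "made for summer"},
    "sun hat":    {"wearable", "used outdoors", "made for summer"},
    "umbrella":   {"foldable", "used outdoors"},
}

def shared_property_vector(concept: str):
    """Represent `concept` by the properties it shares with the others."""
    others = set().union(*(p for c, p in properties.items() if c != concept))
    return sorted(properties[concept] & others)

print(shared_property_vector("deck chair"))
# -> ['foldable', 'made for summer', 'used outdoors']
```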
arXiv Detail & Related papers (2023-10-23T10:53:25Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework for Text-to-Image generation for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
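A schematic sketch of such a pipeline is shown below; the `clarify_intent` and `retrieve_form` steps are stubs standing in for LLM calls, and all strings are invented for illustration:

```python
def clarify_intent(abstract_concept: str) -> str:
    """Stub: a real system would ask an LLM for a detailed,
    unambiguous definition of the abstract concept."""
    return f"a scene that conveys the idea of {abstract_concept}"

def retrieve_form(abstract_concept: str) -> str:
    """Stub: a real system would retrieve a concept-dependent
    physical form from an LLM-extracted form pattern set."""
    return "a lone tree on a hill at dawn"

def build_prompt(abstract_concept: str) -> str:
    """Combine the clarified intent with the retrieved form into a
    text-to-image prompt."""
    intent = clarify_intent(abstract_concept)
    form = retrieve_form(abstract_concept)
    return f"{form}, depicted as {intent}"

print(build_prompt("hope"))
```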
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- Analyzing Encoded Concepts in Transformer Language Models [21.76062029833023]
ConceptX analyses how latent concepts are encoded in representations learned within pre-trained language models.
It uses clustering to discover the encoded concepts and explains them by aligning with a large set of human-defined concepts.
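A toy version of this discover-then-align loop, assuming contextual token representations and human concept labels are already available; here random data stands in for the language-model representations:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical contextual token representations and their human labels;
# ConceptX obtains the representations from a pre-trained language model.
rng = np.random.default_rng(0)
reps = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(5, 1, (20, 8))])
labels = ["colour"] * 20 + ["animal"] * 20

# Step 1: discover latent concepts by clustering the representations.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reps)

# Step 2: explain each cluster by aligning it with the human-defined
# concept that dominates it.
for c in range(2):
    members = [labels[i] for i in range(len(labels)) if clusters[i] == c]
    dominant = max(set(members), key=members.count)
    purity = members.count(dominant) / len(members)
    print(f"cluster {c}: aligned with '{dominant}' (purity {purity:.2f})")
```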
arXiv Detail & Related papers (2022-06-27T13:32:10Z)
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
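The sketch below shows only the generic concept-bottleneck architecture that a discovered concept set would plug into; the dimensions are made up, and the concept discovery step itself is not reproduced:

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Generic concept-bottleneck classifier: inputs are first mapped to
    a vector of concept scores, and the class prediction is computed
    from those scores alone."""
    def __init__(self, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, n_concepts)  # bottleneck
        self.to_classes = nn.Linear(n_concepts, n_classes)  # head over concepts

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.to_concepts(x))  # concept activations
        return self.to_classes(concepts), concepts

# Toy usage with pooled video features of dimension 512.
model = ConceptBottleneck(feat_dim=512, n_concepts=32, n_classes=10)
logits, concepts = model(torch.randn(4, 512))
print(logits.shape, concepts.shape)  # torch.Size([4, 10]) torch.Size([4, 32])
```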
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure to explore the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
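One way to read "semantic-aware visual subspaces" is as a learned projection per superordinate, so that similarity is computed inside the relevant subspace; the sketch below follows that reading as an assumption and is not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class SuperordinateSubspaces(nn.Module):
    """Sketch: one learned linear projection per superordinate
    (e.g. colour, shape), so that concepts are compared inside the
    semantically relevant subspace rather than in the full feature space."""
    def __init__(self, feat_dim: int, sub_dim: int, superordinates):
        super().__init__()
        self.proj = nn.ModuleDict(
            {name: nn.Linear(feat_dim, sub_dim, bias=False) for name in superordinates}
        )

    def similarity(self, a, b, superordinate: str):
        pa = self.proj[superordinate](a)
        pb = self.proj[superordinate](b)
        return nn.functional.cosine_similarity(pa, pb, dim=-1)

model = SuperordinateSubspaces(512, 64, ["colour", "shape", "material"])
a, b = torch.randn(1, 512), torch.randn(1, 512)
print(model.similarity(a, b, "colour").item())
```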
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Building a visual semantics aware object hierarchy [0.0]
We propose a novel unsupervised method to build a visual semantics aware object hierarchy.
Our intuition in this paper comes from real-world knowledge representation where concepts are hierarchically organized.
The evaluation consists of two parts: first, we apply the constructed hierarchy to the object recognition task; then we compare our visual hierarchy with existing lexical hierarchies to show the validity of our method.
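A minimal unsupervised stand-in for such hierarchy construction is agglomerative clustering over visual features; the feature vectors below are fabricated, and the paper's actual method may differ substantially:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Hypothetical visual feature vectors for a handful of objects,
# standing in for the visual semantics the paper builds on.
objects = ["husky", "beagle", "tabby cat", "oak", "pine"]
feats = np.array([
    [0.90, 0.80, 0.10],
    [0.85, 0.75, 0.15],
    [0.70, 0.60, 0.20],
    [0.10, 0.20, 0.90],
    [0.15, 0.25, 0.85],
])

# Unsupervised hierarchy: agglomerative clustering over the features.
Z = linkage(feats, method="average", metric="cosine")
print(Z)  # each row merges two clusters into a parent node
```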
arXiv Detail & Related papers (2022-02-26T00:10:21Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning is the task of learning composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve the key difficulties of this task in a unified framework.
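The name suggests a TransE-style reading, where the attribute embedding acts as a translation applied to the object embedding; the sketch below follows that assumption with random, untrained embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Hypothetical embeddings; in the paper these are learned end-to-end.
objects = {o: rng.normal(size=dim) for o in ["apple", "car"]}
attributes = {a: rng.normal(size=dim) for a in ["red", "sliced"]}

def compose(attr: str, obj: str) -> np.ndarray:
    """Translational composition: the attribute acts as a translation
    vector applied to the object embedding, in the spirit of TransE."""
    return objects[obj] + attributes[attr]

def score(query: np.ndarray, attr: str, obj: str) -> float:
    """Lower distance = better match for the composed pair."""
    return float(np.linalg.norm(query - compose(attr, obj)))

# Rank all attribute-object pairs against a slightly noisy query.
query = compose("red", "apple") + rng.normal(scale=0.01, size=dim)
best = min(((a, o) for a in attributes for o in objects),
           key=lambda p: score(query, *p))
print(best)  # -> ('red', 'apple')
```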
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- Object Recognition as Classification of Visual Properties [5.1652563977194434]
We present an object recognition process based on Ranganathan's four-phased faceted knowledge organization process.
We briefly introduce the ongoing project MultiMedia UKC, whose aim is to build an object recognition resource.
arXiv Detail & Related papers (2021-12-20T13:50:07Z)
- Toward a Visual Concept Vocabulary for GAN Latent Space [74.12447538049537]
This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space.
Our approach combines the automatic identification of perceptually salient latent directions, based on their layer selectivity, with human annotation of these directions using free-form, compositional natural language descriptions.
Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers.
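A minimal sketch of how annotated latent directions could be composed at edit time; the direction vectors and labels below are placeholders, and the generator call is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# Hypothetical annotated directions in a GAN's latent space; in the paper
# these are salient directions labelled with natural-language descriptions.
directions = {
    "snowy": rng.normal(size=latent_dim),
    "at dusk": rng.normal(size=latent_dim),
}

def edit(z: np.ndarray, concepts, strength: float = 1.0) -> np.ndarray:
    """Compose concept directions by adding them to the latent code;
    the edited code would then be fed back through the generator."""
    step = sum(directions[c] for c in concepts)
    return z + strength * step

z = rng.normal(size=latent_dim)
z_edited = edit(z, ["snowy", "at dusk"], strength=0.8)
```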
arXiv Detail & Related papers (2021-10-08T17:58:19Z)
- Towards Visual Semantics [17.1623244298824]
We study how humans build mental representations, i.e., concepts, of what they visually perceive.
In this paper we provide a theory and an algorithm that learns the substance concepts which correspond to the linguistically defined concepts we call classification concepts.
The experiments, though preliminary, show that the algorithm manages to acquire the notions of Genus and Differentia with reasonable accuracy.
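Read in the classical Aristotelian way, the Genus is what a concept shares with its parent and the Differentia is what distinguishes it; a toy illustration with hand-written property sets (the paper's algorithm learns these from visual input rather than taking them as given):

```python
# Toy visual property sets; the paper's algorithm learns these from
# visual input rather than taking them as given.
props = {
    "animal": {"moves", "has-body"},
    "dog":    {"moves", "has-body", "has-fur", "barks"},
}

def genus_and_differentia(concept: str, parent: str):
    """Aristotelian reading: the Genus is what the concept shares with
    its parent; the Differentia is what sets it apart."""
    genus = props[concept] & props[parent]
    differentia = props[concept] - props[parent]
    return genus, differentia

g, d = genus_and_differentia("dog", "animal")
print("Genus:", sorted(g))        # shared with 'animal'
print("Differentia:", sorted(d))  # distinguishing properties
```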
arXiv Detail & Related papers (2021-04-26T07:28:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.