What do Deck Chairs and Sun Hats Have in Common? Uncovering Shared
Properties in Large Concept Vocabularies
- URL: http://arxiv.org/abs/2310.14793v1
- Date: Mon, 23 Oct 2023 10:53:25 GMT
- Authors: Amit Gajbhiye, Zied Bouraoui, Na Li, Usashi Chatterjee, Luis Espinosa
Anke, Steven Schockaert
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Concepts play a central role in many applications. This includes settings
where concepts have to be modelled in the absence of sentence context. Previous
work has therefore focused on distilling decontextualised concept embeddings
from language models. But concepts can be modelled from different perspectives,
whereas concept embeddings mostly capture taxonomic structure. To
address this issue, we propose a strategy for identifying what different
concepts, from a potentially large concept vocabulary, have in common with
others. We then represent concepts in terms of the properties they share with
the other concepts. To demonstrate the practical usefulness of this way of
modelling concepts, we consider the task of ultra-fine entity typing, which is
a challenging multi-label classification problem. We show that by augmenting
the label set with shared properties, we can improve the performance of the
state-of-the-art models for this task.
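The abstract's pipeline — find the properties a concept shares with other concepts in the vocabulary, represent concepts by those shared properties, and use them to augment a typing label set — can be illustrated with a minimal sketch. This is not the authors' code; the toy concept-property sets and function names are hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical illustration (not the paper's implementation): concepts are
# represented by the properties they share with other vocabulary concepts,
# and a label set is augmented with those shared properties.

concept_properties = {
    "deck chair": {"used outdoors", "found at the beach", "made for summer"},
    "sun hat": {"worn on the head", "found at the beach", "made for summer"},
    "umbrella": {"used outdoors", "protects from rain"},
}

def shared_properties(concept, vocabulary):
    """Properties that `concept` shares with at least one other concept."""
    own = vocabulary[concept]
    shared = set()
    for other, props in vocabulary.items():
        if other != concept:
            shared |= own & props  # properties in common with this neighbour
    return shared

def augment_labels(labels, vocabulary):
    """Extend a label set with the properties its labels share with others."""
    augmented = set(labels)
    for label in labels:
        if label in vocabulary:
            augmented |= shared_properties(label, vocabulary)
    return augmented

# "sun hat" shares beach/summer properties with "deck chair", so those
# properties join the label set.
print(augment_labels({"sun hat"}, concept_properties))
```

In the paper the shared properties are of course identified at scale with learned models rather than hand-listed sets, but the augmentation step for ultra-fine entity typing follows this general shape: the label vocabulary grows by the properties that labels have in common.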
Related papers
- A Concept-Based Explainability Framework for Large Multimodal Models [52.37626977572413]
We propose a dictionary learning based approach, applied to the representation of tokens.
We show that these concepts are well semantically grounded in both vision and text.
We show that the extracted multimodal concepts are useful to interpret representations of test samples.
(2024-06-12)
- Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings [25.52752452574944]
Concept embeddings identify concepts which share some property of interest.
Standard embeddings reflect basic taxonomic categories, making them unsuitable for finding commonalities that refer to more specific aspects.
We show that this leads to embeddings which capture a more diverse range of commonsense properties, and consistently improves results in downstream tasks.
(2024-03-25)
- Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models [60.80960965051388]
Adjectives and verbs are entangled with the nouns (subjects) they modify.
Lego disentangles concepts from their associated subjects using a simple yet effective Subject Separation step.
Lego-generated concepts were preferred over 70% of the time when compared to the baseline.
(2023-11-23)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure of exploring the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
(2022-05-28)
- FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic Descriptions, and Conceptual Relations [99.54048050189971]
We present a framework for learning new visual concepts quickly, guided by multiple naturally occurring data streams.
The learned concepts support downstream applications, such as answering questions by reasoning about unseen images.
We demonstrate the effectiveness of our model on both synthetic and real-world datasets.
(2022-03-30)
- The Conceptual VAE [7.15767183672057]
We present a new model of concepts, based on the framework of variational autoencoders.
The model is inspired by, and closely related to, the Beta-VAE model of concepts.
We show how the model can be used as a concept classifier, and how it can be adapted to learn from fewer labels per instance.
(2022-03-21)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning means to learn composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
(2021-12-20)
- Separating Skills and Concepts for Novel Visual Question Answering [66.46070380927372]
Generalization to out-of-distribution data has been a problem for Visual Question Answering (VQA) models.
"Skills" are visual tasks, such as counting or attribute recognition, and are applied to "concepts" mentioned in the question.
We present a novel method for learning to compose skills and concepts that separates these two factors implicitly within a model.
(2021-07-19)
- Classifying concepts via visual properties [5.1652563977194434]
We introduce a general methodology for building lexico-semantic hierarchies of substance concepts.
The key novelty is that the hierarchy is built exploiting the visual properties of substance concepts.
The validity of the approach is exemplified by providing some highlights of an ongoing project.
(2021-05-19)