Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings
- URL: http://arxiv.org/abs/2403.16984v2
- Date: Tue, 4 Jun 2024 21:36:42 GMT
- Title: Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings
- Authors: Hanane Kteich, Na Li, Usashi Chatterjee, Zied Bouraoui, Steven Schockaert
- Abstract summary: Concept embeddings are used to identify commonalities, i.e., sets of concepts which share some property of interest.
Standard embeddings primarily reflect basic taxonomic categories, making them unsuitable for finding commonalities that refer to more specific aspects, such as the colour of objects or the materials they are made of.
Explicitly modelling the different facets of interest yields embeddings that capture a more diverse range of commonsense properties and consistently improve results in downstream tasks.
- Score: 25.52752452574944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept embeddings offer a practical and efficient mechanism for injecting commonsense knowledge into downstream tasks. Their core purpose is often not to predict the commonsense properties of concepts themselves, but rather to identify commonalities, i.e., sets of concepts which share some property of interest. Such commonalities are the basis for inductive generalisation, hence high-quality concept embeddings can make learning easier and more robust. Unfortunately, standard embeddings primarily reflect basic taxonomic categories, making them unsuitable for finding commonalities that refer to more specific aspects (e.g., the colour of objects or the materials they are made of). In this paper, we address this limitation by explicitly modelling the different facets of interest when learning concept embeddings. We show that this leads to embeddings which capture a more diverse range of commonsense properties, and consistently improves results in downstream tasks such as ultra-fine entity typing and ontology completion.
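To make the multi-facet idea concrete, here is a minimal sketch in Python. The facet names, the random projection matrices, and the neighbour search are illustrative assumptions: in the paper these facet-specific representations are learned, not sampled.

```python
# A minimal sketch of facet-specific concept embeddings, assuming we already
# have generic concept vectors (e.g. distilled from a language model).
# The facets, projections, and neighbour search are illustrative, not the
# paper's actual training procedure.
import numpy as np

rng = np.random.default_rng(0)

concepts = ["banana", "lemon", "taxi", "snow", "chalk"]
base = {c: rng.normal(size=64) for c in concepts}   # generic embeddings

# One linear projection per facet; in the paper's setting these would be
# learned so that concepts sharing a facet-specific property (e.g. being
# yellow) land close together in that facet's subspace.
facets = {f: rng.normal(size=(16, 64)) for f in ["colour", "material"]}

def facet_embedding(concept: str, facet: str) -> np.ndarray:
    """Project a concept's generic vector into one facet's subspace."""
    v = facets[facet] @ base[concept]
    return v / np.linalg.norm(v)

def facet_neighbours(query: str, facet: str, k: int = 2) -> list[str]:
    """Concepts most similar to `query` under a given facet."""
    q = facet_embedding(query, facet)
    sims = {c: float(q @ facet_embedding(c, facet))
            for c in concepts if c != query}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(facet_neighbours("banana", "colour"))
```

Under trained projections, "banana" would sit near "lemon" and "taxi" in the colour subspace (all yellow) while remaining far from them in the material subspace, which is exactly the kind of facet-dependent commonality that standard, taxonomy-dominated embeddings miss.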
Related papers
- What do Deck Chairs and Sun Hats Have in Common? Uncovering Shared Properties in Large Concept Vocabularies [33.879307754303746]
Concepts play a central role in many applications.
Previous work has focused on distilling decontextualised concept embeddings from language models.
We propose a strategy for identifying what different concepts, from a potentially large concept vocabulary, have in common with each other.
We then represent each concept in terms of the properties it shares with other concepts (a toy sketch follows this entry).
arXiv Detail & Related papers (2023-10-23T10:53:25Z)
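A toy illustration of the shared-property representation this entry describes. The property inventory and membership sets are invented for the example; the paper targets a much larger, automatically processed vocabulary.

```python
# Toy illustration of representing concepts by the properties they share,
# in the spirit of the "Deck Chairs and Sun Hats" entry above. The property
# inventory and membership sets are invented for the example.
properties = ["used_outdoors", "foldable", "made_of_fabric", "edible"]

concept_props = {
    "deck chair": {"used_outdoors", "foldable", "made_of_fabric"},
    "sun hat":    {"used_outdoors", "made_of_fabric"},
    "banana":     {"edible"},
}

def property_vector(concept: str) -> list[int]:
    """Binary vector over the shared-property vocabulary."""
    return [int(p in concept_props[concept]) for p in properties]

def shared(a: str, b: str) -> set[str]:
    """Properties two concepts have in common."""
    return concept_props[a] & concept_props[b]

print(shared("deck chair", "sun hat"))  # used_outdoors, made_of_fabric
```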
- Provable Compositional Generalization for Object-Centric Learning [55.658215686626484]
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally (a sketch of the consistency idea follows this entry).
arXiv Detail & Related papers (2023-10-09T01:18:07Z)
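A minimal sketch of the encoder-decoder consistency idea from the entry above, assuming toy MLPs and slot recombination by permutation; the paper's structural assumptions on the decoder are not modelled here.

```python
# Sketch of encoder-decoder consistency for object-centric learning:
# besides reconstructing the input, the encoder must recover the latents
# of decoded samples, including novel recombinations of object slots.
# The MLP architectures are placeholders, not the paper's construction.
import torch
import torch.nn as nn

n_slots, slot_dim, x_dim = 3, 8, 32
enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                    nn.Linear(64, n_slots * slot_dim))
dec = nn.Sequential(nn.Linear(n_slots * slot_dim, 64), nn.ReLU(),
                    nn.Linear(64, x_dim))

def consistency_loss(x: torch.Tensor) -> torch.Tensor:
    z = enc(x)            # infer slot latents from the input
    recon = dec(z)        # standard autoencoding term
    # Permute object slots to form a novel composition, decode it,
    # and ask the encoder to recover those latents.
    z_new = z.view(-1, n_slots, slot_dim)[:, torch.randperm(n_slots)].reshape_as(z)
    z_back = enc(dec(z_new))
    return (recon - x).pow(2).mean() + (z_back - z_new).pow(2).mean()

print(consistency_loss(torch.randn(4, x_dim)))
```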
- CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models [30.63276809199399]
We present CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question-answering.
Our system features multi-level visualization and interactive model probing and editing for different concepts and their underlying relations.
arXiv Detail & Related papers (2023-07-23T17:16:13Z)
- Vector-based Representation is the Key: A Study on Disentanglement and Compositional Generalization [77.57425909520167]
We show that it is possible to achieve both good concept recognition and novel concept composition.
We propose a method that reformulates scalar-based disentanglement approaches as vector-based ones to increase both capabilities (a sketch of the distinction follows this entry).
arXiv Detail & Related papers (2023-05-29T13:05:15Z)
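A small sketch of the scalar- versus vector-based distinction the entry draws; the codebook lookup below is an invented stand-in for a learned vector-based representation.

```python
# Scalar-based disentanglement gives each generative factor one latent
# number; a vector-based code gives each factor its own embedding.
# Dimensions and the codebook lookup are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Scalar-based: one latent dimension per factor.
scalar_code = {"shape": 0.7, "colour": -1.2}

# Vector-based: each factor value indexes a small codebook, so the
# factor is represented by a multi-dimensional vector.
codebooks = {
    "shape":  rng.normal(size=(4, 8)),   # 4 shape values, 8-dim vectors
    "colour": rng.normal(size=(6, 8)),   # 6 colour values
}

def encode(factor_values: dict[str, int]) -> np.ndarray:
    """Concatenate per-factor vectors into one latent code."""
    return np.concatenate([codebooks[f][v] for f, v in factor_values.items()])

z = encode({"shape": 2, "colour": 5})
print(z.shape)   # (16,): one 8-dim block per factor
```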
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure of exploring the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning is the task of learning composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve two core difficulties of this task in a unified framework (one plausible reading of the idea is sketched below).
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
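One plausible reading of "translational" composition, in the TransE tradition: an attribute embedding acts as a translation applied to an object embedding. This is a hedged guess at the mechanism, not the paper's exact model; all vectors are random stand-ins.

```python
# Guessed sketch of translational composition: an attribute-object pair
# is embedded as object + attribute translation, and a zero-shot pair is
# recognised by nearest composed embedding. Vectors are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
objects = {o: rng.normal(size=16) for o in ["apple", "car"]}
attributes = {a: rng.normal(size=16) for a in ["red", "old"]}

def compose(attr: str, obj: str) -> np.ndarray:
    """Embed an attribute-object pair as object + attribute translation."""
    return objects[obj] + attributes[attr]

def classify(x: np.ndarray) -> tuple[str, str]:
    """Nearest composed pair, including pairs unseen during training."""
    pairs = [(a, o) for a in attributes for o in objects]
    return min(pairs, key=lambda p: np.linalg.norm(x - compose(*p)))

print(classify(compose("red", "car") + 0.01 * rng.normal(size=16)))
```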
- Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches [24.786152654589067]
We give an overview of concept-based explanations and disentanglement approaches.
We show that state-of-the-art approaches from both classes can be data inefficient, sensitive to the specific nature of the classification/regression task, or sensitive to the employed concept representation.
arXiv Detail & Related papers (2021-04-14T15:06:34Z)
- Concepts, Properties and an Approach for Compositional Generalization [2.0559497209595823]
This report connects a series of our works on compositional generalization and summarizes an approach.
The approach uses architecture design and regularization to regulate the information carried by representations.
We hope this work helps clarify the fundamentals of compositional generalization and contributes to advancing artificial intelligence.
arXiv Detail & Related papers (2021-02-08T14:22:30Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization, and cell type annotation (a sketch of the concept-learner idea follows this entry).
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
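A minimal sketch of a COMET-style concept learner, assuming soft dimension masks per concept, per-concept class prototypes built from a few-shot support set, and learned concept-importance weights; all values below are random placeholders rather than trained parameters.

```python
# Sketch of a concept learner: each human-interpretable concept gets its
# own masked view of the input, distances to per-concept class prototypes
# are aggregated with concept-importance weights. Masks and weights are
# random/uniform placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(3)
n_concepts, x_dim = 4, 32

concept_masks = rng.uniform(size=(n_concepts, x_dim))   # soft dimension masks
concept_weights = np.ones(n_concepts) / n_concepts      # learned importances

def score(query: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Negative distance to each class, aggregated over concepts.

    prototypes: (n_classes, n_concepts, x_dim) per-concept class
    prototypes built from the support set of a few-shot episode.
    """
    d = np.linalg.norm(concept_masks * query - prototypes, axis=-1)
    return -(d * concept_weights).sum(axis=-1)

protos = rng.normal(size=(5, n_concepts, x_dim))   # 5-way episode
print(score(rng.normal(size=x_dim), protos).argmax())
```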
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.