VER: Unifying Verbalizing Entities and Relations
- URL: http://arxiv.org/abs/2211.11093v3
- Date: Mon, 23 Oct 2023 03:09:04 GMT
- Title: VER: Unifying Verbalizing Entities and Relations
- Authors: Jie Huang, Kevin Chen-Chuan Chang
- Abstract summary: We propose VER: a unified model for Verbalizing Entities and Relations.
In this paper, we attempt to build a system that takes any entity or entity set as input and generates a sentence to represent entities and relations.
- Score: 30.327166864967918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entities and relationships between entities are vital in the real world.
Essentially, we understand the world by understanding entities and relations.
For instance, to understand a field, e.g., computer science, we need to
understand the relevant concepts, e.g., machine learning, and the relationships
between concepts, e.g., machine learning and artificial intelligence. To
understand a person, we should first know who they are and how they are
related to others. To understand entities and relations, humans may refer to
natural language descriptions. For instance, when learning a new scientific
term, people usually start by reading its definition in dictionaries or
encyclopedias. To know the relationship between two entities, humans tend to
create a sentence to connect them. In this paper, we propose VER: a unified
model for Verbalizing Entities and Relations. Specifically, we attempt to build
a system that takes any entity or entity set as input and generates a sentence
to represent entities and relations. Extensive experiments demonstrate that our
model can generate high-quality sentences describing entities and entity
relationships and facilitate various tasks on entities and relations, including
definition modeling, relation modeling, and generative commonsense reasoning.
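As a concrete illustration of the interface the abstract describes, here is a minimal sketch assuming an off-the-shelf seq2seq model from Hugging Face transformers; the checkpoint name, separator token, and prompt format are illustrative assumptions, not the paper's released artifacts.

```python
# Minimal sketch of the "entities in, sentence out" interface the abstract
# describes. The checkpoint and input format are placeholders: VER itself
# fine-tunes a pretrained seq2seq model for this task.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/bart-base"  # placeholder, not the released VER checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def verbalize(entities):
    """One entity -> a definition-style sentence (definition modeling);
    an entity set -> a sentence connecting them (relation modeling)."""
    prompt = " <sep> ".join(entities)  # assumed input format
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(verbalize(["machine learning"]))                             # definition
print(verbalize(["machine learning", "artificial intelligence"]))  # relation
```

The same call signature covers the downstream tasks the abstract lists, which is the sense in which the model is "unified": only the input set changes, not the model.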
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- Neural Approaches to Entity-Centric Information Extraction [2.8935588665357077]
We introduce a radically different, entity-centric view of the information in text.
We argue that, instead of interpreting individual mentions in isolation, applications should operate in terms of entity concepts.
We show that entity linking improves when it is performed at the coreference-cluster level rather than for each mention individually, as sketched below.
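A minimal sketch of that cluster-level idea (the scoring function and data shapes are hypothetical, not the paper's implementation): pool candidate-entity evidence across all mentions in a coreference cluster, then link the whole cluster at once.

```python
# Hypothetical illustration: cluster-level entity linking pools linking
# scores over every mention in a coreference cluster, instead of deciding
# for each mention in isolation.
from collections import defaultdict

def link_cluster(mentions, candidate_scores):
    """mentions: mention strings in one coreference cluster.
    candidate_scores: mention -> {entity_id: score} from any mention-level linker.
    Returns the entity with the highest summed score across the cluster."""
    totals = defaultdict(float)
    for mention in mentions:
        for entity, score in candidate_scores.get(mention, {}).items():
            totals[entity] += score
    return max(totals, key=totals.get) if totals else None

cluster = ["Obama", "the president", "he"]
scores = {
    "Obama": {"Barack_Obama": 0.9, "Obama,_Japan": 0.1},
    "the president": {"Barack_Obama": 0.4, "President_(title)": 0.3},
}
print(link_cluster(cluster, scores))  # -> Barack_Obama
```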
arXiv Detail & Related papers (2023-04-15T20:07:37Z)
- Learning to Compose Visual Relations [100.45138490076866]
We propose to represent each relation as an unnormalized density (an energy-based model).
We show that such a factorized decomposition allows the model to both generate and edit scenes with multiple sets of relations more faithfully.
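A minimal sketch of that composition idea in PyTorch, assuming per-relation energy functions are already trained; summing energies corresponds to taking the product of the individual EBM densities, and the Langevin step is the standard way to sample from such a composed model (an illustration, not the paper's code).

```python
# Composing relations as energy-based models: a scene x satisfies all
# relations when the *sum* of per-relation energies is low, i.e. the
# composed density is the product of the individual densities.
import torch

def composed_energy(energy_fns, x, relations):
    """Sum unnormalized energies E_i(x, r_i) over all relation models."""
    return sum(fn(x, r) for fn, r in zip(energy_fns, relations))

def langevin_step(energy_fns, x, relations, step_size=0.01, noise=0.005):
    """One gradient step toward lower composed energy, plus noise."""
    x = x.detach().requires_grad_(True)
    energy = composed_energy(energy_fns, x, relations)
    (grad,) = torch.autograd.grad(energy.sum(), x)
    return (x - step_size * grad + noise * torch.randn_like(x)).detach()
```

Editing a scene then amounts to re-running the sampler with one relation's energy term swapped out while the others are kept fixed.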
arXiv Detail & Related papers (2021-11-17T18:51:29Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots are to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)
- Relation/Entity-Centric Reading Comprehension [1.0965065178451106]
We study reading comprehension with a focus on understanding entities and their relationships.
We focus on entities and relations because they are typically used to represent the semantics of natural language.
arXiv Detail & Related papers (2020-08-27T06:42:18Z)
- A model of interaction semantics [0.0]
I structure the model of interaction semantics similarly to the semantics of a formal language.
I arrive at a model of interaction semantics which, in the sense of the late Ludwig Wittgenstein, can do without a 'mental' mapping from characters to concepts.
arXiv Detail & Related papers (2020-07-13T09:22:59Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)