Resolving Indirect Referring Expressions for Entity Selection
- URL: http://arxiv.org/abs/2212.10933v2
- Date: Fri, 26 May 2023 20:17:15 GMT
- Title: Resolving Indirect Referring Expressions for Entity Selection
- Authors: Mohammad Javad Hosseini, Filip Radlinski, Silvia Pareti, Annie Louis
- Abstract summary: We address the problem of reference resolution when people use natural expressions to choose between entities.
We argue that robustly understanding such language has large potential for improving naturalness in dialog, recommendation, and search systems.
We create AltEntities (Alternative Entities), a new public dataset of 42K entity pairs and expressions (referring to one entity in the pair)
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in language modeling have enabled new conversational systems.
In particular, it is often desirable for people to make choices among specified
options when using such systems. We address the problem of reference resolution
when people use natural expressions to choose between entities. For example,
given the choice "Should we make a Simnel cake or a Pandan cake?", a natural
response from a dialog participant may be indirect: "let's make the green
one". Such natural expressions have been little studied for reference
resolution. We argue that robustly understanding such language has large
potential for improving naturalness in dialog, recommendation, and search
systems. We create AltEntities (Alternative Entities), a new public dataset of
42K entity pairs and expressions (referring to one entity in the pair), and
develop models for the disambiguation problem. Consisting of indirect referring
expressions across three domains, our corpus enables for the first time the
study of how language models can be adapted to this task. We find they achieve
82%-87% accuracy in realistic settings, which, while reasonable, also invites
further advances.
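The disambiguation task described in the abstract can be illustrated with a minimal sketch. This is not the paper's model (which adapts language models): a simple bag-of-words overlap score stands in for the learned scorer, and the entity descriptions below are hypothetical examples, assumed purely for illustration.

```python
# Illustrative sketch of indirect referring-expression resolution:
# given two candidate entities and an indirect expression, pick the
# entity whose description best matches the expression. A real system
# would score candidates with a fine-tuned language model; a lexical
# overlap score is used here only to make the task concrete.

from collections import Counter

def score(expression: str, description: str) -> float:
    """Fraction of expression tokens that also occur in the description."""
    expr = Counter(expression.lower().split())
    desc = Counter(description.lower().split())
    overlap = sum((expr & desc).values())  # multiset intersection
    return overlap / max(sum(expr.values()), 1)

def resolve(expression: str, candidates: dict[str, str]) -> str:
    """Return the candidate entity name whose description scores highest."""
    return max(candidates, key=lambda name: score(expression, candidates[name]))

candidates = {
    "Simnel cake": "a fruit cake with layers of marzipan, eaten at Easter",
    "Pandan cake": "a light green sponge cake flavored with pandan leaf juice",
}
print(resolve("let's make the green one", candidates))  # → Pandan cake
```

A lexical baseline like this fails whenever the link between expression and entity requires world knowledge (e.g. "the one from Sri Lanka"), which is exactly the gap the AltEntities dataset is built to study.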
Related papers
- OLaLa: Ontology Matching with Large Language Models (2023-11-07)
  Ontology matching is a challenging task where information in natural language is one of the most important signals to process. With the rise of large language models, this knowledge can be incorporated more effectively into the matching pipeline. We show that with only a handful of examples and a well-designed prompt, it is possible to achieve results on par with supervised matching systems.
- Light Coreference Resolution for Russian with Hierarchical Discourse Features (2023-06-02)
  We propose a new approach that incorporates rhetorical information into neural coreference resolution models. We implement an end-to-end span-based coreference resolver using LUKE, a partially fine-tuned multilingual entity-aware language model. Our best model, which employs rhetorical distance between mentions, ranked 1st on the development set (74.6% F1) and 2nd on the test set (73.3% F1) of the Shared Task.
- ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Spotting (2022-11-19)
  We argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. We propose ABINet++, an autonomous, bidirectional and iterative model for scene text spotting.
- The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding (2022-09-16)
  We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation. Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
- Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion (2022-04-19)
  We study the effect on neural language model (LM) performance across nine conversion methods and five languages. On average, the performance of our best model represents a 19% increase in accuracy over the worst choice across all languages. Our experiments highlight the importance of choosing the right tree formalism and provide insights into making an informed decision.
- Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition (2021-03-11)
  Linguistic knowledge is of great benefit to scene text recognition, but how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. We propose ABINet, an autonomous, bidirectional and iterative model for scene text recognition.
- Unnatural Language Inference (2020-12-30)
  We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words. Our findings call into question the idea that our natural language understanding models, and the tasks used to measure their progress, genuinely require a human-like understanding of syntax.
- Infusing Finetuning with Semantic Dependencies (2020-12-10)
  We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
- A Simple Joint Model for Improved Contextual Neural Lemmatization (2019-04-04)
  We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages. Our paper describes the model as well as the training and decoding procedures.
This list is automatically generated from the titles and abstracts of the papers listed on this site.