Representing Inferences and their Lexicalization
- URL: http://arxiv.org/abs/2112.07711v1
- Date: Tue, 14 Dec 2021 19:23:43 GMT
- Title: Representing Inferences and their Lexicalization
- Authors: David McDonald, James Pustejovsky
- Abstract summary: The meaning of a word is taken to be the entities, predications, presuppositions, and potential inferences that it adds to an ongoing situation.
As words compose, the minimal model in the situation evolves to limit and direct inference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We have recently begun a project to develop a more effective and efficient
way to marshal inferences from background knowledge to facilitate deep natural
language understanding. The meaning of a word is taken to be the entities,
predications, presuppositions, and potential inferences that it adds to an
ongoing situation. As words compose, the minimal model in the situation evolves
to limit and direct inference. At this point we have developed our
computational architecture and implemented it on real text. Our focus has been
on proving the feasibility of our design.
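The abstract's core idea can be illustrated with a small sketch. This is not the authors' implementation; it is a hypothetical toy, with all names invented, showing how a word's lexical entry might contribute entities, predications, and presuppositions to a situation whose minimal model fires inferences as words compose:

```python
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """What a word adds to the ongoing situation (all fields hypothetical)."""
    word: str
    entities: set = field(default_factory=set)
    predications: set = field(default_factory=set)
    presuppositions: set = field(default_factory=set)
    inference_rules: list = field(default_factory=list)  # callables: Situation -> set of new facts

@dataclass
class Situation:
    """A minimal model that accumulates content as words compose."""
    entities: set = field(default_factory=set)
    facts: set = field(default_factory=set)

    def compose(self, entry: LexicalEntry) -> "Situation":
        # Each word's contribution extends the model...
        self.entities |= entry.entities
        self.facts |= entry.predications | entry.presuppositions
        # ...and the current model limits which inferences actually fire.
        for rule in entry.inference_rules:
            self.facts |= rule(self)
        return self

# Toy composition of "the dog barked"
dog = LexicalEntry("dog", entities={"dog1"}, predications={("dog", "dog1")})
barked = LexicalEntry(
    "barked",
    predications={("bark", "dog1")},
    presuppositions={("animate", "dog1")},
    inference_rules=[
        lambda s: {("made-sound", "dog1")} if ("bark", "dog1") in s.facts else set()
    ],
)

situation = Situation()
situation.compose(dog).compose(barked)
```

In this sketch the inference `("made-sound", "dog1")` is only derived because the evolving model already contains the barking predication, mirroring the paper's claim that the minimal model both limits and directs inference.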
Related papers
- Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity [0.0]
Large Language Models (LLMs) are increasingly sophisticated and ubiquitous in natural language processing (NLP) applications.
This paper presents a novel framework for contextual grounding in textual models, with a particular emphasis on the Context Representation stage.
Our findings have significant implications for the deployment of LLMs in sensitive domains such as healthcare, legal systems, and social services.
arXiv Detail & Related papers (2024-08-07T18:12:02Z)
- Predictive Simultaneous Interpretation: Harnessing Large Language Models for Democratizing Real-Time Multilingual Communication [0.0]
We present a novel algorithm that generates real-time translations by predicting speaker utterances and expanding multiple possibilities in a tree-like structure.
Our theoretical analysis, supported by illustrative examples, suggests that this approach could lead to more natural and fluent translations with minimal latency.
arXiv Detail & Related papers (2024-07-02T13:18:28Z)
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
arXiv Detail & Related papers (2024-02-22T18:55:17Z)
- Punctuation Restoration Improves Structure Understanding without Supervision [6.4736137270915215]
We show that punctuation restoration as a learning objective improves in- and out-of-distribution performance on structure-related tasks.
Punctuation restoration is an effective learning objective that can improve structure understanding and yield more robust, structure-aware representations of natural language.
arXiv Detail & Related papers (2024-02-13T11:22:52Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Language with Vision: a Study on Grounded Word and Sentence Embeddings [6.231247903840833]
Grounding language in vision is an active field of research seeking to construct cognitively plausible word and sentence representations.
The present study proposes a computational grounding model for pre-trained word embeddings.
Our model effectively balances the interplay between language and vision by aligning textual embeddings with visual information.
arXiv Detail & Related papers (2022-06-17T15:04:05Z)
- Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA in the pre-training phase to obtain a deeper understanding of the entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- (Re)construing Meaning in NLP [15.37817898307963]
We show that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed.
We show how insights from construal could inform theoretical and practical work in NLP.
arXiv Detail & Related papers (2020-05-18T21:21:34Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to meet such a motivation.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.