Composing or Not Composing? Towards Distributional Construction Grammars
- URL: http://arxiv.org/abs/2412.07419v1
- Date: Tue, 10 Dec 2024 11:17:02 GMT
- Title: Composing or Not Composing? Towards Distributional Construction Grammars
- Authors: Philippe Blache, Emmanuele Chersoni, Giulia Rambelli, Alessandro Lenci
- Abstract summary: Building the meaning of a linguistic utterance is classically described as an incremental, step-by-step, compositional process, yet non-compositional phenomena are also at work.
A framework bringing together both approaches is therefore necessary.
We present an approach based on Construction Grammars, extending this framework to account for these different mechanisms.
- Score: 47.636049672406145
- Abstract: The mechanisms of comprehension during language processing remain an open question. Classically, building the meaning of a linguistic utterance is said to be incremental, step-by-step, and based on a compositional process. However, a long line of work has shown that non-compositional phenomena are also at work. A framework bringing together both approaches is therefore necessary. In this paper, we present an approach based on Construction Grammars, extending this framework to account for these different mechanisms. We first propose a formal definition of this framework by extending the feature-structure representation proposed in Sign-Based Construction Grammar. We then present a general representation of meaning based on the interaction of constructions, frames, and events. This framework opens the door to a processing mechanism that builds meaning through the notion of activation, evaluated in terms of similarity and unification. This new approach integrates features from distributional semantics into the constructionist framework, leading to what we call Distributional Construction Grammars.
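To make the activation mechanism concrete, the sketch below is a minimal, hypothetical reading of this idea, not the authors' implementation: each construction pairs a feature structure with a distributional vector, activation is scored as cosine similarity between the input's vector and the stored vector, and a candidate is retained only if its feature structure also unifies with the input's. The `Construction` class, the flat attribute-value dictionaries, and the `threshold` parameter are all illustrative assumptions; actual SBCG feature structures are typed and recursive.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two distributional vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else 0.0

def unify(fs1, fs2):
    """Naive unification of two flat attribute-value structures:
    merge them, failing (returning None) on any value clash."""
    merged = dict(fs1)
    for attr, val in fs2.items():
        if attr in merged and merged[attr] != val:
            return None  # incompatible values: unification fails
        merged[attr] = val
    return merged

class Construction:
    """A stored form-meaning pairing: an attribute-value structure
    plus a distributional vector for its typical usage contexts."""
    def __init__(self, name, features, vector):
        self.name = name
        self.features = features
        self.vector = np.asarray(vector, dtype=float)

def activate(constructions, input_features, input_vector, threshold=0.5):
    """Rank constructions by distributional similarity to the input,
    keeping only those above threshold whose feature structures
    also unify with the input's feature structure."""
    results = []
    for cxn in constructions:
        score = cosine(cxn.vector, np.asarray(input_vector, dtype=float))
        if score < threshold:
            continue
        unified = unify(cxn.features, input_features)
        if unified is not None:
            results.append((cxn.name, score, unified))
    # Most strongly activated construction first
    return sorted(results, key=lambda r: -r[1])

# Toy usage: two candidate constructions competing for an input.
lexicon = [
    Construction("ditransitive", {"arg_structure": "V NP NP"}, [0.9, 0.1, 0.3]),
    Construction("caused_motion", {"arg_structure": "V NP PP"}, [0.2, 0.8, 0.5]),
]
print(activate(lexicon, {"arg_structure": "V NP NP"}, [0.85, 0.2, 0.25]))
```

In this toy run only "ditransitive" survives: "caused_motion" is distributionally distant and its feature structure clashes with the input, illustrating how similarity and unification jointly gate activation.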
Related papers
- Unsupervised Mutual Learning of Discourse Parsing and Topic Segmentation in Dialogue [37.618612723025784]
In dialogue systems, discourse plays a crucial role in managing conversational focus and coordinating interactions.
It consists of two key structures: rhetorical structure and topic structure.
We introduce a unified representation that integrates rhetorical and topic structures, ensuring semantic consistency between them.
We propose an unsupervised mutual learning framework (UMLF) that jointly models rhetorical and topic structures, allowing them to mutually reinforce each other without requiring additional annotations.
arXiv Detail & Related papers (2024-05-30T08:10:50Z)
- Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- A General Framework for the Representation of Function and Affordance: A Cognitive, Causal, and Grounded Approach, and a Step Toward AGI [5.609443065827994]
A general framework dealing with functionality would represent a major step toward achieving Artificial General Intelligence.
The framework is developed based on an extension of the general language meaning representational framework called conceptual dependency.
arXiv Detail & Related papers (2022-06-02T08:25:55Z)
- Towards Unification of Discourse Annotation Frameworks [0.0]
We will investigate the systematic relations between different frameworks and devise methods of unifying the frameworks.
Although the issue of framework unification has been a topic of discussion for a long time, there is currently no comprehensive approach.
We plan to use automatic means for the unification task and evaluate the result with structural complexity and downstream tasks.
arXiv Detail & Related papers (2022-04-16T11:34:00Z)
- Compositional Generalization Requires Compositional Parsers [69.77216620997305]
We compare sequence-to-sequence models and models guided by compositional principles on the recent COGS corpus.
We show structural generalization is a key measure of compositional generalization and requires models that are aware of complex structure.
arXiv Detail & Related papers (2022-02-24T07:36:35Z)
- Transition-based Bubble Parsing: Improvements on Coordination Structure Prediction [18.71574180551552]
We introduce a transition system and neural models for parsing bubble-enhanced structures.
Experimental results on the English Penn Treebank and the English GENIA corpus show that our models outperform previous state-of-the-art approaches on coordination structure prediction.
arXiv Detail & Related papers (2021-07-14T18:00:05Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
- Structured Attention for Unsupervised Dialogue Structure Induction [110.12561786644122]
We propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion.
Compared to a vanilla VRNN, structured attention enables a model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias.
arXiv Detail & Related papers (2020-09-17T23:07:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.