Construction Repetition Reduces Information Rate in Dialogue
- URL: http://arxiv.org/abs/2210.08321v1
- Date: Sat, 15 Oct 2022 15:44:00 GMT
- Title: Construction Repetition Reduces Information Rate in Dialogue
- Authors: Mario Giulianelli, Arabella Sinclair, Raquel Fernández
- Abstract summary: We study the repetition of lexicalised constructions in English open-domain spoken dialogues.
We observe that construction usage lowers the information content of utterances.
- Score: 2.1104930506758275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speakers repeat constructions frequently in dialogue. Due to their peculiar
information-theoretic properties, repetitions can be thought of as a strategy
for cost-effective communication. In this study, we focus on the repetition of
lexicalised constructions -- i.e., recurring multi-word units -- in English
open-domain spoken dialogues. We hypothesise that speakers use construction
repetition to mitigate information rate, leading to an overall decrease in
utterance information content over the course of a dialogue. We conduct a
quantitative analysis, measuring the information content of constructions and
that of their containing utterances, estimating information content with an
adaptive neural language model. We observe that construction usage lowers the
information content of utterances. This facilitating effect (i) increases
throughout dialogues, (ii) is boosted by repetition, (iii) grows as a function
of repetition frequency and density, and (iv) is stronger for repetitions of
referential constructions.
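The abstract describes estimating the information content (surprisal) of constructions and their containing utterances with an adaptive neural language model. The sketch below is not the authors' code: it assumes an off-the-shelf GPT-2 from the HuggingFace transformers library (without the adaptation step), and the function name, context format, and bits-per-token measure are illustrative choices, shown only to make the surprisal computation concrete.
```python
# Minimal sketch (assumption: GPT-2 as the surprisal estimator; the paper uses
# an *adaptive* neural LM, i.e. one updated on the unfolding dialogue).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def utterance_information_content(utterance: str, context: str = "") -> float:
    """Mean per-token surprisal (in bits) of `utterance` given the preceding dialogue `context`."""
    context_ids = tokenizer.encode(context) if context else []
    utterance_ids = tokenizer.encode(utterance)
    input_ids = torch.tensor([context_ids + utterance_ids])
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    surprisals = []
    offset = len(context_ids)
    for i, token_id in enumerate(utterance_ids):
        pos = offset + i - 1                      # logits at position t predict token t+1
        if pos < 0:
            continue                              # first token with no context: no conditional estimate
        log_p = log_probs[0, pos, token_id]
        surprisals.append(-log_p / torch.log(torch.tensor(2.0)))  # nats -> bits
    if not surprisals:
        return float("nan")
    return float(torch.stack(surprisals).mean())

# Example: information content of an utterance containing a recurring multi-word unit
print(utterance_information_content("you know what I mean",
                                     context="A: That was a long day . B:"))
```
In the paper's setup this per-utterance quantity is what is tracked over the course of a dialogue, comparing utterances that do and do not contain (repeated) constructions.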
Related papers
- SPECTRUM: Speaker-Enhanced Pre-Training for Long Dialogue Summarization [48.284512017469524]
Multi-turn dialogues are characterized by their extended length and by turn-taking between speakers.
Traditional language models often overlook the distinct features of these dialogues by treating them as regular text.
We propose a speaker-enhanced pre-training method for long dialogue summarization.
arXiv Detail & Related papers (2024-01-31T04:50:00Z)
- Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information [79.06082391992545]
We propose an Efficient Context-aware model (ECASE) that fully exploits contextual information.
We introduce a sequence-attention module and distance-weighted similarity loss to aggregate contextual information and argumentative information.
Our experiments on five datasets from various domains demonstrate that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-08T08:47:10Z)
- Improving Speaker Diarization using Semantic Information: Joint Pairwise Constraints Propagation [53.01238689626378]
We propose a novel approach to leverage semantic information in speaker diarization systems.
We introduce spoken language understanding modules to extract speaker-related semantic information.
We present a novel framework to integrate these constraints into the speaker diarization pipeline.
arXiv Detail & Related papers (2023-09-19T09:13:30Z)
- Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z)
- Learning to Memorize Entailment and Discourse Relations for Persona-Consistent Dialogues [8.652711997920463]
Existing works have improved the performance of dialogue systems by intentionally learning interlocutor personas with sophisticated network structures.
This study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks.
arXiv Detail & Related papers (2023-01-12T08:37:00Z)
- Who says like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning [2.251583286448503]
We focus on the association between utterances from individual speakers and unique syntactic structures.
Speakers have unique textual styles that can carry linguistic information, much like a voiceprint.
We employ multi-task learning of both syntax-aware information and dialogue summarization.
arXiv Detail & Related papers (2021-09-29T05:30:39Z)
- Speaker-Oriented Latent Structures for Dialogue-Based Relation Extraction [10.381257436462116]
We introduce SOLS, a novel model which can explicitly induce speaker-oriented latent structures for better DiaRE.
Specifically, we learn latent structures to capture the relationships among tokens beyond the utterance boundaries.
During the learning process, our speaker-specific regularization method progressively highlights speaker-related key clues and erases the irrelevant ones.
arXiv Detail & Related papers (2021-09-11T04:24:51Z)
- Structured Attention for Unsupervised Dialogue Structure Induction [110.12561786644122]
We propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion.
Compared to a vanilla VRNN, structured attention enables a model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias.
arXiv Detail & Related papers (2020-09-17T23:07:03Z)
- Ranking Enhanced Dialogue Generation [77.8321855074999]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation.
Previous works usually employ various neural network architectures to model the history.
This paper proposes a Ranking Enhanced Dialogue generation framework.
arXiv Detail & Related papers (2020-08-13T01:49:56Z)
- Topic Propagation in Conversational Search [0.0]
In a conversational context, a user expresses her multi-faceted information need as a sequence of natural-language questions.
We adopt the 2019 TREC Conversational Assistant Track (CAsT) framework to experiment with a modular architecture performing: (i) topic-aware utterance rewriting, (ii) retrieval of candidate passages for the rewritten utterances, and (iii) neural-based re-ranking of candidate passages.
arXiv Detail & Related papers (2020-04-29T10:06:00Z)
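The preceding entry describes a three-stage conversational search pipeline: (i) topic-aware utterance rewriting, (ii) candidate passage retrieval, and (iii) neural re-ranking. The sketch below is a hypothetical skeleton of such a pipeline, not the paper's implementation; every function here (rewrite_utterance, bm25_retrieve, neural_rerank) is an illustrative placeholder.
```python
# Hypothetical skeleton of a CAsT-style conversational search pipeline.
# All components are placeholders standing in for the real modules.
from typing import List, Tuple

def rewrite_utterance(utterance: str, history: List[str]) -> str:
    """(i) Topic-aware rewriting: expand the utterance using the conversation history."""
    # Placeholder: a real system might use a seq2seq rewriter or coreference resolution.
    return utterance if not history else f"{utterance} (topic: {history[-1]})"

def bm25_retrieve(query: str, k: int = 100) -> List[Tuple[str, float]]:
    """(ii) First-stage retrieval of candidate passages for the rewritten utterance."""
    # Placeholder: would normally query an inverted index (e.g., BM25 over a passage corpus).
    return [(f"passage-{i}", 1.0 / (i + 1)) for i in range(k)]

def neural_rerank(query: str, candidates: List[Tuple[str, float]]) -> List[Tuple[str, float]]:
    """(iii) Neural re-ranking of candidate passages."""
    # Placeholder: a cross-encoder would rescore each (query, passage) pair.
    return sorted(candidates, key=lambda c: c[1], reverse=True)

def answer_turn(utterance: str, history: List[str]) -> List[Tuple[str, float]]:
    rewritten = rewrite_utterance(utterance, history)
    return neural_rerank(rewritten, bm25_retrieve(rewritten))

history: List[str] = []
for turn in ["What is construction repetition?", "Why does it lower information rate?"]:
    ranked = answer_turn(turn, history)
    history.append(turn)
    print(turn, "->", ranked[0])
```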