Did they answer? Subjective acts and intents in conversational discourse
- URL: http://arxiv.org/abs/2104.04470v1
- Date: Fri, 9 Apr 2021 16:34:19 GMT
- Title: Did they answer? Subjective acts and intents in conversational discourse
- Authors: Elisa Ferracane, Greg Durrett, Junyi Jessy Li and Katrin Erk
- Abstract summary: We present the first discourse dataset with multiple and subjective interpretations of English conversation.
We show disagreements are nuanced and require a deeper understanding of the different contextual factors.
- Score: 48.63528550837949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discourse signals are often implicit, leaving it up to the interpreter to
draw the required inferences. At the same time, discourse is embedded in a
social context, meaning that interpreters apply their own assumptions and
beliefs when resolving these inferences, leading to multiple, valid
interpretations. However, current discourse data and frameworks ignore the
social aspect, expecting only a single ground truth. We present the first
discourse dataset with multiple and subjective interpretations of English
conversation in the form of perceived conversation acts and intents. We
carefully analyze our dataset and create computational models to (1) confirm
our hypothesis that taking into account the bias of the interpreters leads to
better predictions of the interpretations, and (2) show that disagreements are
nuanced and require a deeper understanding of the different contextual factors.
We share our dataset and code at http://github.com/elisaF/subjective_discourse.
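The abstract's central modeling claim, that conditioning on the individual interpreter improves prediction of their interpretation, can be illustrated with a toy classifier that adds a learned per-annotator bias term to shared weights. This is a hypothetical sketch for illustration only, not the paper's actual model; all names and the training setup here are invented:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

class BiasAwareClassifier:
    """Toy linear classifier: shared weights W plus a learned
    per-annotator bias vector, so the same utterance can yield
    different predicted labels for different interpreters."""

    def __init__(self, n_features, n_labels, n_annotators, lr=0.1):
        self.W = np.zeros((n_labels, n_features))
        self.b = np.zeros((n_annotators, n_labels))  # annotator-specific biases
        self.lr = lr

    def predict_proba(self, x, annotator):
        # Logits combine shared evidence with this annotator's tendencies.
        return softmax(self.W @ x + self.b[annotator])

    def update(self, x, annotator, label):
        # One SGD step on the cross-entropy loss.
        p = self.predict_proba(x, annotator)
        p[label] -= 1.0  # gradient of cross-entropy w.r.t. the logits
        self.W -= self.lr * np.outer(p, x)
        self.b[annotator] -= self.lr * p
```

Trained on examples where two annotators label the same input differently, the annotator biases diverge and the model learns to predict each interpreter's own label, which is the intuition behind accounting for interpreter bias rather than forcing a single ground truth.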
Related papers
- Interpretation modeling: Social grounding of sentences by reasoning over
their implicit moral judgments [24.133419857271505]
Single gold-standard interpretations rarely exist, challenging conventional assumptions in natural language processing.
This work introduces the interpretation modeling (IM) task which involves modeling several interpretations of a sentence's underlying semantics.
A first-of-its-kind IM dataset is curated to support experiments and analyses.
arXiv Detail & Related papers (2023-11-27T07:50:55Z)
- Contrastive Learning for Inference in Dialogue [56.20733835058695]
Inference, especially inference derived from inductive processes, is a crucial component of conversation.
Recent large language models show remarkable advances in inference tasks.
But their performance in inductive reasoning, where not all information is present in the context, lags far behind their performance in deductive reasoning.
arXiv Detail & Related papers (2023-10-19T04:49:36Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Construction and Evaluation of a Self-Attention Model for Semantic Understanding of Sentence-Final Particles [0.0]
Sentence-final particles serve an essential role in spoken Japanese because they express the speaker's mental attitudes toward a proposition and/or an interlocutor.
This paper proposes a self-attention model that takes various subjective senses in addition to language and images as input and learns the relationship between words and subjective senses.
arXiv Detail & Related papers (2022-10-01T13:54:54Z)
- NOPE: A Corpus of Naturally-Occurring Presuppositions in English [33.69537711677911]
We introduce the Naturally-Occurring Presuppositions in English (NOPE) Corpus.
We investigate the context-sensitivity of 10 different types of presupposition triggers.
We evaluate machine learning models' ability to predict human inferences.
arXiv Detail & Related papers (2021-09-14T22:03:23Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.
arXiv Detail & Related papers (2021-04-17T17:46:00Z)
- An Information-theoretic Progressive Framework for Interpretation [0.0]
This paper proposes an information-theoretic progressive framework to synthesize interpretation.
We build the framework with an information map splitting idea and implement it with the variational information bottleneck technique.
The framework is shown to be able to split information maps and synthesize interpretation in the form of meta-information.
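For context, the variational information bottleneck named above is, in its standard form, a trade-off between keeping a representation z predictive of the target y and compressing away everything else about the input x. The paper's exact objective may differ; this is the generic formulation:

```latex
\mathcal{L}_{\mathrm{VIB}}
  = \mathbb{E}_{p(z \mid x)}\!\left[-\log q(y \mid z)\right]
  + \beta \, \mathrm{KL}\!\left(p(z \mid x) \,\|\, r(z)\right)
```

Here p(z|x) is the encoder, q(y|z) the decoder, r(z) a fixed prior over representations, and β controls how aggressively information about x is discarded.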
arXiv Detail & Related papers (2021-01-08T06:59:48Z)
- How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability in Context [17.4919556893898]
We compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context.
Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings.
In relevant contexts we observe a discourse coherence effect which uniformly raises acceptability.
arXiv Detail & Related papers (2020-04-02T08:58:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.