How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability
in Context
- URL: http://arxiv.org/abs/2004.00881v1
- Date: Thu, 2 Apr 2020 08:58:44 GMT
- Title: How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability
in Context
- Authors: Jey Han Lau, Carlos S. Armendariz, Shalom Lappin, Matthew Purver,
Chang Shu
- Abstract summary: We compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context.
Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings.
In relevant contexts we observe a discourse coherence effect which uniformly raises acceptability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the influence of context on sentence acceptability. First we compare
the acceptability ratings of sentences judged in isolation, with a relevant
context, and with an irrelevant context. Our results show that context induces
a cognitive load for humans, which compresses the distribution of ratings.
Moreover, in relevant contexts we observe a discourse coherence effect which
uniformly raises acceptability. Next, we test unidirectional and bidirectional
language models in their ability to predict acceptability ratings. The
bidirectional models show very promising results, with the best model achieving
a new state-of-the-art for unsupervised acceptability prediction. The two sets
of experiments provide insights into the cognitive aspects of sentence
processing and central issues in the computational modelling of text and
discourse.
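Unsupervised acceptability prediction of this kind typically converts a language model's sentence log-probability into a score normalized for length and word frequency. As a minimal sketch of two such normalizations, MeanLP and SLOR, which earlier work by these authors uses (whether these exact measures appear in this paper is an assumption, and the per-token log-probabilities below are purely hypothetical):

```python
def mean_lp(token_logprobs):
    """MeanLP: total sentence log-probability divided by sentence length."""
    return sum(token_logprobs) / len(token_logprobs)

def slor(token_logprobs, unigram_logprobs):
    """SLOR (Syntactic Log-Odds Ratio): subtract the unigram log-probability
    to control for word frequency, then normalize by sentence length."""
    assert len(token_logprobs) == len(unigram_logprobs)
    return (sum(token_logprobs) - sum(unigram_logprobs)) / len(token_logprobs)

# Hypothetical per-token log-probabilities for a 4-token sentence.
lm = [-2.0, -1.5, -3.0, -2.5]    # language-model log-probs
uni = [-6.0, -5.0, -7.0, -6.5]   # unigram (frequency) log-probs

print(round(mean_lp(lm), 4))     # -2.25
print(round(slor(lm, uni), 4))   # 3.875
```

A higher score indicates a more acceptable sentence; SLOR's frequency correction prevents sentences of common words from being rated acceptable merely because their words are frequent.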
Related papers
- Quantifying the Plausibility of Context Reliance in Neural Machine Translation [25.29330352252055]
We introduce Plausibility Evaluation of Context Reliance (PECoRe)
PECoRe is an end-to-end interpretability framework designed to quantify context usage in language models' generations.
We use PECoRe to quantify the plausibility of context reliance in context-aware machine translation models.
arXiv Detail & Related papers (2023-10-02T13:26:43Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Evaluating context-invariance in unsupervised speech representations [15.67794428589585]
Current benchmarks do not measure context-invariance.
We develop a new version of the ZeroSpeech ABX benchmark that measures context-invariance.
We demonstrate that the context-independence of representations is predictive of the stability of word-level representations.
arXiv Detail & Related papers (2022-10-27T21:15:49Z)
- Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations [2.543865489517869]
This work motivates textual counterfactuals by laying the ground for a novel evaluation scheme inspired by the faithfulness of explanations.
Experiments on sentiment analysis data show that the connectedness of counterfactuals to their original counterparts is not obvious in either model.
arXiv Detail & Related papers (2022-10-17T09:50:02Z)
- Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased, where the first words in different contexts tend to be more similar.
arXiv Detail & Related papers (2022-08-20T12:27:25Z)
- Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism that can unify semantic meaning at hybrid granularities in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real-world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z)
- Did they answer? Subjective acts and intents in conversational discourse [48.63528550837949]
We present the first discourse dataset with multiple and subjective interpretations of English conversation.
We show disagreements are nuanced and require a deeper understanding of the different contextual factors.
arXiv Detail & Related papers (2021-04-09T16:34:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.