Paraphrasing, textual entailment, and semantic similarity above word level
- URL: http://arxiv.org/abs/2208.05387v1
- Date: Wed, 10 Aug 2022 15:07:49 GMT
- Title: Paraphrasing, textual entailment, and semantic similarity above word level
- Authors: Venelin Kovatchev
- Abstract summary: This dissertation explores the linguistic and computational aspects of the meaning relations that can hold between two or more complex linguistic expressions.
In particular, it focuses on Paraphrasing, Textual Entailment, Contradiction, and Semantic Similarity.
- Score: 2.411299055446423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This dissertation explores the linguistic and computational aspects of the
meaning relations that can hold between two or more complex linguistic
expressions (phrases, clauses, sentences, paragraphs). In particular, it
focuses on Paraphrasing, Textual Entailment, Contradiction, and Semantic
Similarity.
In Part I: "Similarity at the Level of Words and Phrases", I study the
Distributional Hypothesis (DH) and explore several different methodologies for
quantifying semantic similarity at the levels of words and short phrases.
In Part II: "Paraphrase Typology and Paraphrase Identification", I focus on
the meaning relation of paraphrasing and the empirical task of automated
Paraphrase Identification (PI).
In Part III: "Paraphrasing, Textual Entailment, and Semantic Similarity", I
present a novel direction in the research on textual meaning relations,
resulting from joint research carried out on paraphrasing, textual
entailment, contradiction, and semantic similarity.
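As a minimal illustration of the word- and phrase-level similarity quantification discussed in Part I, the sketch below compares two short phrases using cosine similarity over distributional vectors. The toy vectors and the simple averaging composition are assumptions made here for illustration only, not the methodology used in the dissertation, which explores several different representations and similarity measures.

```python
# Minimal sketch: semantic similarity under the Distributional Hypothesis.
# Words/phrases are represented as vectors and compared with cosine similarity.
# The vectors below are toy placeholders, not vectors from the dissertation.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two distributional vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def phrase_vector(word_vectors) -> np.ndarray:
    """A naive compositional phrase representation: the average of word vectors."""
    return np.mean(word_vectors, axis=0)

# Toy distributional vectors; a real setup would use vectors learned from corpora.
vec = {
    "buy":      np.array([0.7, 0.2, 0.4]),
    "purchase": np.array([0.8, 0.1, 0.3]),
    "a":        np.array([0.1, 0.1, 0.1]),
    "car":      np.array([0.2, 0.9, 0.5]),
    "vehicle":  np.array([0.3, 0.8, 0.6]),
}

p1 = phrase_vector([vec["buy"], vec["a"], vec["car"]])
p2 = phrase_vector([vec["purchase"], vec["a"], vec["vehicle"]])
print(f"word similarity:   {cosine_similarity(vec['buy'], vec['purchase']):.3f}")
print(f"phrase similarity: {cosine_similarity(p1, p2):.3f}")
```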
Related papers
- Task-Oriented Paraphrase Analytics [34.95500212742163]
Since paraphrasing is an ill-defined task, the term "paraphrasing" covers text transformation tasks with different characteristics.
We propose a taxonomy to organize the 25 identified paraphrasing (sub-)tasks.
arXiv Detail & Related papers (2024-03-26T10:14:12Z)
- Neighboring Words Affect Human Interpretation of Saliency Explanations [65.29015910991261]
Word-level saliency explanations are often used to communicate feature-attribution in text-based models.
Recent studies found that superficial factors such as word length can distort human interpretation of the communicated saliency scores.
We investigate how the marking of a word's neighboring words affects the explainee's perception of the word's importance in the context of a saliency explanation.
arXiv Detail & Related papers (2023-05-04T09:50:25Z)
- PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition [63.51569687229681]
We argue for the need to recognize the textual entailment relation of each proposition in a sentence individually.
We propose PropSegmEnt, a corpus of over 45K propositions annotated by expert human raters.
Our dataset structure resembles the tasks of (1) segmenting sentences within a document to the set of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topically-aligned document.
arXiv Detail & Related papers (2022-12-21T04:03:33Z)
- Textual Entailment Recognition with Semantic Features from Empirical Text Representation [60.31047947815282]
A text entails a hypothesis if and only if the truth of the hypothesis follows from the text.
In this paper, we propose a novel approach to identifying the textual entailment relationship between text and hypothesis.
We employ an element-wise Manhattan distance vector as a feature that can identify the semantic entailment relationship between the text-hypothesis pair (a schematic sketch appears after this list).
arXiv Detail & Related papers (2022-10-18T10:03:51Z)
- Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased, where the first words in different contexts tend to be more similar.
arXiv Detail & Related papers (2022-08-20T12:27:25Z)
- The Causal Structure of Semantic Ambiguities [0.0]
We identify two features: (1) joint plausibility degrees of different possible interpretations, and (2) causal structures according to which certain words play a more substantial role in the processes.
We applied this theory to a dataset of ambiguous phrases extracted from the Psycholinguistics literature and their human plausibility judgements collected by us.
arXiv Detail & Related papers (2022-06-14T12:56:34Z)
- An Informational Space Based Semantic Analysis for Scientific Texts [62.997667081978825]
This paper introduces computational methods for semantic analysis and for quantifying the meaning of short scientific texts.
The representation of scientific-specific meaning is standardised by replacing the situation representations, rather than psychological properties.
The research in this paper lays the basis for a geometric representation of the meaning of texts.
arXiv Detail & Related papers (2022-05-31T11:19:32Z)
- Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism that can unify semantic meaning at hybrid granularities in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z)
- Patterns of Lexical Ambiguity in Contextualised Language Models [9.747449805791092]
We introduce an extended, human-annotated dataset of graded word sense similarity and co-predication.
Both types of human judgements indicate that the similarity of polysemic interpretations falls in a continuum between identity of meaning and homonymy.
Our dataset appears to capture a substantial part of the complexity of lexical ambiguity, and can provide a realistic test bed for contextualised embeddings.
arXiv Detail & Related papers (2021-09-27T13:11:44Z)
- Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy [0.0]
We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations.
Experiments are performed in Galician, Portuguese, English, and Spanish.
arXiv Detail & Related papers (2021-06-25T10:54:23Z)
- A computational model implementing subjectivity with the 'Room Theory'. The case of detecting Emotion from Text [68.8204255655161]
This work introduces a new method to consider subjectivity and general context dependency in text analysis.
By using a similarity measure between words, we are able to extract the relative relevance of the elements in the benchmark.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text.
arXiv Detail & Related papers (2020-05-12T21:26:04Z)
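For the element-wise Manhattan distance feature mentioned above (in "Textual Entailment Recognition with Semantic Features from Empirical Text Representation"), the following is a schematic sketch only: the placeholder encoder, the toy pairs, and the logistic-regression classifier are assumptions made here for illustration, not the paper's actual representations or model.

```python
# Schematic sketch: element-wise Manhattan (absolute difference) vector between
# a text embedding and a hypothesis embedding, used as the feature for a simple
# entailment classifier. All components below are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(sentence: str, dim: int = 16) -> np.ndarray:
    """Placeholder sentence encoder (deterministic within one run); a real
    system would use embeddings from a trained model instead."""
    seed = abs(hash(sentence)) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def entailment_feature(text: str, hypothesis: str) -> np.ndarray:
    """Element-wise Manhattan distance vector for the text-hypothesis pair."""
    return np.abs(encode(text) - encode(hypothesis))

# Toy (text, hypothesis, label) pairs; label 1 = entailment, 0 = no entailment.
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument.", 1),
    ("A man is playing a guitar.", "Nobody is making music.", 0),
    ("Two dogs run through a field.", "Some animals are outdoors.", 1),
    ("Two dogs run through a field.", "The dogs are sleeping inside.", 0),
]
X = np.stack([entailment_feature(t, h) for t, h, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))  # sanity check on the toy training pairs
```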