Predicting Text Preference Via Structured Comparative Reasoning
- URL: http://arxiv.org/abs/2311.08390v2
- Date: Mon, 1 Jul 2024 16:57:56 GMT
- Title: Predicting Text Preference Via Structured Comparative Reasoning
- Authors: Jing Nathan Yan, Tianqi Liu, Justin T Chiu, Jiaming Shen, Zhen Qin, Yue Yu, Yao Zhao, Charu Lakshmanan, Yair Kurzion, Alexander M. Rush, Jialu Liu, Michael Bendersky
- Abstract summary: We introduce SC, a prompting approach that predicts text preferences by generating structured intermediate comparisons.
We select consistent comparisons with a pairwise consistency comparator that ensures each aspect's comparisons clearly distinguish differences between texts.
Our comprehensive evaluations across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that SC equips LLMs to achieve state-of-the-art performance in text preference prediction.
- Score: 110.49560164568791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Comparative reasoning plays a crucial role in text preference prediction; however, large language models (LLMs) often demonstrate inconsistencies in their reasoning. While approaches like Chain-of-Thought improve accuracy in many other settings, they struggle to consistently distinguish the similarities and differences of complex texts. We introduce SC, a prompting approach that predicts text preferences by generating structured intermediate comparisons. SC begins by proposing aspects of comparison, followed by generating textual comparisons under each aspect. We select consistent comparisons with a pairwise consistency comparator that ensures each aspect's comparisons clearly distinguish differences between texts, significantly reducing hallucination and improving consistency. Our comprehensive evaluations across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that SC equips LLMs to achieve state-of-the-art performance in text preference prediction.
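To make the pipeline concrete, here is a minimal Python sketch of the structured comparison flow the abstract describes: propose aspects, sample comparisons per aspect, keep the most self-consistent one, then predict. The prompts, the `call_llm` stub, and the majority-vote consistency check are illustrative assumptions, not the authors' implementation.
```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError("plug in your LLM client here")

def propose_aspects(text_a: str, text_b: str, k: int = 5) -> list[str]:
    # Step 1: ask the model for k aspects along which to compare the texts.
    out = call_llm(
        f"List {k} aspects for comparing the two texts below, one per line.\n"
        f"Text A: {text_a}\nText B: {text_b}"
    )
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def compare_on_aspect(text_a: str, text_b: str, aspect: str, n_samples: int = 3) -> list[str]:
    # Step 2: sample several textual comparisons under a single aspect.
    prompt = (
        f"Compare Text A and Text B on '{aspect}'. Start with 'Text A' or 'Text B' "
        f"to name the better one, then explain.\nText A: {text_a}\nText B: {text_b}"
    )
    return [call_llm(prompt) for _ in range(n_samples)]

def most_consistent(comparisons: list[str]) -> str:
    # Step 3: keep a comparison that agrees with the majority verdict --
    # a toy proxy for the paper's pairwise consistency comparator.
    verdicts = ["A" if c.startswith("Text A") else "B" for c in comparisons]
    majority = max(set(verdicts), key=verdicts.count)
    return next(c for c, v in zip(comparisons, verdicts) if v == majority)

def predict_preference(text_a: str, text_b: str) -> str:
    # Step 4: aggregate the selected aspect-wise comparisons into a verdict.
    aspects = propose_aspects(text_a, text_b)
    selected = [most_consistent(compare_on_aspect(text_a, text_b, a)) for a in aspects]
    return call_llm(
        "Given these aspect-wise comparisons, answer 'A' or 'B':\n" + "\n".join(selected)
    ).strip()
```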
Related papers
- Composition-contrastive Learning for Sentence Embeddings [23.85590618900386]
This work contrasts a text with compositions of its phrasal constituents, and is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.
Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches.
arXiv Detail & Related papers (2023-07-14T14:39:35Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
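As a rough illustration of how the two ranking objectives named above might look, here is a hedged PyTorch sketch; the ListNet-style distillation, the symmetric-KL consistency term, and the temperature are assumptions rather than the paper's exact losses.
```python
import torch.nn.functional as F

def distillation_loss(teacher_sims, student_sims, tau=0.1):
    """Ranking distillation: match the student's softmax distribution over
    in-batch similarity scores to the teacher's (ListNet-style)."""
    p_teacher = F.softmax(teacher_sims / tau, dim=-1)
    log_p_student = F.log_softmax(student_sims / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

def consistency_loss(sims_view1, sims_view2, tau=0.1):
    """Ranking consistency: two dropout views of the same batch should induce
    the same similarity ranking (symmetric KL between their distributions)."""
    log_p1 = F.log_softmax(sims_view1 / tau, dim=-1)
    log_p2 = F.log_softmax(sims_view2 / tau, dim=-1)
    return 0.5 * (F.kl_div(log_p1, log_p2.exp(), reduction="batchmean")
                  + F.kl_div(log_p2, log_p1.exp(), reduction="batchmean"))
```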
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Towards Structure-aware Paraphrase Identification with Phrase Alignment Using Sentence Encoders [4.254099382808598]
We propose to combine sentence encoders with an alignment component by representing each sentence as a list of predicate-argument spans.
Empirical results show that the alignment component brings in both improved performance and interpretability for various sentence encoders.
arXiv Detail & Related papers (2022-10-11T09:52:52Z)
- Interpreting BERT-based Text Similarity via Activation and Saliency Maps [26.279593839644836]
We present an unsupervised technique for explaining paragraph similarities inferred by pre-trained BERT models.
By looking at a pair of paragraphs, our technique identifies important words that dictate each paragraph's semantics, matches between the words in both paragraphs, and retrieves the most important pairs that explain the similarity between the two.
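A simplified sketch of the matching-and-retrieval step described above might look as follows: score all cross-paragraph word pairs by contextual-embedding cosine and keep the strongest ones. The activation/saliency weighting that selects important words in the original technique is omitted here as a simplification.
```python
import torch
import torch.nn.functional as F

def top_explaining_pairs(emb_a, emb_b, words_a, words_b, k=5):
    """emb_a: (m, d) and emb_b: (n, d) contextual embeddings of the words.
    Returns the k cross-paragraph word pairs with the highest cosine match."""
    sims = F.normalize(emb_a, dim=-1) @ F.normalize(emb_b, dim=-1).T  # (m, n)
    flat = sims.flatten().topk(k)
    return [(words_a[i // sims.size(1)], words_b[i % sims.size(1)], s.item())
            for i, s in zip(flat.indices.tolist(), flat.values)]
```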
arXiv Detail & Related papers (2022-08-13T10:06:24Z)
- Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms the recent PPLM, GeDi, and DExperts methods on perplexity (PPL) and on sentiment accuracy measured by an external classifier of the generated texts.
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z)
- Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning [29.462788855992617]
We describe the sentence distance as the weighted sum of contextualized token distances on the basis of a transportation problem.
We then present the optimal transport-based distance measure, named RCMD; it identifies and leverages semantically-aligned token pairs.
Finally, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs.
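As an illustration of a transport-based token distance in this spirit, here is a minimal PyTorch sketch; the relaxation to nearest-neighbor transport and the uniform token weights are simplifying assumptions, not the paper's exact RCMD.
```python
import torch
import torch.nn.functional as F

def relaxed_transport_distance(tokens_a: torch.Tensor, tokens_b: torch.Tensor) -> torch.Tensor:
    """tokens_a: (m, d) and tokens_b: (n, d) contextualized token embeddings.
    Relaxes the transport problem: each token routes all of its mass to the
    closest counterpart, and the larger directional cost is returned."""
    cost = 1 - F.normalize(tokens_a, dim=-1) @ F.normalize(tokens_b, dim=-1).T  # (m, n) cosine distances
    a_to_b = cost.min(dim=1).values.mean()  # nearest match in B for each token of A
    b_to_a = cost.min(dim=0).values.mean()  # nearest match in A for each token of B
    return torch.maximum(a_to_b, b_to_a)
```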
arXiv Detail & Related papers (2022-02-26T17:28:02Z)
- Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlap frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions at their positions.
Experiments on Semantic Textual Similarity show the resulting measure, neighboring distribution divergence (NDD), to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
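A hedged sketch of the mask-and-predict comparison might look like the following; the model choice and the KL aggregation over shared words are assumptions, not the paper's exact NDD formulation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def masked_distribution(text: str, target: str) -> torch.Tensor:
    """Mask one occurrence of `target` in `text` and return the MLM's
    predicted token distribution at that position."""
    masked = text.replace(target, tok.mask_token, 1)
    inputs = tok(masked, return_tensors="pt")
    pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, pos]
    return logits.softmax(dim=-1)

def neighboring_divergence(text_a: str, text_b: str, shared_words: list[str]) -> float:
    """Average KL(p_a || p_b) over the shared words' masked positions;
    `shared_words` would come from the longest common sequence."""
    kl = torch.nn.functional.kl_div
    total = 0.0
    for w in shared_words:
        p = masked_distribution(text_a, w)
        q = masked_distribution(text_b, w)
        total += kl(q.log(), p, reduction="sum").item()
    return total / len(shared_words)
```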
arXiv Detail & Related papers (2021-10-04T03:59:15Z)
- Comprehensive Studies for Arbitrary-shape Scene Text Detection [78.50639779134944]
We propose a unified framework for bottom-up scene text detection methods.
Under this framework, we ensure consistent settings for non-core modules.
Comprehensive investigations and detailed analyses reveal the advantages and disadvantages of previous models.
arXiv Detail & Related papers (2021-07-25T13:18:55Z)
- Multilingual Alignment of Contextual Word Representations [49.42244463346612]
After alignment, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model.
We introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer.
These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
arXiv Detail & Related papers (2020-02-10T03:27:21Z)