Deep Attentive Ranking Networks for Learning to Order Sentences
- URL: http://arxiv.org/abs/2001.00056v1
- Date: Tue, 31 Dec 2019 19:54:27 GMT
- Title: Deep Attentive Ranking Networks for Learning to Order Sentences
- Authors: Pawan Kumar, Dhanajit Brahma, Harish Karnick, Piyush Rai
- Abstract summary: We present an attention-based ranking framework for learning to order sentences given a paragraph.
Our framework is built on a bidirectional sentence encoder and a self-attention-based transformer network.
It allows seamless training using a variety of ranking-based loss functions, such as pointwise, pairwise, and listwise losses.
- Score: 31.70737249775046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an attention-based ranking framework for learning to order
sentences given a paragraph. Our framework is built on a bidirectional sentence
encoder and a self-attention-based transformer network to obtain an
input-order-invariant representation of paragraphs. Moreover, it allows seamless
training using a variety of ranking-based loss functions, such as pointwise,
pairwise, and listwise losses. We apply our framework to two tasks: Sentence
Ordering and Order Discrimination. Our framework outperforms various
state-of-the-art methods on these tasks across a variety of evaluation metrics.
We also show that it achieves better results with pairwise and listwise ranking
losses than with the pointwise ranking loss, which suggests that incorporating
the relative positions of two or more sentences into the loss function leads to
better learning.
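To make the distinction between the three loss families concrete, here is a minimal, illustrative PyTorch sketch (not the authors' released code; all function names are hypothetical) of pointwise, pairwise, and listwise ranking losses defined over per-sentence scores. Note the differing conventions: the pointwise head regresses each score to the sentence's gold position, while the pairwise and listwise heads treat a higher score as meaning "appears earlier".

```python
import torch
import torch.nn.functional as F

def pointwise_loss(scores, positions):
    # Regress each sentence's score to its gold position independently.
    return F.mse_loss(scores, positions.float())

def pairwise_loss(scores, positions):
    # Hinge loss over all sentence pairs: a sentence that appears
    # earlier in the paragraph should receive a higher score.
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)               # s_i - s_j
    earlier = (positions.unsqueeze(1) < positions.unsqueeze(0)).float()
    violations = earlier * F.relu(1.0 - diff)                      # margin = 1
    return violations.sum() / earlier.sum().clamp(min=1.0)

def listwise_loss(scores, positions):
    # ListMLE-style loss: negative log-likelihood of the gold order
    # under a Plackett-Luce model over the scores.
    s = scores[torch.argsort(positions)]              # scores in gold order
    suffix_lse = torch.logcumsumexp(s.flip(0), dim=0).flip(0)
    return -(s - suffix_lse).sum()

scores = torch.randn(5, requires_grad=True)     # one score per sentence
positions = torch.tensor([2, 0, 4, 1, 3])       # gold position of each sentence
for loss_fn in (pointwise_loss, pairwise_loss, listwise_loss):
    print(loss_fn.__name__, loss_fn(scores, positions).item())
```

Sorting by the learned scores then yields the predicted order; the paper's finding is that the pairwise and listwise variants, which see relative positions during training, tend to learn better than the pointwise one.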
Related papers
- AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings [53.78802457488845]
We introduce the idea of any-granularity ranking, which leverages multi-vector embeddings to rank at varying levels of granularity.
We demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation.
arXiv Detail & Related papers (2024-05-23T20:04:54Z) - Partitioned Saliency Ranking with Dense Pyramid Transformers [4.449304130658638]
Saliency ranking has emerged as a challenging task focused on assessing the degree of saliency at the instance level.
Previous approaches perform saliency ranking by directly sorting the rank scores of salient instances, which does not explicitly resolve the inherent ambiguities.
We propose the ranking by partition paradigm, which segments unordered salient instances into partitions and then ranks them based on the correlations among these partitions.
arXiv Detail & Related papers (2023-08-01T02:33:10Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - RankCSE: Unsupervised Sentence Representations Learning via Learning to
Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z) - Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
This yields the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z) - PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature; a minimal soft-rank sketch illustrating this limit appears after this list.
arXiv Detail & Related papers (2020-12-12T05:07:36Z) - Rank over Class: The Untapped Potential of Ranking in Natural Language
Processing [8.637110868126546]
We argue that many tasks currently addressed with classification are in fact being shoehorned into a classification mould and are more naturally framed as ranking problems.
We propose a novel end-to-end ranking approach consisting of a Transformer network responsible for producing representations for a pair of text sequences.
In an experiment on a heavily-skewed sentiment analysis dataset, converting ranking results to classification labels yields an approximately 22% improvement over state-of-the-art text classification.
arXiv Detail & Related papers (2020-09-10T22:18:57Z) - A Unified Framework of Surrogate Loss by Refactoring and Interpolation [65.60014616444623]
We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent.
We validate the effectiveness of UniLoss on three tasks and four datasets.
arXiv Detail & Related papers (2020-07-27T21:16:51Z)