Hierarchical Ranking for Answer Selection
- URL: http://arxiv.org/abs/2102.00677v1
- Date: Mon, 1 Feb 2021 07:35:52 GMT
- Title: Hierarchical Ranking for Answer Selection
- Authors: Hang Gao, Mengting Hu, Renhong Cheng, Tiegang Gao
- Abstract summary: We propose a novel strategy for answer selection, called hierarchical ranking.
We introduce three levels of ranking: point-level ranking, pair-level ranking, and list-level ranking.
Experimental results on two public datasets, WikiQA and TREC-QA, demonstrate that the proposed hierarchical ranking is effective.
- Score: 19.379777219863964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answer selection is a task to choose the positive answers from a pool of
candidate answers for a given question. In this paper, we propose a novel
strategy for answer selection, called hierarchical ranking. We introduce three
levels of ranking: point-level ranking, pair-level ranking, and list-level
ranking. They formulate their optimization objectives by employing supervisory
information from different perspectives to achieve the same goal of ranking
candidate answers. Therefore, the three levels of ranking are related and they
can promote each other. We take the well-performing compare-aggregate model as
the backbone and explore three schemes to implement the idea of applying the
hierarchical rankings jointly: the scheme under the Multi-Task Learning (MTL)
strategy, the Ranking Integration (RI) scheme, and the Progressive Ranking
Integration (PRI) scheme. Experimental results on two public datasets, WikiQA
and TREC-QA, demonstrate that the proposed hierarchical ranking is effective.
Our method achieves state-of-the-art (non-BERT) performance on both TREC-QA and
WikiQA.
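The three levels correspond to the classical pointwise, pairwise, and listwise learning-to-rank objectives. A minimal sketch of such losses over raw relevance scores and binary labels (the function names, margin, and loss weights are illustrative, not the authors' exact formulation):

```python
import math

def point_loss(scores, labels):
    # Point-level: independent binary cross-entropy per candidate.
    eps = 1e-9
    probs = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(scores)

def pair_loss(scores, labels, margin=0.5):
    # Pair-level: hinge loss pushing each positive above each negative.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p, n) for p in pos for n in neg]
    return sum(max(0.0, margin - (p - n)) for p, n in pairs) / max(len(pairs), 1)

def list_loss(scores, labels):
    # List-level: ListNet-style cross-entropy between the normalized label
    # distribution and the softmax over candidate scores.
    exp_s = [math.exp(s) for s in scores]
    p = [e / sum(exp_s) for e in exp_s]
    total = sum(labels) or 1
    q = [y / total for y in labels]
    return -sum(qi * math.log(pi) for qi, pi in zip(q, p) if qi > 0)

def hierarchical_loss(scores, labels, w=(1.0, 1.0, 1.0)):
    # Joint objective in the spirit of multi-task training (weights illustrative).
    return (w[0] * point_loss(scores, labels)
            + w[1] * pair_loss(scores, labels)
            + w[2] * list_loss(scores, labels))
```

All three terms reward the same behavior (positives scored above negatives) from different perspectives, which is what lets the levels reinforce one another when optimized jointly.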
Related papers
- AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings [53.78802457488845]
We introduce the idea of any-granularity ranking, which leverages multi-vector embeddings to rank at varying levels of granularity.
We demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation.
arXiv Detail & Related papers (2024-05-23T20:04:54Z)
- LiPO: Listwise Preference Optimization through Learning-to-Rank [62.02782819559389]
A policy can learn more effectively from a ranked list of plausible responses given the prompt.
We show that LiPO-$\lambda$ can outperform DPO variants and SLiC by a clear margin on several preference alignment tasks.
arXiv Detail & Related papers (2024-02-02T20:08:10Z)
- Replace Scoring with Arrangement: A Contextual Set-to-Arrangement Framework for Learning-to-Rank [40.81502990315285]
Learning-to-rank is a core technique in the top-N recommendation task, where an ideal ranker would be a mapping from an item set to an arrangement.
Most existing solutions fall in the paradigm of probabilistic ranking principle (PRP), i.e., first score each item in the candidate set and then perform a sort operation to generate the top ranking list.
We propose Set-To-Arrangement Ranking (STARank), a new framework that directly generates permutations of the candidate items without the need for individual scoring and sorting operations.
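The PRP baseline that STARank departs from reduces ranking to scoring plus sorting. A minimal sketch of that score-then-sort pipeline (the function name and example scoring function are hypothetical):

```python
def prp_top_n(candidates, score_fn, n):
    # Probabilistic ranking principle: score each item independently,
    # then sort by score descending to produce the top-N list.
    scored = [(item, score_fn(item)) for item in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in scored[:n]]

# Hypothetical usage: rank strings by length.
top2 = prp_top_n(["a", "abc", "ab"], score_fn=len, n=2)
# top2 == ["abc", "ab"]
```

Because each item is scored in isolation, this pipeline cannot model interactions among items in the list, which is the limitation set-to-arrangement approaches aim to remove.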
arXiv Detail & Related papers (2023-08-05T12:22:26Z)
- Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z)
- When and What to Ask Through World States and Text Instructions: IGLU NLP Challenge Solution [6.36729066736314]
In collaborative tasks, effective communication is crucial for achieving joint goals.
We aim to develop an intelligent builder agent to build structures based on user input through dialogue.
arXiv Detail & Related papers (2023-05-09T20:23:17Z)
- Multi-Task Off-Policy Learning from Bandit Feedback [54.96011624223482]
We propose a hierarchical off-policy optimization algorithm (HierOPO), which estimates the parameters of the hierarchical model and then acts pessimistically with respect to them.
We prove per-task bounds on the suboptimality of the learned policies, which show a clear improvement over not using the hierarchical model.
Our theoretical and empirical results show a clear advantage of using the hierarchy over solving each task independently.
arXiv Detail & Related papers (2022-12-09T08:26:27Z)
- Decision Making for Hierarchical Multi-label Classification with Multidimensional Local Precision Rate [4.812468844362369]
We introduce a new statistic called the multidimensional local precision rate (mLPR) for each object in each class.
We show that classification decisions made by simply sorting objects across classes in descending order of their mLPRs can, in theory, ensure consistency with the class hierarchy.
In response, we introduce HierRank, a new algorithm that maximizes an empirical version of CATCH using estimated mLPRs while respecting the hierarchy.
arXiv Detail & Related papers (2022-05-16T17:43:35Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments conducted to assess the algorithm indicate highly competitive performance on benchmark instances.
arXiv Detail & Related papers (2022-01-11T11:43:17Z)
- RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering [57.94658176442027]
We present RnG-KBQA, a Rank-and-Generate approach for KBQA.
We achieve new state-of-the-art results on GrailQA and WebQSP datasets.
arXiv Detail & Related papers (2021-09-17T17:58:28Z)
- Ensemble- and Distance-Based Feature Ranking for Unsupervised Learning [2.7921429800866533]
We propose two novel (groups of) methods for unsupervised feature ranking and selection.
The first group includes feature ranking scores (Genie3 score, RandomForest score) that are computed from ensembles of predictive clustering trees.
The second method is URelief, the unsupervised extension of the Relief family of feature ranking algorithms.
arXiv Detail & Related papers (2020-11-23T19:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content on this site (including all information) and is not responsible for any consequences.