Weight Set Decomposition for Weighted Rank Aggregation: An interpretable
and visual decision support tool
- URL: http://arxiv.org/abs/2206.00001v1
- Date: Tue, 31 May 2022 19:09:55 GMT
- Title: Weight Set Decomposition for Weighted Rank Aggregation: An interpretable
and visual decision support tool
- Authors: Tyler Perini, Amy Langville, Glenn Kramer, Jeff Shrager, Mark Shapiro
- Abstract summary: We show the wealth of information that is available for the weighted rank aggregation problem due to its structure.
We apply weight set decomposition to the set of convex multipliers, study the properties useful for understanding this decomposition, and visualize the indifference regions.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The problem of interpreting or aggregating multiple rankings is common to
many real-world applications. Perhaps the simplest and most common approach is
a weighted rank aggregation, wherein a (convex) weight is applied to each input
ranking and the weighted combination is then ordered. This paper describes a new tool for visualizing and
displaying ranking information for the weighted rank aggregation method.
Traditionally, the aim of rank aggregation is to summarize the information from
the input rankings and provide one final ranking that hopefully represents a
more accurate or truthful result than any one input ranking. While such an
aggregated ranking is, and clearly has been, useful to many applications, it
also obscures information. In this paper, we show the wealth of information
that is available for the weighted rank aggregation problem due to its
structure. We apply weight set decomposition to the set of convex multipliers,
study the properties useful for understanding this decomposition, and visualize
the indifference regions. This methodology recovers information that is
otherwise collapsed by the aggregated ranking and presents it in a useful,
interpretable, and intuitive decision support tool. Included are multiple illustrative
examples, along with heuristic and exact algorithms for computing the weight
set decomposition.
Related papers
- AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings [53.78802457488845]
We introduce the idea of any-granularity ranking, which leverages multi-vector embeddings to rank at varying levels of granularity.
We demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation.
arXiv Detail & Related papers (2024-05-23T20:04:54Z)
- Improved theoretical guarantee for rank aggregation via spectral method [1.0152838128195467]
Given pairwise comparisons between multiple items, how should they be ranked so that the ranking matches the observations?
This problem, known as rank aggregation, has found many applications in sports, recommendation systems, and other web applications.
Here, each pairwise comparison is a corrupted copy of the true score difference.
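For context, the general spectral approach can be sketched in a RankCentrality-style toy example: rank items by the stationary distribution of a Markov chain built from pairwise comparison outcomes. This is a minimal sketch on synthetic Bradley-Terry data, not the cited paper's exact estimator or its theoretical guarantees.

```python
import random

random.seed(0)
true_scores = [0.1, 0.4, 0.2, 0.3]   # hidden "quality" of 4 items (synthetic)
n = len(true_scores)

# Simulate noisy pairwise comparisons: P(i beats j) follows a
# Bradley-Terry model on the hidden scores.
wins = [[0] * n for _ in range(n)]
for _ in range(2000):
    i, j = random.sample(range(n), 2)
    p = true_scores[i] / (true_scores[i] + true_scores[j])
    if random.random() < p:
        wins[i][j] += 1
    else:
        wins[j][i] += 1

# Transition matrix: from i, move to j with probability proportional to
# the fraction of i-vs-j comparisons that j won.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            total = wins[i][j] + wins[j][i]
            if total:
                P[i][j] = wins[j][i] / (total * n)
    P[i][i] = 1.0 - sum(P[i])

# Power iteration for the stationary distribution; higher mass = better item.
pi = [1.0 / n] * n
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

ranking = sorted(range(n), key=lambda k: -pi[k])
print("estimated ranking (best first):", ranking)
```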
arXiv Detail & Related papers (2023-09-07T16:01:47Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Learning Representations without Compositional Assumptions [79.12273403390311]
We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
arXiv Detail & Related papers (2023-05-31T10:36:10Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- Leveraging semantically similar queries for ranking via combining representations [20.79800117378761]
In data-scarce settings, the amount of labeled data available for a particular query can lead to a highly variable and ineffective ranking function.
One way to mitigate the effect of the small amount of data is to leverage information from semantically similar queries.
We describe and explore this phenomenon in the context of the bias-variance trade off and apply it to the data-scarce settings of a Bing navigational graph and the Drosophila larva connectome.
arXiv Detail & Related papers (2021-06-23T18:36:20Z)
- Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping [103.71992720794421]
Grouping has been commonly used in deep metric learning for computing diverse features.
We propose an improved and interpretable grouping method to be integrated flexibly with any metric learning framework.
arXiv Detail & Related papers (2020-11-17T19:08:24Z)
- Spectral Methods for Ranking with Scarce Data [16.023774341912386]
We modify RankCentrality, a popular and well-studied rank aggregation method, to account for settings with few comparisons.
We incorporate feature information that outperforms state-of-the-art methods in practice.
arXiv Detail & Related papers (2020-07-02T19:17:35Z)
- How Reliable are University Rankings? [0.7646713951724009]
We take a fresh look at this ranking scheme using the public College dataset.
We show in multiple ways that this ranking scheme is not reliable and cannot be trusted as authoritative.
We conclude by making the case that all data and methods used for rankings should be made open for validation and repeatability.
arXiv Detail & Related papers (2020-04-20T01:00:59Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
- Rough Set based Aggregate Rank Measure & its Application to Supervised Multi Document Summarization [0.0]
The paper proposes a novel Rough Set based membership, called the Rank Measure,
which is used to rank elements with respect to a particular class.
The results proved to have significant improvement in accuracy.
arXiv Detail & Related papers (2020-02-09T01:03:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.