Distance-based Positive and Unlabeled Learning for Ranking
- URL: http://arxiv.org/abs/2005.10700v3
- Date: Wed, 28 Sep 2022 16:24:42 GMT
- Title: Distance-based Positive and Unlabeled Learning for Ranking
- Authors: Hayden S. Helm, Amitabh Basu, Avanti Athreya, Youngser Park, Joshua T.
Vogelstein, Carey E. Priebe, Michael Winding, Marta Zlatic, Albert Cardona,
Patrick Bourke, Jonathan Larson, Marah Abdin, Piali Choudhury, Weiwei Yang,
Christopher W. White
- Abstract summary: Learning to rank is a problem of general interest.
We show that learning to rank via combining representations using an integer linear program is effective when the supervision is as light as "these few items are similar to your item of interest."
- Score: 13.339237388350043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to rank -- producing a ranked list of items specific to a query and
with respect to a set of supervisory items -- is a problem of general interest.
The setting we consider is one in which no analytic description of what
constitutes a good ranking is available. Instead, we have a collection of
representations and supervisory information consisting of a (target item,
interesting items set) pair. We demonstrate analytically, in simulation, and in
real data examples that learning to rank via combining representations using an
integer linear program is effective when the supervision is as light as "these
few items are similar to your item of interest." While this nomination task is
quite general, for specificity we present our methodology from the perspective
of vertex nomination in graphs. The methodology described herein is model
agnostic.
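To make the combining-representations step concrete, here is a minimal sketch, not the authors' exact program, of learning convex weights over several distance representations with a mixed-integer linear program via scipy.optimize.milp (SciPy >= 1.9). The big-M pairwise constraints, the rank-count objective, and the helper name combine_representations are assumptions introduced purely for illustration.

```python
# Hypothetical sketch (not the paper's exact formulation): J distance
# representations give distances from the query to n candidates; we learn
# convex weights alpha over the representations with a big-M mixed-integer
# program that minimizes how many candidates outrank the known
# "interesting" items under the combined distance.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds  # requires SciPy >= 1.9

def combine_representations(dists, positives, big_m=10.0):
    """dists: (J, n) distances to n candidates under J representations.
    positives: indices of the few items known to be similar to the query."""
    J, n = dists.shape
    pairs = [(s, i) for s in positives for i in range(n) if i != s]
    K = len(pairs)

    # Variables: alpha (J continuous weights) followed by z (K binaries).
    c = np.concatenate([np.zeros(J), np.ones(K)])            # minimize sum of z
    integrality = np.concatenate([np.zeros(J), np.ones(K)])  # 0 = continuous, 1 = integer
    bounds = Bounds(0.0, 1.0)

    constraints = []
    # Weights form a convex combination: sum(alpha) == 1.
    A_eq = np.concatenate([np.ones(J), np.zeros(K)])[None, :]
    constraints.append(LinearConstraint(A_eq, 1.0, 1.0))
    # If candidate i is closer than positive item s under the combined
    # distance, z_{s,i} is forced to 1:  alpha . (d_i - d_s) + big_m * z >= 0.
    A = np.zeros((K, J + K))
    for k, (s, i) in enumerate(pairs):
        A[k, :J] = dists[:, i] - dists[:, s]
        A[k, J + k] = big_m
    constraints.append(LinearConstraint(A, 0.0, np.inf))

    res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
    alpha = res.x[:J]
    combined = alpha @ dists                # combined distance to each candidate
    return alpha, np.argsort(combined)      # ranking, closest candidates first

# Toy usage: two representations of eight candidates; items 1 and 2 are "interesting".
rng = np.random.default_rng(0)
dists = rng.random((2, 8))
alpha, ranking = combine_representations(dists, positives=[1, 2])
print("weights:", alpha, "ranking:", ranking)
```

Minimizing the number of candidates that outrank the supervised items is one natural reading of supervision as light as "these few items are similar to your item of interest"; the actual program in the paper may differ.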
Related papers
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain-adaptation generalization bound for ranking, which in turn provides theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- Ranking In Generalized Linear Bandits [38.567816347428774]
We study the ranking problem in generalized linear bandits.
In recommendation systems, displaying an ordered list of the most attractive items is not always optimal.
arXiv Detail & Related papers (2022-06-30T21:38:00Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Active clustering for labeling training data [0.8029049649310211]
We propose a setting for training data gathering where the human experts perform the comparatively cheap task of answering pairwise queries.
We study algorithms that minimize the average number of queries required to cluster the items and analyze their complexity.
arXiv Detail & Related papers (2021-10-27T15:35:58Z)
- Leveraging semantically similar queries for ranking via combining representations [20.79800117378761]
In data-scarce settings, the limited amount of labeled data available for a particular query can lead to a highly variable and ineffective ranking function.
One way to mitigate the effect of the small amount of data is to leverage information from semantically similar queries.
We describe and explore this phenomenon in the context of the bias-variance trade-off and apply it to the data-scarce settings of a Bing navigational graph and the Drosophila larva connectome.
arXiv Detail & Related papers (2021-06-23T18:36:20Z)
- Salient Object Ranking with Position-Preserved Attention [44.94722064885407]
We study the Salient Object Ranking (SOR) task, which aims to assign a ranking order to each detected object according to its visual saliency.
We propose the first end-to-end framework of the SOR task and solve it in a multi-task learning fashion.
We also introduce a Position-Preserved Attention (PPA) module tailored for the SOR branch.
arXiv Detail & Related papers (2021-06-09T13:00:05Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of the infimum loss to deal with partial labelling; a minimal sketch of the infimum-loss idea appears after this list.
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
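As a rough, generic illustration of the infimum-loss idea from the partial-labelling entry above (my own sketch under assumed scalar labels and a squared-error base loss, not that paper's structured-prediction framework): when supervision is a set of candidate labels containing the true one, a prediction is scored by the smallest base loss it attains over that set.

```python
# Hypothetical sketch of the infimum (minimum-over-candidates) loss for
# partial labelling: each example comes with a set of candidate labels, one
# of which is the true label, and a prediction is scored by the smallest
# base loss it attains over that set.
def infimum_loss(pred, candidate_labels, base_loss):
    """Minimum of base_loss(pred, y) over the candidate label set."""
    return min(base_loss(pred, y) for y in candidate_labels)

# Toy usage with a squared-error base loss on scalar labels.
squared = lambda p, y: (p - y) ** 2
print(infimum_loss(pred=2.2, candidate_labels={1.0, 2.0, 5.0}, base_loss=squared))
# -> 0.04, since the closest candidate label is 2.0
```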