Scalable Personalised Item Ranking through Parametric Density Estimation
- URL: http://arxiv.org/abs/2105.04769v1
- Date: Tue, 11 May 2021 03:38:16 GMT
- Title: Scalable Personalised Item Ranking through Parametric Density Estimation
- Authors: Riku Togashi, Masahiro Kato, Mayu Otani, Tetsuya Sakai, Shin'ichi
Satoh
- Abstract summary: Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
- Score: 53.44830012414444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from implicit feedback is challenging because of the difficult
nature of the one-class problem: we can observe only positive examples. Most
conventional methods use a pairwise ranking approach and negative samplers to
cope with the one-class problem. However, such methods have two main drawbacks,
particularly in large-scale applications: (1) the pairwise approach is severely
inefficient due to the quadratic computational cost; and (2) even recent
model-based samplers (e.g. IRGAN) cannot achieve practical efficiency due to
the training of an extra model.
In this paper, we propose a learning-to-rank approach, which achieves
convergence speed comparable to the pointwise counterpart while performing
similarly to the pairwise counterpart in terms of ranking effectiveness. Our
approach estimates the probability densities of positive items for each user
within a rich class of distributions, viz. \emph{exponential family}. In our
formulation, we derive a loss function and the appropriate negative sampling
distribution based on maximum likelihood estimation. We also develop a
practical technique for risk approximation and a regularisation scheme. We then
show that our single-model approach is equivalent to an IRGAN variant under a
certain condition. Experiments on real-world datasets show that our approach
outperforms the pointwise and pairwise counterparts in terms of effectiveness
and efficiency.
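The abstract's core idea, fitting an exponential-family density over each user's positive items by maximum likelihood with sampled negatives, can be sketched as follows. This is a minimal illustration with made-up embeddings and a sampled-softmax-style approximation of the partition function, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 50, 8
U = rng.normal(scale=0.1, size=(n_users, dim))  # user embeddings (illustrative)
V = rng.normal(scale=0.1, size=(n_items, dim))  # item embeddings (illustrative)

def sampled_nll(u, pos, negs):
    """Negative log-likelihood of the positive item under a log-linear
    (exponential-family) model p(i | u) proportional to exp(U[u] @ V[i]),
    with the partition function approximated over {pos} plus sampled negatives."""
    items = np.concatenate(([pos], negs))
    scores = V[items] @ U[u]
    m = scores.max()  # stabilised log-sum-exp
    return float(-scores[0] + m + np.log(np.exp(scores - m).sum()))

u, pos = 0, 3
negs = rng.choice(np.delete(np.arange(n_items), pos), size=10, replace=False)
loss = sampled_nll(u, pos, negs)  # non-negative by construction
```

In the paper's formulation the appropriate negative-sampling distribution is itself derived from the MLE objective; uniform sampling is used above purely to keep the sketch short.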
Related papers
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Stabilizing Subject Transfer in EEG Classification with Divergence Estimation [17.924276728038304]
We propose several graphical models to describe an EEG classification task.
We identify statistical relationships that should hold true in an idealized training scenario.
We design regularization penalties to enforce these relationships in two stages.
arXiv Detail & Related papers (2023-10-12T23:06:52Z)
- Double logistic regression approach to biased positive-unlabeled data [3.6594988197536344]
We consider parametric approach to the problem of joint estimation of posterior probability and propensity score functions.
Motivated by this, we propose two approaches to their estimation: a joint maximum likelihood method and a second approach based on alternating expressions.
Our experimental results show that the proposed methods are comparable or better than the existing methods based on Expectation-Maximisation scheme.
arXiv Detail & Related papers (2022-09-16T08:32:53Z)
- Mixture Proportion Estimation and PU Learning: A Modern Approach [47.34499672878859]
Given only positive examples and unlabeled examples, we might hope to estimate an accurate positive-versus-negative classifier.
Classical methods for both problems break down in high-dimensional settings.
We propose two simple techniques: Best Bin Estimation (BBE) and Conditional Value Ignoring Risk (CVIR).
arXiv Detail & Related papers (2021-11-01T14:42:23Z)
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- DEMI: Discriminative Estimator of Mutual Information [5.248805627195347]
Estimating mutual information between continuous random variables is often intractable and challenging for high-dimensional data.
Recent progress has leveraged neural networks to optimize variational lower bounds on mutual information.
Our approach is based on training a classifier that provides the probability that a data sample pair is drawn from the joint distribution.
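The classifier-based idea summarised here (the probability that a sample pair comes from the joint distribution yields a density ratio, and hence a mutual-information estimate) can be illustrated on a toy Gaussian case where the ideal classifier's logit is known in closed form. This is an illustrative sketch of the estimator, not DEMI's training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
n = 200_000
# Jointly Gaussian (X, Y) with correlation rho
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def log_density_ratio(x, y, rho):
    """log p(x, y) - log p(x) - log p(y) for a standard bivariate normal.
    Stands in for the logit of an ideal joint-vs-marginals classifier."""
    return (-0.5 * np.log(1 - rho**2)
            + (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2)))

# Averaging the log-ratio over joint samples estimates the mutual information
mi_est = log_density_ratio(x, y, rho).mean()
mi_true = -0.5 * np.log(1 - rho**2)  # closed form for the bivariate normal
```

In practice the log-ratio is not available in closed form, which is where the trained classifier comes in: its logit approximates `log_density_ratio`.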
arXiv Detail & Related papers (2020-10-05T04:19:27Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.