Boosting the Learning for Ranking Patterns
- URL: http://arxiv.org/abs/2203.02696v1
- Date: Sat, 5 Mar 2022 10:22:44 GMT
- Title: Boosting the Learning for Ranking Patterns
- Authors: Nassim Belmecheri and Noureddine Aribi and Nadjib Lazaar and Yahia
Lebbah and Samir Loudni
- Abstract summary: This paper formulates the problem of learning pattern ranking functions as a multi-criteria decision making problem.
Our approach aggregates different interestingness measures into a single weighted linear ranking function, using an interactive learning procedure.
Experiments conducted on well-known datasets show that our approach significantly reduces the running time and returns precise pattern rankings.
- Score: 6.142272540492935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering relevant patterns for a particular user remains a
challenging task in data mining. Several approaches have been proposed to learn
user-specific pattern ranking functions. These approaches generalize well, but
at the expense of running time. On the other hand, several measures are often
used to evaluate the interestingness of patterns, in the hope of revealing a
ranking that is as close as possible to the user-specific ranking. In this
paper, we formulate the problem of learning pattern ranking functions as a
multi-criteria decision making problem. Our approach aggregates different
interestingness measures into a single weighted linear ranking function, using
an interactive learning procedure that operates in either passive or active
mode. A fast learning step elicits the weights of all the measures by means of
pairwise comparisons.
This step is based on the Analytic Hierarchy Process (AHP): a set of
user-ranked patterns is used to build a preference matrix that compares the
importance of the measures according to the user-specific interestingness. A
sensitivity-based heuristic is proposed for the active learning mode, in order
to ensure high-quality results with few user ranking queries. Experiments
conducted on well-known datasets show that our approach significantly reduces
the running time and returns precise pattern rankings, while being robust to
user error compared with state-of-the-art approaches.
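
To make the weighted aggregation and the AHP weight-elicitation step concrete, here is a minimal sketch in Python. It assumes a NumPy environment; the measure names, the hard-coded comparison matrix, and the function names are illustrative assumptions rather than details from the paper, and the preference matrix would in practice be derived from user-ranked patterns. The weights then feed the single weighted linear ranking function r(p) = sum_k w_k * m_k(p).

```python
import numpy as np

# Hypothetical interestingness measures; the paper does not fix a specific set here.
MEASURES = ["support", "confidence", "lift", "growth_rate"]

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive measure weights from a reciprocal pairwise-comparison matrix.

    pairwise[i, j] > 1 means measure i is judged more important than measure j.
    Standard AHP takes the normalized principal eigenvector as the weight vector.
    """
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = np.abs(principal)
    return weights / weights.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; values below roughly 0.1 are usually accepted."""
    n = pairwise.shape[0]
    lam_max = float(np.max(np.real(np.linalg.eigvals(pairwise))))
    ci = (lam_max - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    return ci / random_index

def rank_patterns(measure_values: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Score each pattern with the weighted linear function r(p) = sum_k w_k * m_k(p)
    and return pattern indices sorted from best to worst."""
    scores = measure_values @ weights
    return np.argsort(-scores)

# Toy preference matrix comparing the importance of the 4 measures (illustrative only;
# in the paper's setting it is built from comparisons induced by user-ranked patterns).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1 / 3, 1.0, 3.0, 5.0],
    [1 / 5, 1 / 3, 1.0, 3.0],
    [1 / 7, 1 / 5, 1 / 3, 1.0],
])
w = ahp_weights(A)
print(dict(zip(MEASURES, np.round(w, 3))))               # elicited weights
print(round(consistency_ratio(A), 3))                    # consistency check (aim for < 0.1)

M = np.random.default_rng(0).random((5, len(MEASURES)))  # 5 patterns x 4 measures
print(rank_patterns(M, w))                               # pattern indices, best first
```

In passive mode the comparisons would come from a fixed sample of user-ranked patterns; in active mode the sensitivity-based heuristic described above would choose which ranking query to ask next.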
Related papers
- Batch Active Learning of Reward Functions from Human Preferences [33.39413552270375]
Preference-based learning enables reliable labeling by querying users with preference questions.
Active querying methods are commonly employed in preference-based learning to generate more informative data.
We develop a set of novel algorithms that enable efficient learning of reward functions using as few data samples as possible.
arXiv Detail & Related papers (2024-02-24T08:07:48Z)
- Active Learning of Ordinal Embeddings: A User Study on Football Data [4.856635699699126]
Humans innately measure distance between instances in an unlabeled dataset using an unknown similarity function.
This work uses deep metric learning to learn these user-defined similarity functions from few annotations for a large football trajectory dataset.
arXiv Detail & Related papers (2022-07-26T07:55:23Z)
- Meta-Wrapper: Differentiable Wrapping Operator for User Interest Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models that automatically extract user interest from user behavior have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
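
The greedy knapsack step is only named in the summary above; the following is a minimal sketch of one plausible reading, in which each RNN pass has a fixed timestep budget and short user sequences are packed with a first-fit-decreasing heuristic. The capacity, the heuristic, and the function name are assumptions for illustration, not details taken from that paper.

```python
from typing import List

def pack_sequences(seq_lengths: List[int], capacity: int) -> List[List[int]]:
    """Greedily pack variable-length user sequences into fixed-capacity slots
    (one slot = one RNN pass of at most `capacity` timesteps).

    First-fit-decreasing heuristic for the underlying knapsack / bin-packing
    problem: longer sequences are placed first, each into the first slot that
    still has room. Returns lists of sequence indices, one list per slot."""
    order = sorted(range(len(seq_lengths)), key=lambda i: -seq_lengths[i])
    slots: List[List[int]] = []
    remaining: List[int] = []
    for i in order:
        length = seq_lengths[i]
        for s, room in enumerate(remaining):
            if length <= room:
                slots[s].append(i)
                remaining[s] -= length
                break
        else:
            slots.append([i])
            remaining.append(capacity - length)
    return slots

# Example: pack 6 user sessions of varying length into passes of 10 timesteps.
print(pack_sequences([3, 7, 2, 5, 4, 6], capacity=10))  # -> [[1, 0], [5, 4], [3, 2]]
```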
arXiv Detail & Related papers (2022-02-01T06:52:40Z)
- Batch versus Sequential Active Learning for Recommender Systems [3.7796614675664397]
We show that sequential mode produces the most accurate recommendations for dense data sets.
For most active learners, the best predictor turned out to be FunkSVD in combination with sequential mode.
arXiv Detail & Related papers (2022-01-19T12:50:36Z)
- Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
arXiv Detail & Related papers (2021-10-08T13:51:55Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-path to construct hyper meta-paths or hyper meta-graphs to explicitly illustrate the dependencies among different behaviors of a user.
Thanks to the recent success of graph contrastive learning, we leverage it to learn embeddings of user behavior patterns adaptively instead of assigning a fixed scheme to understand the dependencies among different behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- Analysis of Multivariate Scoring Functions for Automatic Unbiased Learning to Rank [14.827143632277274]
AutoULTR algorithms that jointly learn user bias models (i.e., propensity models) with unbiased rankers have received a lot of attention due to their superior performance and low deployment cost in practice.
Recent advances in context-aware learning-to-rank models have shown that multivariate scoring functions, which read multiple documents together and predict their ranking scores jointly, are more powerful than uni-variate ranking functions in ranking tasks with human-annotated relevance labels.
Our experiments with synthetic clicks on two large-scale benchmark datasets show that AutoULTR models with permutation-invariant multivariate scoring functions significantly outperform their counterparts with uni-variate scoring functions.
arXiv Detail & Related papers (2020-08-20T16:31:59Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)