Efficient and Accurate Learning of Mixtures of Plackett-Luce Models
- URL: http://arxiv.org/abs/2302.05343v1
- Date: Fri, 10 Feb 2023 16:00:40 GMT
- Title: Efficient and Accurate Learning of Mixtures of Plackett-Luce Models
- Authors: Duc Nguyen and Anderson Y. Zhang
- Abstract summary: Mixture models of Plackett-Luce (PL) are an active research area of both theoretical and practical significance.
We propose an algorithm that can provide a provably accurate initial estimate and an EM algorithm that maximizes the true log-likelihood function efficiently.
- Score: 5.216020588360421
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mixture models of Plackett-Luce (PL) -- one of the most fundamental ranking
models -- are an active research area of both theoretical and practical
significance. Most previously proposed parameter estimation algorithms
instantiate the EM algorithm, often with random initialization. However, such
an initialization scheme may not yield a good initial estimate and the
algorithms require multiple restarts, incurring a large time complexity. As for
the EM procedure, while the E-step can be performed efficiently, maximizing the
log-likelihood in the M-step is difficult due to the combinatorial nature of
the PL likelihood function (Gormley and Murphy 2008). Therefore, previous
authors favor algorithms that maximize surrogate likelihood functions (Zhao et
al. 2018, 2020). However, the final estimate may deviate from the true maximum
likelihood estimate as a consequence. In this paper, we address these known
limitations. We propose an initialization algorithm that can provide a provably
accurate initial estimate and an EM algorithm that maximizes the true
log-likelihood function efficiently. Experiments on both synthetic and real
datasets show that our algorithm is competitive in terms of accuracy and speed
to baseline algorithms, especially on datasets with a large number of items.
Related papers
- Efficient first-order algorithms for large-scale, non-smooth maximum
entropy models with application to wildfire science [0.0]
We present novel first-order algorithms for training large-scale, non-smooth maximum entropy (Maxent) models, leveraging the Kullback-Leibler divergence for efficiency.
Our numerical results show that our algorithms outperform the state of the art by one order of magnitude.
arXiv Detail & Related papers (2024-03-11T15:33:55Z)
- Quantum-Based Feature Selection for Multi-classification Problem in Complex Systems with Edge Computing [15.894122816099133]
A quantum-based feature selection algorithm for the multi-classification problem, namely, QReliefF, is proposed.
Our algorithm is more efficient at finding the nearest neighbor, reducing the complexity from O(M) to O(sqrt(M)).
arXiv Detail & Related papers (2023-10-01T03:57:13Z)
- An optimized quantum minimum searching algorithm with sure-success probability and its experiment simulation with Cirq [4.987556615430226]
We propose an optimized quantum minimum searching algorithm with sure-success probability.
The performance evaluation shows that our algorithm has higher accuracy and efficiency than the DHA algorithm.
arXiv Detail & Related papers (2023-09-25T14:07:27Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- An Application of a Multivariate Estimation of Distribution Algorithm to Cancer Chemotherapy [59.40521061783166]
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints.
One might expect that the more sophisticated algorithm would yield better performance on a complex problem like this, but our experiments show the opposite.
We hypothesise that this is caused by the more sophisticated algorithm being impeded by the large number of interactions in the problem.
arXiv Detail & Related papers (2022-05-17T15:28:46Z)
- Quantum Algorithms for Prediction Based on Ridge Regression [0.7612218105739107]
We propose a quantum algorithm based on the ridge regression model, which obtains the optimal fitting parameters.
The proposed algorithm has a wide range of applications and can be used as a subroutine of other quantum algorithms.
arXiv Detail & Related papers (2021-04-27T11:03:52Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.