Learning Multinomial Logits in $O(n \log n)$ time
- URL: http://arxiv.org/abs/2601.04423v1
- Date: Wed, 07 Jan 2026 22:07:44 GMT
- Title: Learning Multinomial Logits in $O(n \log n)$ time
- Authors: Flavio Chierichetti, Mirko Giacchini, Ravi Kumar, Silvio Lattanzi, Alessandro Panconesi, Erasmo Tani, Andrew Tomkins
- Abstract summary: A Multinomial Logit (MNL) model is composed of a finite universe of items $[n]=\{1,\ldots,n\}$, each assigned a positive weight. A query specifies an admissible subset -- called a slate -- and the model chooses one item from that slate with probability proportional to its weight. This query model is also known as the Plackett-Luce model or conditional sampling oracle in the literature.
- Score: 56.23331174813387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Multinomial Logit (MNL) model is composed of a finite universe of items $[n]=\{1,\ldots,n\}$, each assigned a positive weight. A query specifies an admissible subset -- called a slate -- and the model chooses one item from that slate with probability proportional to its weight. This query model is also known as the Plackett-Luce model or conditional sampling oracle in the literature. Although MNLs have been studied extensively, a basic computational question remains open: given query access to slates, how efficiently can we learn weights so that, for every slate, the induced choice distribution is within total variation distance $\varepsilon$ of the ground truth? This question is central to MNL learning and has direct implications for modern recommender system interfaces. We provide two algorithms for this task, one with adaptive queries and one with non-adaptive queries. Each algorithm outputs an MNL $M'$ that induces, for each slate $S$, a distribution $M'_S$ on $S$ that is within $\varepsilon$ total variation distance of the true distribution. Our adaptive algorithm makes $O\left(\frac{n}{\varepsilon^{3}}\log n\right)$ queries, while our non-adaptive algorithm makes $O\left(\frac{n^{2}}{\varepsilon^{3}}\log n \log\frac{n}{\varepsilon}\right)$ queries. Both algorithms query only slates of size two and run in time proportional to their query complexity. We complement these upper bounds with lower bounds of $\Omega\left(\frac{n}{\varepsilon^{2}}\log n\right)$ for adaptive queries and $\Omega\left(\frac{n^{2}}{\varepsilon^{2}}\log n\right)$ for non-adaptive queries, thus proving that our adaptive algorithm is optimal in its dependence on the support size $n$, while the non-adaptive one is tight within a $\log n$ factor.
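The query model in the abstract can be illustrated with a small sketch. This is not the paper's algorithm; it is a minimal simulation, under illustrative assumptions, of the MNL oracle and of the basic fact the abstract relies on: on a size-2 slate $\{i, j\}$, item $i$ wins with probability $w_i/(w_i+w_j)$, so relative weights can be estimated from empirical win rates against a fixed reference item.

```python
import random

def mnl_oracle(weights, slate, rng):
    """Sample one item from `slate` with probability proportional to its weight."""
    total = sum(weights[i] for i in slate)
    r = rng.random() * total
    for i in slate:
        r -= weights[i]
        if r <= 0:
            return i
    return slate[-1]  # numerical-edge fallback

def estimate_weights(weights, queries_per_pair=2000, seed=0):
    """Estimate relative weights using only size-2 slates.

    For the pair {0, i}, item i wins with probability w_i / (w_0 + w_i),
    so the ratio w_i / w_0 is p / (1 - p) for empirical win rate p.
    The fixed reference item and query budget here are illustrative choices,
    not the paper's (adaptive or non-adaptive) query strategy.
    """
    rng = random.Random(seed)
    n = len(weights)
    est = [1.0] * n  # normalize so the reference item has weight 1
    for i in range(1, n):
        wins = sum(mnl_oracle(weights, (0, i), rng) == i
                   for _ in range(queries_per_pair))
        p = wins / queries_per_pair
        p = min(max(p, 1e-6), 1 - 1e-6)  # guard against degenerate win rates
        est[i] = p / (1 - p)
    return est

true_w = [1.0, 2.0, 0.5, 4.0]
print(estimate_weights(true_w))  # each entry approximates true_w[i] / true_w[0]
```

Note that this naive per-pair sampling uses far more queries than the $O\left(\frac{n}{\varepsilon^{3}}\log n\right)$ adaptive bound above; it only demonstrates why size-2 slates already carry enough information to recover the weights.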
Related papers
- High-accuracy sampling for diffusion models and log-concave distributions [70.90863485771405]
We present algorithms for diffusion model sampling which obtain $\varepsilon$-error in $\mathrm{polylog}(1/\varepsilon)$ steps. Our approach also yields the first $\mathrm{polylog}(1/\varepsilon)$ complexity sampler for general log-concave distributions.
arXiv Detail & Related papers (2026-02-01T17:05:31Z) - Learning Partitions with Optimal Query and Round Complexities [16.815943270621638]
We consider the basic problem of learning an unknown partition of $n$ elements into at most $k$ sets. Non-adaptive algorithms require $\Theta(n^2)$ queries, while adaptive algorithms require $\Theta(nk)$ queries. Our algorithm only needs $O(\log\log n)$ rounds to attain the optimal $O(nk)$ query complexity.
arXiv Detail & Related papers (2025-05-08T07:27:29Z) - On the query complexity of sampling from non-log-concave distributions [2.4253233571593547]
We study the problem of sampling from a $d$-dimensional distribution with density $p(x)\propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions. We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
arXiv Detail & Related papers (2025-02-10T06:54:16Z) - Active Learning of General Halfspaces: Label Queries vs Membership Queries [35.41111301965335]
We show that any active learner requires label complexity of $\tilde{\Omega}(d/(\log(m)\epsilon))$, where $m$ is the number of unlabeled examples. We give a computationally efficient learner with query complexity of $\tilde{O}(\min\{1/p, 1/\epsilon\} + d\cdot\mathrm{polylog}(1/\epsilon))$ achieving an error guarantee of $O(\mathrm{opt})+\epsilon$.
arXiv Detail & Related papers (2024-12-31T15:40:48Z) - Provable Scaling Laws for the Test-Time Compute of Large Language Models [84.00141420901038]
We propose two algorithms that enjoy provable scaling laws for the test-time compute of large language models. One is a two-stage knockout-style algorithm, where candidate solutions are compared pairwise in a knockout tournament. The other is a two-stage league-style algorithm, where each candidate is evaluated by its average win rate against multiple opponents.
arXiv Detail & Related papers (2024-11-29T05:29:47Z) - Clustering with Non-adaptive Subset Queries [16.662507957069813]
Given a query $S \subset U$, $|S|=2$, the oracle returns yes if the points are in the same cluster and no otherwise. For adaptive algorithms with pairwise queries, the number of required queries is known to be $\Theta(nk)$. Non-adaptive schemes require $\Omega(n^2)$ queries, which matches the trivial $O(n^2)$ upper bound attained by querying every pair of points.
arXiv Detail & Related papers (2024-09-17T05:56:07Z) - From Adaptive Query Release to Machine Unlearning [34.25760971846068]
We formalize the problem of machine unlearning as design of efficient unlearning algorithms corresponding to learning algorithms.
We show that unlearning in stochastic convex optimization (SCO) can be reduced to the above, yielding improved guarantees for the problem.
arXiv Detail & Related papers (2023-07-20T20:46:39Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Online Lewis Weight Sampling [62.38157566916501]
Cohen and Peng introduced Lewis weight sampling to the theoretical computer science community.
Several works have extended this important primitive to other settings, including the online coreset, sliding window, and adversarial streaming models.
We design the first nearly optimal $\ell_p$ subspace embeddings for all $p\in(0,\infty)$ in the online coreset, sliding window, and the adversarial streaming models.
arXiv Detail & Related papers (2022-07-17T19:40:51Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an improved algorithm with better sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.