Online Recommendations for Agents with Discounted Adaptive Preferences
- URL: http://arxiv.org/abs/2302.06014v2
- Date: Tue, 6 Feb 2024 16:08:10 GMT
- Title: Online Recommendations for Agents with Discounted Adaptive Preferences
- Authors: Arpit Agarwal, William Brown
- Abstract summary: We study a bandit recommendations problem in which an agent's preferences evolve as a function of past selections.
We show an algorithm which obtains efficient sublinear regret with respect to nearly the $\textit{entire}$ item simplex.
- Score: 17.501559059079806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a bandit recommendations problem in which an agent's preferences
(representing selection probabilities over recommended items) evolve as a
function of past selections, according to an unknown $\textit{preference
model}$. In each round, we show a menu of $k$ items (out of $n$ total) to the
agent, who then chooses a single item, and we aim to minimize regret with
respect to some $\textit{target set}$ (a subset of the item simplex) for
adversarial losses over the agent's choices. Extending the setting from Agarwal
and Brown (2022), where uniform-memory agents were considered, here we allow
for non-uniform memory in which a discount factor is applied to the agent's
memory vector at each subsequent round. In the "long-term memory" regime (when
the effective memory horizon scales with $T$ sublinearly), we show that
efficient sublinear regret is obtainable with respect to the set of
$\textit{everywhere instantaneously realizable distributions}$ (the "EIRD set",
as formulated in prior work) for any $\textit{smooth}$ preference model.
Further, for preferences which are bounded above and below by linear functions
of memory weight (we call these "scale-bounded" preferences) we give an
algorithm which obtains efficient sublinear regret with respect to nearly the
$\textit{entire}$ item simplex. We show an NP-hardness result for expanding to
targets beyond EIRD in general. In the "short-term memory" regime (when the
memory horizon is constant), we show that scale-bounded preferences again
enable efficient sublinear regret for nearly the entire simplex even without
smoothness if losses do not change too frequently, yet we show an
information-theoretic barrier for competing against the EIRD set under
arbitrary smooth preference models even when losses are constant.
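The discounted-memory dynamics described in the abstract can be illustrated with a toy simulation. The sketch below is not the paper's algorithm; the specific preference function (selection probability proportional to $1 + $ memory weight) and all parameter choices are illustrative assumptions, chosen only to show the memory-vector update with a discount factor applied each round.

```python
import random

def simulate(n, k, gamma, T, seed=0):
    """Toy simulation of an agent with discounted adaptive preferences.

    Each round, the recommender shows a menu of k of n items; the agent
    chooses one item with probability proportional to a smooth function
    of its memory weight (here 1 + memory, an illustrative assumption).
    The memory vector is discounted by gamma and then incremented at the
    chosen item, giving the non-uniform ("discounted") memory model.
    """
    random.seed(seed)
    memory = [0.0] * n
    history = []
    for _ in range(T):
        menu = random.sample(range(n), k)          # show k of n items
        weights = [1.0 + memory[i] for i in menu]  # smooth preference model
        r, choice = random.random() * sum(weights), menu[-1]
        for i, w in zip(menu, weights):
            if r < w:
                choice = i
                break
            r -= w
        memory = [gamma * m for m in memory]       # discount past selections
        memory[choice] += 1.0                      # record the new selection
        history.append(choice)
    return memory, history
```

Because exactly one unit of memory is added per round and then geometrically discounted, the total memory mass after $T$ rounds is $\sum_{j=0}^{T-1} \gamma^j$, which makes the "effective memory horizon" of the abstract concrete: roughly $1/(1-\gamma)$ when $\gamma < 1$.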
Related papers
- Revisiting Weighted Strategy for Non-stationary Parametric Bandits and MDPs [56.246783503873225]
This paper revisits the weighted strategy for non-stationary parametric bandits. We propose a simpler weight-based algorithm that is as efficient as window/restart-based algorithms. Our framework can be used to improve regret bounds of other parametric bandits.
arXiv Detail & Related papers (2026-01-03T04:50:21Z) - Sparse Linear Bandits with Blocking Constraints [22.01704171400845]
We investigate the high-dimensional sparse linear bandits problem in a data-poor regime. We show novel offline statistical guarantees of the lasso estimator for the linear model. We propose a meta-algorithm based on corralling that does not need knowledge of the optimal sparsity parameter $k$ at minimal cost to regret.
arXiv Detail & Related papers (2024-10-26T01:42:03Z) - Order-Optimal Instance-Dependent Bounds for Offline Reinforcement Learning with Preference Feedback [56.6950165117658]
We consider offline reinforcement learning with preference feedback in which the implicit reward is a linear function of an unknown parameter.
We propose an algorithm, RL with Locally Optimal Weights (RL-LOW), which yields an exponentially small simple regret.
We observe that the lower and upper bounds on the simple regret match order-wise in the exponent, demonstrating order-wise optimality of RL-LOW.
arXiv Detail & Related papers (2024-06-18T02:03:12Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Cooperative Thresholded Lasso for Sparse Linear Bandit [6.52540785559241]
We present a novel approach to address the multi-agent sparse contextual linear bandit problem.
It is the first algorithm that tackles row-wise distributed data in sparse linear bandits.
It is widely applicable to high-dimensional multi-agent problems where efficient feature extraction is critical for minimizing regret.
arXiv Detail & Related papers (2023-05-30T16:05:44Z) - Borda Regret Minimization for Generalized Linear Dueling Bandits [65.09919504862496]
We study the Borda regret minimization problem for dueling bandits, which aims to identify the item with the highest Borda score.
We propose a rich class of generalized linear dueling bandit models, which cover many existing models.
Our algorithm achieves an $\tilde{O}(d^{2/3} T^{2/3})$ regret, which is also optimal.
arXiv Detail & Related papers (2023-03-15T17:59:27Z) - Sketchy: Memory-efficient Adaptive Regularization with Frequent
Directions [22.09320263962004]
We find the spectra of the Kronecker-factored gradient covariance matrix in deep learning (DL) training tasks are concentrated on a small leading eigenspace.
We describe a generic method for reducing memory and compute requirements of maintaining a matrix preconditioner.
We show extensions of our work to Shampoo, resulting in a method competitive in quality with Shampoo and Adam, yet requiring only sub-linear memory for tracking second moments.
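The Frequent Directions primitive named in the Sketchy title can be sketched in a few lines. This is a minimal textbook implementation for illustration, not the paper's method: it maintains a small $\ell \times d$ sketch $B$ of a row stream and, whenever the sketch is full, shrinks away the smallest singular direction to make room, so that $A^\top A - B^\top B$ stays positive semidefinite with spectral error on the order of $\|A\|_F^2/\ell$.

```python
import numpy as np

def frequent_directions(A, ell):
    """Minimal Frequent Directions sketch of an (n x d) matrix A (assumes ell <= d).

    Rows of A arrive one at a time and fill the (ell x d) sketch B; when B
    is full, we subtract the smallest squared singular value from all
    squared singular values, which zeroes out the last row and frees a slot.
    """
    n, d = A.shape
    B = np.zeros((ell, d))
    nxt = 0  # index of the next free row in the sketch
    for row in A:
        if nxt == ell:
            # Shrink step: deflate all directions by the smallest one.
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt   # last row is now exactly zero
            nxt = ell - 1
        B[nxt] = row
        nxt += 1
    return B
```

The sketch uses $O(\ell d)$ memory regardless of the stream length $n$, which is the kind of sub-linear second-moment tracking the entry describes.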
arXiv Detail & Related papers (2023-02-07T21:50:06Z) - Parameter and Feature Selection in Stochastic Linear Bandits [38.909757749493934]
We study two model selection settings in linear bandits (LB)
In the first setting, the reward parameter of the LB problem is arbitrarily selected from $M$ models represented as overlapping balls in $\mathbb{R}^d$.
In the second setting, the expected reward of the LB problem is in the linear span of at least one of $M$ feature maps (models)
arXiv Detail & Related papers (2021-06-09T20:37:29Z) - High-Dimensional Sparse Linear Bandits [67.9378546011416]
We derive a novel $\Omega(n^{2/3})$ dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime.
We also prove a dimension-free $O(\sqrt{n})$ regret upper bound under an additional assumption on the magnitude of the signal for relevant features.
arXiv Detail & Related papers (2020-11-08T16:48:11Z) - Stochastic Linear Bandits with Protected Subspace [51.43660657268171]
We study a variant of the linear bandit problem wherein we optimize a linear objective function but rewards are accrued only to an unknown subspace.
In particular, at each round, the learner must choose whether to query the objective or the protected subspace alongside choosing an action.
Our algorithm, derived from the OFUL principle, uses some of the queries to get an estimate of the protected space.
arXiv Detail & Related papers (2020-11-02T14:59:39Z) - Efficient Learning in Non-Stationary Linear Markov Decision Processes [17.296084954104415]
We study episodic reinforcement learning in non-stationary linear (a.k.a. low-rank) Markov Decision Processes (MDPs).
We propose OPT-WLSVI, an optimistic model-free algorithm based on weighted least-squares value iteration which uses exponential weights to smoothly forget data that are far in the past.
We show that our algorithm, when competing against the best policy at each time, achieves a regret that is upper bounded by $\widetilde{\mathcal{O}}(d^{5/4} H^2 \Delta^{1/4} K^{3/4})$.
arXiv Detail & Related papers (2020-10-24T11:02:45Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.