Uncertainty Quantification for Demand Prediction in Contextual Dynamic
Pricing
- URL: http://arxiv.org/abs/2003.07017v2
- Date: Mon, 31 Aug 2020 16:39:33 GMT
- Title: Uncertainty Quantification for Demand Prediction in Contextual Dynamic
Pricing
- Authors: Yining Wang and Xi Chen and Xiangyu Chang and Dongdong Ge
- Abstract summary: We study the problem of constructing accurate confidence intervals for the demand function.
We develop a debiased approach and provide an asymptotic normality guarantee for the debiased estimator.
- Score: 20.828160401904697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven sequential decision making has found a wide range of
applications in modern operations management, such as dynamic pricing,
inventory control, and assortment optimization. Most existing research on
data-driven sequential decision making focuses on designing an online policy
to maximize revenue. However, uncertainty quantification for the underlying
true model function (e.g., the demand function), a problem of critical
importance to practitioners, has not been well explored. In this paper, using
demand function prediction in dynamic pricing as the motivating example, we
study the problem of constructing accurate confidence intervals for the
demand function. The main challenge is that sequentially collected data
induce significant distributional bias in the maximum likelihood estimator
and the empirical risk minimization estimate, so that classical statistical
approaches such as Wald's test are no longer valid. We address this challenge
by developing a debiased approach and proving an asymptotic normality
guarantee for the debiased estimator. Based on this debiased estimator, we
construct both point-wise and uniform confidence intervals for the demand
function.
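As a concrete illustration of the debiasing recipe, the following Python sketch simulates adaptively priced linear demand, applies a one-step correction to a ridge estimate, and forms a plug-in Wald-type pointwise interval. The demand model, myopic pricing policy, and all constants are assumptions made for illustration; this is not the paper's exact estimator or guarantee.

```python
# Minimal sketch (assumed model, not the paper's construction): demand is
# linear, d_t = x_t @ theta + eps_t, but prices are chosen adaptively from
# past estimates, so the design depends on past noise. We debias a ridge
# estimate with a one-step correction and form a plug-in Wald-type CI.
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -0.5])            # intercept, price sensitivity
T, sigma, lam = 500, 0.5, 1.0

X, y = [], []
theta_hat = np.array([1.0, -1.0])             # crude initial estimate
for t in range(T):
    # adaptive policy: myopic revenue-maximizing price under theta_hat,
    # plus slight exploration noise to keep the design well-conditioned
    p = float(np.clip(-theta_hat[0] / (2.0 * theta_hat[1]), 0.1, 10.0))
    x = np.array([1.0, p + 0.1 * rng.standard_normal()])
    X.append(x)
    y.append(x @ theta_star + sigma * rng.standard_normal())
    if t >= 10 and t % 25 == 0:               # occasionally refit the policy
        A, b = np.array(X), np.array(y)
        theta_hat = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)

X, y = np.array(X), np.array(y)
Sigma = X.T @ X / T                           # empirical design matrix
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# one-step debiasing: add back the Sigma^{-1}-weighted score of the ridge
# fit (in this low-dimensional example the correction recovers plain OLS;
# the point is the mechanics of the correction)
theta_db = theta_ridge + np.linalg.solve(Sigma, X.T @ (y - X @ theta_ridge) / T)

# plug-in Wald-type 95% pointwise CI for the demand f(x0) = x0 @ theta
x0 = np.array([1.0, 2.0])
s2 = float(np.mean((y - X @ theta_db) ** 2))  # noise variance estimate
half = 1.96 * np.sqrt(s2 * (x0 @ np.linalg.solve(Sigma, x0)) / T)
print(f"f_hat(x0) = {x0 @ theta_db:.3f} +/- {half:.3f}")
```

The point of the debiasing step is that naive Wald intervals built directly on the adaptively fitted estimator tend to under-cover, since the design is correlated with past noise.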
Related papers
- Deep Generative Demand Learning for Newsvendor and Pricing [7.594251468240168]
We consider data-driven inventory and pricing decisions in the feature-based newsvendor problem.
We propose a novel approach that leverages conditional deep generative models (cDGMs) to address these decision problems.
We provide theoretical guarantees for our approach, including the consistency of profit estimation and convergence of our decisions to the optimal solution.
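A minimal sketch of the sample-based decision idea, with a hand-written stand-in for the trained cDGM sampler (the conditional distribution and all constants are made up):

```python
# Sketch: draw demand scenarios conditional on features from a (stand-in)
# generative model, estimate expected newsvendor profit per candidate order
# level, and pick the best. Not the paper's model or training procedure.
import numpy as np

rng = np.random.default_rng(1)

def sample_demand(x, n):
    # placeholder for a trained conditional generative model p(d | x):
    # here, a lognormal whose location depends on the feature vector
    return rng.lognormal(mean=1.0 + 0.5 * x.sum(), sigma=0.4, size=n)

def expected_profit(q, demand, price=5.0, cost=3.0):
    # sample-average newsvendor profit for order quantity q
    return price * np.minimum(q, demand).mean() - cost * q

x = np.array([0.2, 0.7])                      # observed features
scenarios = sample_demand(x, n=10_000)
grid = np.linspace(0.0, scenarios.max(), 200)
q_star = grid[np.argmax([expected_profit(q, scenarios) for q in grid])]
print(f"data-driven order quantity: {q_star:.2f}")
```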
arXiv Detail & Related papers (2024-11-13T14:17:26Z)
- Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning [5.318766629972959]
Uncertainty quantification is a crucial but challenging task in many high-dimensional regression or learning problems.
We develop a new data-driven approach for UQ in regression that applies both to classical regression approaches as well as to neural networks.
arXiv Detail & Related papers (2024-07-18T16:42:10Z)
- Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data [17.991833729722288]
We propose a novel policy learning algorithm, PESsimistic CAusal Learning (PESCAL)
Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function.
We provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.
arXiv Detail & Related papers (2024-03-18T14:51:19Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
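A tabular sketch of an uncertainty Bellman recursion in this spirit (the transition matrix, per-state local uncertainty, and fixed policy are made up; the paper's UBE is more general):

```python
# Simplified uncertainty Bellman recursion for a fixed policy:
# U(s) = u_local(s) + gamma^2 * E_{s'}[U(s')], solved by fixed-point
# iteration. Note gamma^2, not gamma: these are variances, not means.
import numpy as np

gamma = 0.9
P = np.array([[0.8, 0.2, 0.0],        # P[s, s'] under the fixed policy
              [0.1, 0.6, 0.3],
              [0.0, 0.3, 0.7]])
u_local = np.array([0.5, 0.1, 0.2])   # per-state epistemic variance proxy

U = np.zeros(3)
for _ in range(500):                  # contraction since gamma^2 < 1
    U = u_local + gamma**2 * P @ U
print("value-uncertainty estimates:", np.round(U, 3))
```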
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
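A toy sketch of a bilinear layer with Monte Carlo weight noise as a crude stand-in for Bayesian predictive uncertainty (shapes and constants are illustrative, not the paper's architecture):

```python
# Toy bilinear layer over a time x feature panel: Z = tanh(W1 @ X @ W2).
# W1 mixes time steps, W2 mixes features; sampling the weights gives a
# predictive distribution rather than a point forecast.
import numpy as np

rng = np.random.default_rng(2)
T_steps, F = 10, 4                    # time steps x features (e.g., LOB levels)
X = rng.standard_normal((T_steps, F))

W1_mu = 0.1 * rng.standard_normal((3, T_steps))   # temporal mixing
W2_mu = 0.1 * rng.standard_normal((F, 1))         # feature mixing

def forward(X, noise=0.05):
    # perturb weights around their means (crude Bayesian stand-in)
    W1 = W1_mu + noise * rng.standard_normal(W1_mu.shape)
    W2 = W2_mu + noise * rng.standard_normal(W2_mu.shape)
    return float(np.tanh(W1 @ X @ W2).sum())

preds = np.array([forward(X) for _ in range(200)])
print(f"mid-price move forecast: {preds.mean():.3f} +/- {preds.std():.3f}")
```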
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Loss Functions for Discrete Contextual Pricing with Observational Data [8.661128420558349]
We study a pricing setting where each customer is offered a contextualized price based on customer and/or product features.
We observe whether each customer purchased a product at the price prescribed rather than the customer's true valuation.
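An illustrative setup, with a plain logistic purchase-probability model and log loss standing in for the paper's proposed losses (the data-generating process and all constants are made up):

```python
# We see (features, offered price, purchase indicator) but never the
# valuation, so we fit a purchase-probability model and score candidate
# prices by expected revenue: price * P(buy | x, price).
import numpy as np

rng = np.random.default_rng(3)

# synthetic observational data: valuation is censored into buy / no-buy
n = 2000
x = rng.uniform(0, 1, n)
p = rng.uniform(1, 5, n)                      # historical (logged) prices
buy = (3.0 + x + rng.logistic(0, 0.5, n) > p).astype(float)

# logistic regression on (1, x, p) via plain gradient descent on log loss
Z = np.column_stack([np.ones(n), x, p])
w = np.zeros(3)
for _ in range(2000):
    q = 1.0 / (1.0 + np.exp(-Z @ w))          # predicted purchase probability
    w -= 0.1 * Z.T @ (q - buy) / n            # gradient step

def expected_revenue(x_new, price):
    z = np.array([1.0, x_new, price])
    return price / (1.0 + np.exp(-z @ w))

grid = np.linspace(1, 5, 100)
best = grid[np.argmax([expected_revenue(0.5, g) for g in grid])]
print(f"revenue-maximizing price at x=0.5: {best:.2f}")
```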
arXiv Detail & Related papers (2021-11-18T20:12:57Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Regularized Online Allocation Problems: Fairness and Beyond [7.433931244705934]
We introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption.
In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources.
The objective is to simultaneously maximize additively separable rewards and the value of a non-separable regularizer subject to the resource constraints.
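A stripped-down dual-descent sketch of the budgeted allocation core (the paper's algorithm additionally handles the non-linear regularizer; requests, costs, and budgets here are synthetic):

```python
# For each arriving request, pick the action maximizing reward minus the
# dual-price cost of its resource use, then update the dual prices toward
# the per-period budget rate (projected dual ascent).
import numpy as np

rng = np.random.default_rng(4)
T, K, R = 1000, 3, 2                  # requests, actions, resources
budget = np.array([300.0, 400.0])     # total resource budgets
rho = budget / T                      # per-period budget rate
mu = np.zeros(R)                      # dual prices
eta = 0.05                            # dual step size
spent = np.zeros(R)

for t in range(T):
    reward = rng.uniform(0, 1, K)             # request's per-action rewards
    cost = rng.uniform(0, 1, (K, R))          # per-action resource use
    scores = reward - cost @ mu               # price-adjusted scores
    a = int(np.argmax(scores))
    if scores[a] > 0 and np.all(spent + cost[a] <= budget):
        spent += cost[a]
        g = cost[a] - rho                     # dual (sub)gradient
    else:
        g = -rho                              # skip: budgets bind or no value
    mu = np.maximum(mu + eta * g, 0.0)        # project onto mu >= 0

print("resource use vs budget:", np.round(spent, 1), budget)
```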
arXiv Detail & Related papers (2020-07-01T14:24:58Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
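A tabular shortcut that conveys what the correction ratio is, computed here via power iteration on a known chain (GenDICE itself estimates the ratio by saddle-point optimization, without assuming an explicit model):

```python
# The ratio w(s) = mu_pi(s) / d_data(s) reweights off-policy samples so
# that empirical averages match stationary expectations under the target
# policy. Chain, rewards, and data distribution below are made up.
import numpy as np

rng = np.random.default_rng(5)
P = np.array([[0.9, 0.1, 0.0],        # target-policy transition matrix
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])

mu = np.ones(3) / 3                   # stationary distribution via power iteration
for _ in range(1000):
    mu = mu @ P

data = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])   # off-policy state samples
d_hat = np.bincount(data, minlength=3) / data.size   # empirical distribution
w = mu / d_hat                                       # correction ratio

reward = np.array([1.0, 0.0, 2.0])
est = (w[data] * reward[data]).mean()   # reweighted average of the samples
print(f"corrected estimate: {est:.3f}  true: {mu @ reward:.3f}")
```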
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.