Contextual Stochastic Vehicle Routing with Time Windows
- URL: http://arxiv.org/abs/2402.06968v1
- Date: Sat, 10 Feb 2024 14:56:36 GMT
- Title: Contextual Stochastic Vehicle Routing with Time Windows
- Authors: Breno Serrano, Alexandre M. Florio, Stefan Minner, Maximilian
Schiffer, Thibaut Vidal
- Abstract summary: We study the vehicle routing problem with time windows (VRPTW) and stochastic travel times.
We introduce the contextual stochastic VRPTW, which minimizes the total transportation cost and expected late arrival penalties conditioned on the observed features.
We present novel data-driven prescriptive models that use historical data to provide an approximate solution to the problem.
- Score: 47.91283991228738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the vehicle routing problem with time windows (VRPTW) and stochastic
travel times, in which the decision-maker observes related contextual
information, represented as feature variables, before making routing decisions.
Despite the extensive literature on stochastic VRPs, the integration of feature
variables has received limited attention in this context. We introduce the
contextual stochastic VRPTW, which minimizes the total transportation cost and
expected late arrival penalties conditioned on the observed features. Since the
joint distribution of travel times and features is unknown, we present novel
data-driven prescriptive models that use historical data to provide an
approximate solution to the problem. We distinguish the prescriptive models
between point-based approximation, sample average approximation, and
penalty-based approximation, each taking a different perspective on dealing
with stochastic travel times and features. We develop specialized
branch-price-and-cut algorithms to solve these data-driven prescriptive models.
In our computational experiments, we compare the out-of-sample cost performance
of different methods on instances with up to one hundred customers. Our results
show that, surprisingly, a feature-dependent sample average approximation
outperforms existing and novel methods in most settings.
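The feature-dependent sample average approximation highlighted in the abstract averages costs only over historical scenarios whose features resemble the currently observed ones. The sketch below illustrates this idea with a simple k-nearest-neighbor scenario selection; the function name, data layout, and the k-NN weighting scheme are illustrative assumptions, not the paper's exact model or branch-price-and-cut implementation.

```python
import numpy as np

def feature_dependent_saa_cost(route, x_obs, hist_features, hist_times,
                               deadlines, travel_cost,
                               penalty_weight=1.0, k=5):
    """Hedged sketch of a feature-dependent SAA route evaluation.

    route         : customer indices in visiting order (0 = depot)
    x_obs         : observed feature vector
    hist_features : (n, d) historical feature vectors
    hist_times    : (n, m, m) historical travel-time matrices
    deadlines     : dict customer -> latest arrival time
    travel_cost   : (m, m) deterministic arc cost matrix
    """
    # Select the k historical scenarios whose features lie closest
    # to the observed feature vector.
    dist = np.linalg.norm(hist_features - x_obs, axis=1)
    nearest = np.argsort(dist)[:k]

    # Deterministic transportation cost of the route (depot -> ... -> depot).
    arcs = list(zip([0] + route, route + [0]))
    cost = sum(travel_cost[i, j] for i, j in arcs)

    # Late-arrival penalty averaged over the selected scenarios.
    late = 0.0
    for s in nearest:
        t = 0.0
        for i, j in arcs:
            t += hist_times[s, i, j]
            if j != 0 and t > deadlines[j]:
                late += t - deadlines[j]
    return cost + penalty_weight * late / k
```

In the paper this conditional expectation is embedded in an exact branch-price-and-cut algorithm rather than evaluated per route as above; the sketch only conveys how conditioning on features reshapes the sample average.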
Related papers
- Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization [0.0]
We present an integrated learning and optimization procedure that yields pointwise forecasts tailored to the downstream stochastic optimization problem.
Numerical experiments on inventory problems from the literature, as well as a bike-sharing problem with real data, demonstrate that the proposed approach performs well.
arXiv Detail & Related papers (2024-11-05T21:54:50Z) - Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients.
We study a class of offline learning algorithms for CLO with bandit feedback.
We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate.
arXiv Detail & Related papers (2024-05-26T13:27:27Z) - Align Your Steps: Optimizing Sampling Schedules in Diffusion Models [63.927438959502226]
Diffusion models (DMs) have established themselves as the state-of-the-art generative modeling approach in the visual domain and beyond.
A crucial drawback of DMs is their slow sampling speed, relying on many sequential function evaluations through large neural networks.
We propose a general and principled approach to optimizing the sampling schedules of DMs for high-quality outputs.
arXiv Detail & Related papers (2024-04-22T18:18:41Z) - Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z) - Data-Driven Sample Average Approximation with Covariate Information [0.0]
We study optimization for data-driven decision-making when we have observations of the uncertain parameters within the optimization model together with concurrent observations of covariates.
We investigate three data-driven frameworks that integrate a machine learning prediction model within a stochastic-programming sample average approximation (SAA) for approximating the solution to this problem.
arXiv Detail & Related papers (2022-07-27T14:45:04Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural
Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Multi-modal anticipation of stochastic trajectories in a dynamic
environment with Conditional Variational Autoencoders [0.12183405753834559]
Short-term motion of nearby vehicles is not strictly limited to a set of single trajectories.
We propose to account for the multi-modality of the problem with the use of a Conditional Variational Autoencoder (C-VAE) conditioned on an agent's past motion as well as a scene encoded with a Capsule Network (CapsNet).
In addition, we demonstrate the advantages of employing a Minimum-over-N loss, which minimises the loss with respect to the closest of N generated samples, effectively leading to more diverse predictions.
arXiv Detail & Related papers (2021-03-05T19:38:26Z) - Autoregressive Denoising Diffusion Models for Multivariate Probabilistic
Time Series Forecasting [4.1573460459258245]
We use diffusion probabilistic models, a class of latent variable models closely connected to score matching and energy-based methods.
Our model learns gradients by optimizing a variational bound on the data likelihood and at inference time converts white noise into a sample of the distribution of interest.
arXiv Detail & Related papers (2021-01-28T15:46:10Z) - A Generative Learning Approach for Spatio-temporal Modeling in Connected
Vehicular Network [55.852401381113786]
This paper proposes LaMI (Latency Model Inpainting), a novel framework for modeling the spatio-temporal quality of wireless access latency of connected vehicles.
LaMI adopts ideas from image inpainting and synthesis and can reconstruct the missing latency samples by a two-step procedure.
In particular, it first discovers the spatial correlation between samples collected in various regions using a patching-based approach and then feeds the original and highly correlated samples into a Variational Autoencoder (VAE).
arXiv Detail & Related papers (2020-03-16T03:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.