DDKSP: A Data-Driven Stochastic Programming Framework for Car-Sharing
Relocation Problem
- URL: http://arxiv.org/abs/2001.08109v1
- Date: Mon, 20 Jan 2020 19:04:29 GMT
- Title: DDKSP: A Data-Driven Stochastic Programming Framework for Car-Sharing
Relocation Problem
- Authors: Xiaoming Li, Chun Wang, Xiao Huang
- Abstract summary: We investigate the car-sharing relocation problem (CSRP) under uncertain demands.
To overcome this problem, an innovative framework called Data-Driven Kernel Stochastic Programming (DDKSP) is proposed.
The proposed framework outperforms pure parametric approaches (Gaussian, Laplace, and Poisson) by 3.72%, 4.58% and 11%, respectively, in terms of overall profits.
- Score: 17.440172040605354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Car sharing is a popular research topic in the sharing economy. In this
paper, we investigate the car-sharing relocation problem (CSRP) under uncertain
demands. In practice, real customer demands follow complicated probability
distributions that cannot be described well by parametric approaches. To
overcome this problem, we propose an innovative framework called Data-Driven
Kernel Stochastic Programming (DDKSP) that integrates a non-parametric
approach, kernel density estimation (KDE), with a two-stage stochastic
programming (SP) model. Specifically, the probability distributions are derived
from historical data by KDE and used as the uncertain input parameters of the
SP model. The CSRP itself is formulated as a two-stage SP model, and a Monte
Carlo method, sample average approximation (SAA), together with the Benders
decomposition algorithm is introduced to solve the resulting large-scale
optimization model. Finally, numerical experiments based on New York taxi trip
data sets show that the proposed framework outperforms pure parametric
approaches based on Gaussian, Laplace, and Poisson distributions by 3.72%,
4.58% and 11%, respectively, in terms of overall profits.
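
As a rough illustration of the pipeline described in the abstract, the sketch below fits a kernel density estimate to synthetic, hypothetical historical demand data, draws SAA scenarios from it, and scores candidate first-stage fleet allocations against those scenarios with a deliberately simplified recourse rule. The station count, prices, and candidate allocations are illustrative assumptions, not values from the paper, which instead solves the full two-stage SP with Benders decomposition.

    # Minimal sketch of the DDKSP idea (not the authors' code): KDE-based
    # scenario generation feeding a sample-average-approximation evaluation
    # of first-stage fleet allocations. All numbers below are illustrative
    # assumptions, not data or parameters from the paper.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical historical demand records: one row per station, one column per day.
    n_stations = 3
    historical_demand = rng.poisson(lam=[20, 35, 50], size=(365, n_stations)).T

    # Step 1: non-parametric estimation of the joint demand distribution via KDE.
    kde = gaussian_kde(historical_demand)

    # Step 2: SAA -- draw demand scenarios from the fitted density.
    n_scenarios = 1000
    scenarios = np.clip(kde.resample(n_scenarios, seed=1), 0.0, None)  # (n_stations, n_scenarios)

    fare, holding_cost = 10.0, 2.0  # assumed per-trip revenue and per-car daily cost

    def saa_profit(allocation):
        """Average profit of a first-stage allocation over the sampled scenarios,
        using a simplified recourse: each station serves min(allocation, demand)."""
        served = np.minimum(allocation[:, None], scenarios)
        return float((fare * served.sum(axis=0) - holding_cost * allocation.sum()).mean())

    # Step 3: compare a few candidate allocations of a 100-car fleet.
    # (The paper instead optimizes the two-stage SP with Benders decomposition.)
    candidates = [np.array(a) for a in [(20, 35, 45), (25, 35, 40), (30, 30, 40)]]
    best = max(candidates, key=saa_profit)
    print("best allocation:", best, "estimated profit:", round(saa_profit(best), 2))

Averaging the recourse profit over the KDE-drawn scenarios is the SAA estimate of the expected second-stage value for a given first-stage decision; the paper optimizes over that estimate rather than enumerating candidates.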
Related papers
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference [55.150117654242706] (arXiv, 2024-11-01)
  We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.
  As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
- A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty [1.8416014644193066] (arXiv, 2023-07-05)
  We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty.
  A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior, in order to identify regions of the problem domain where the parametric closure is insufficient.
- Chance-Constrained Multiple-Choice Knapsack Problem: Model, Algorithms, and Applications [38.98556852157875] (arXiv, 2023-06-26)
  We focus on the practical scenario of CCMCKP, where the probability distributions of the random weights are unknown and only sample data is available.
  To solve CCMCKP, we propose a data-driven adaptive local search (DDALS) algorithm.
- Online Probabilistic Model Identification using Adaptive Recursive MCMC [8.465242072268019] (arXiv, 2022-10-23)
  We suggest the Adaptive Recursive Markov Chain Monte Carlo (ARMCMC) method.
  It eliminates the shortcomings of conventional online techniques while computing the entire probability density function of the model parameters.
  We demonstrate our approach using parameter estimation in a soft bending actuator and the Hunt-Crossley dynamic model.
- Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556] (arXiv, 2022-03-18)
  We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
  We leverage randomized sketches to reduce the problem dimensions as well as to preserve privacy and improve straggler resilience in asynchronous distributed systems.
- Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135] (arXiv, 2022-01-28)
  We use the inner dimension of deep-neural-network-based autoencoders as summary statistics.
  To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
- Solving the non-preemptive two queue polling model with generally distributed service and switch-over durations and Poisson arrivals as a Semi-Markov Decision Process [0.0] (arXiv, 2021-12-13)
  The polling system with switch-over durations is a useful model with several practical applications.
  It is classified as a Discrete Event Dynamic System (DEDS), for which no single agreed-upon modelling approach exists.
  This paper presents a Semi-Markov Decision Process (SMDP) formulation of the polling system so as to introduce additional modelling power.
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425] (arXiv, 2021-03-18)
  We argue for the use of neural generative models to characterize the worst-case distribution.
  This approach poses a number of implementation and optimization challenges.
  We find that the proposed approach yields models that are more robust than comparable baselines.
- Autoregressive Score Matching [113.4502004812927] (arXiv, 2020-10-24)
  We propose autoregressive conditional score models (AR-CSM), in which we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
  For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
  We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
- Implicit Distributional Reinforcement Learning [61.166030238490634] (arXiv, 2020-07-13)
  The implicit distributional actor-critic (IDAC) is built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
  We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
- Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo [8.94392435424862] (arXiv, 2020-07-01)
  Decentralized SGLD (DE-SGLD) and decentralized SGHMC (DE-SGHMC) are algorithms for scalable Bayesian inference in the decentralized setting for large datasets.
  We show that when the posterior distribution is strongly log-concave and smooth, the iterates of these algorithms converge linearly to a neighborhood of the target distribution in the 2-Wasserstein distance.
This list is automatically generated from the titles and abstracts of the papers on this site.