A Bayesian Optimization approach for calibrating large-scale
activity-based transport models
- URL: http://arxiv.org/abs/2302.03480v1
- Date: Tue, 7 Feb 2023 14:09:41 GMT
- Title: A Bayesian Optimization approach for calibrating large-scale
activity-based transport models
- Authors: Serio Agriesti, Vladimir Kuzmanovski, Jaakko Hollmén, Claudio
Roncoli and Bat-hen Nahmias-Biran
- Abstract summary: Agent-Based and Activity-Based modeling in transportation is rising due to the capability of addressing complex applications.
This paper proposes a novel Bayesian Optimization approach incorporating a surrogate model in the form of an improved Random Forest.
The proposed method is tested on a case study for the city of Tallinn, Estonia, where the model to be calibrated consists of 477 behavioral parameters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of Agent-Based and Activity-Based modeling in transportation is
rising due to the capability of addressing complex applications such as
disruptive trends (e.g., remote working and automation) or the design and
assessment of disaggregated management strategies. Still, the broad adoption of
large-scale disaggregate models is not materializing due to the inherently high
complexity and computational needs. Activity-based models focused on behavioral
theory, for example, may involve hundreds of parameters that need to be
calibrated to match the detailed socio-economical characteristics of the
population for any case study. This paper tackles this issue by proposing a
novel Bayesian Optimization approach incorporating a surrogate model in the
form of an improved Random Forest, designed to automate the calibration process
of the behavioral parameters. The proposed method is tested on a case study for
the city of Tallinn, Estonia, where the model to be calibrated consists of 477
behavioral parameters, using the SimMobility MT software. Satisfactory
performance is achieved in the major indicators defined for the calibration
process: the error for the overall number of trips is equal to 4% and the
average error in the OD matrix is 15.92 vehicles per day.
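The calibration loop described in the abstract can be sketched in a few lines. This is a hypothetical minimal illustration, assuming a toy quadratic error function in place of a SimMobility MT run and scikit-learn's stock RandomForestRegressor in place of the paper's improved Random Forest; the per-tree spread is used as a simple uncertainty signal for the acquisition function:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
DIM = 5  # stand-in for the 477 behavioral parameters

def simulate(theta):
    """Toy stand-in for a SimMobility MT run: returns a calibration error."""
    return float(np.sum((theta - 0.3) ** 2))

# Initial design: random parameter vectors and their simulated errors
X = rng.uniform(0, 1, size=(10, DIM))
y = np.array([simulate(x) for x in X])

for _ in range(20):
    # Refit the surrogate on all evaluated (parameters, error) pairs
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    # Score a cheap candidate pool; acquisition = mean - kappa * tree spread
    cand = rng.uniform(0, 1, size=(500, DIM))
    per_tree = np.stack([t.predict(cand) for t in rf.estimators_])
    mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
    acq = mean - 1.0 * std  # lower is better; the std term rewards exploration
    best = cand[np.argmin(acq)]
    # Spend one expensive simulation on the most promising candidate
    X = np.vstack([X, best])
    y = np.append(y, simulate(best))
```

Each iteration costs one expensive simulation plus a cheap surrogate refit, which is the point of the surrogate-assisted design.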
Related papers
- Surrogate Models for Agent-Based Models: Challenges, Methods and Applications [0.0]
Agent-based models (ABM) are widely used to study emergent phenomena arising from local interactions.
The complexity of ABM limits their feasibility for real-time decision-making and large-scale scenario analysis.
To address these limitations, surrogate models offer an efficient alternative by learning approximations from sparse simulation data.
arXiv Detail & Related papers (2025-05-17T08:55:33Z) - Bayesian Experimental Design for Model Discrepancy Calibration: An Auto-Differentiable Ensemble Kalman Inversion Approach [0.0]
We propose a hybrid BED framework enabled by auto-differentiable ensemble Kalman inversion (AD-EKI)
We iteratively optimize experimental designs, decoupling the inference of the low-dimensional physical parameters, handled by standard BED methods, from the high-dimensional model discrepancy, handled by AD-EKI.
The proposed method is studied on a classical convection-diffusion BED example.
arXiv Detail & Related papers (2025-04-29T00:10:45Z) - Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging [75.93960998357812]
Deep model merging represents an emerging research direction that combines multiple fine-tuned models to harness their capabilities across different tasks and domains.
Current model merging techniques focus on merging all available models simultaneously, with weight matrices-based methods being the predominant approaches.
We propose a training-free projection-based continual merging method that processes models sequentially.
arXiv Detail & Related papers (2025-01-16T13:17:24Z) - MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP)
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
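The amortized-Pareto-front idea can be illustrated with a deliberately tiny stand-in (hypothetical code, not the authors' implementation): treat each task's loss as a quadratic in a single merging coefficient, fit that quadratic from a few merged-model evaluations, then read the Pareto front off the cheap surrogate instead of re-evaluating the merged model densely:

```python
import numpy as np

# Two "fine-tuned models" as weight vectors; merged(c) = c*w1 + (1-c)*w2
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
t1, t2 = np.array([0.9, 0.1]), np.array([0.2, 0.8])  # per-task optima

def task_loss(w, target):
    return float(np.sum((w - target) ** 2))

# Evaluate only a few merges, then fit a quadratic surrogate per task
cs = np.array([0.0, 0.5, 1.0])
L1 = [task_loss(c * w1 + (1 - c) * w2, t1) for c in cs]
L2 = [task_loss(c * w1 + (1 - c) * w2, t2) for c in cs]
q1 = np.polyfit(cs, L1, 2)  # amortized quadratic approximation, task 1
q2 = np.polyfit(cs, L2, 2)  # amortized quadratic approximation, task 2

# Sweep the surrogate densely: each c trades task-1 loss against task-2 loss
grid = np.linspace(0, 1, 101)
p1, p2 = np.polyval(q1, grid), np.polyval(q2, grid)
# Pareto-optimal coefficients: no other c is at least as good on both tasks
# and strictly better on one
pareto = [c for c, a, b in zip(grid, p1, p2)
          if not any(pa <= a and pb <= b and (pa < a or pb < b)
                     for pa, pb in zip(p1, p2))]
```

With scalar coefficients the losses here really are quadratic, so the surrogate is exact; the paper's contribution is making this kind of amortization work for many models and tasks.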
arXiv Detail & Related papers (2024-06-11T17:55:25Z) - Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z) - Automatic Gradient Estimation for Calibrating Crowd Models with Discrete Decision Making [0.0]
Gradients governing the choice of candidate solutions are calculated from sampled simulation trajectories.
We consider the calibration of force-based crowd evacuation models based on the popular Social Force model.
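A gradient-based calibration loop of this flavor can be sketched as follows. This is a simplified stand-in under stated assumptions: the toy `evacuation_time` objective and the plain central finite difference are illustrative substitutes, not the paper's crowd simulator or its automatic gradient estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

def evacuation_time(strength, n=200):
    """Toy stand-in for a crowd simulation: a noisy objective that
    depends on a force-strength parameter, minimized near strength == 2.0."""
    noise = rng.normal(0, 0.05, size=n).mean()  # averaged trajectory noise
    return (strength - 2.0) ** 2 + 5.0 + noise

target = 5.0   # observed evacuation time to match
theta = 0.5    # initial guess for the force-strength parameter
h = 0.1        # finite-difference step
for _ in range(100):
    # Central finite difference estimated from sampled simulation runs
    g = (evacuation_time(theta + h) - evacuation_time(theta - h)) / (2 * h)
    # Gradient of the squared calibration error via the chain rule
    loss_grad = 2.0 * (evacuation_time(theta) - target) * g
    theta -= 0.05 * loss_grad
```

Averaging over many simulated trajectories keeps the finite-difference estimate usable despite the stochastic objective; the paper's contribution is obtaining such gradients automatically even when the model makes discrete decisions.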
arXiv Detail & Related papers (2024-04-06T16:48:12Z) - Variational Inference of Parameters in Opinion Dynamics Models [9.51311391391997]
This work uses variational inference to estimate the parameters of an opinion dynamics ABM.
We transform the inference process into an optimization problem suitable for automatic differentiation.
Our approach estimates both macroscopic (bounded confidence intervals and backfire thresholds) and microscopic ($200$ categorical, agent-level roles) more accurately than simulation-based and MCMC methods.
arXiv Detail & Related papers (2024-03-08T14:45:18Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL)
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Extending Process Discovery with Model Complexity Optimization and
Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining providing semi-automatic support to model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z) - Evaluating model-based planning and planner amortization for continuous
control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z) - Variational Inference with NoFAS: Normalizing Flow with Adaptive
Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternatively updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z) - Surrogate Assisted Methods for the Parameterisation of Agent-Based
Models [0.0]
Calibration is a major challenge in agent-based modelling and simulation.
We propose an ABMS framework which facilitates the effective integration of different sampling methods and surrogate models.
arXiv Detail & Related papers (2020-08-26T21:47:02Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the training pipeline yields predictions of the unknown parameters that lead to better downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.