Quantitatively Assessing the Benefits of Model-driven Development in
Agent-based Modeling and Simulation
- URL: http://arxiv.org/abs/2006.08820v1
- Date: Mon, 15 Jun 2020 23:29:04 GMT
- Authors: Fernando Santos, Ingrid Nunes, Ana L. C. Bazzan
- Abstract summary: This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The agent-based modeling and simulation (ABMS) paradigm has been used to
analyze, reproduce, and predict phenomena related to many application areas.
Although there are many agent-based platforms that support simulation
development, they rely on programming languages that require extensive
programming knowledge. Model-driven development (MDD) has been explored to
facilitate simulation modeling, by means of high-level modeling languages that
provide reusable building blocks that hide computational complexity, and code
generation. However, there is still limited knowledge of how MDD approaches to
ABMS contribute to increasing development productivity and quality. We thus in
this paper present an empirical study that quantitatively compares the use of
MDD and ABMS platforms mainly in terms of effort and developer mistakes. Our
evaluation was performed using MDD4ABMS-an MDD approach with a core and
extensions to two application areas, one of which developed for this study-and
NetLogo, a widely used platform. The obtained results show that MDD4ABMS
requires less effort to develop simulations with similar (sometimes better)
design quality than NetLogo, giving evidence of the benefits that MDD can
provide to ABMS.
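To make the ABMS paradigm the abstract contrasts with MDD more concrete, the following is a minimal sketch of an agent-based simulation in plain Python. All names here are illustrative; this is not the MDD4ABMS or NetLogo API, only an example of the kind of hand-written simulation code that MDD approaches aim to generate from high-level models.

```python
import random

class Agent:
    """A minimal agent that performs a 1-D random walk each tick."""
    def __init__(self, position=0):
        self.position = position

    def step(self):
        # Each tick the agent moves one unit left or right at random.
        self.position += random.choice([-1, 1])

def run_simulation(num_agents=10, ticks=100, seed=42):
    """Run a toy simulation and return the final agent positions."""
    random.seed(seed)
    agents = [Agent() for _ in range(num_agents)]
    for _ in range(ticks):
        for agent in agents:
            agent.step()
    return [a.position for a in agents]

positions = run_simulation()
print(len(positions))  # prints 10
```

Even this toy example shows the scheduling loop and per-agent update logic that a developer must write by hand on a conventional platform; MDD approaches such as MDD4ABMS hide this kind of boilerplate behind reusable building blocks and code generation.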
Related papers
- Mixture-of-Instructions: Comprehensive Alignment of a Large Language Model through the Mixture of Diverse System Prompting Instructions [7.103987978402038]
We introduce a novel technique termed Mixture-of-Instructions (MoI)
MoI employs a strategy of instruction concatenation combined with diverse system prompts to boost the alignment efficiency of language models.
Our methodology was applied to the open-source Qwen-7B-chat model, culminating in the development of Qwen-SFT-MoI.
arXiv Detail & Related papers (2024-04-29T03:58:12Z)
- Promising and worth-to-try future directions for advancing state-of-the-art surrogates methods of agent-based models in social and health computational sciences [0.0]
Execution and runtime performance of model-based analysis tools for realistic large-scale ABMs can be excessively long.
The main aim of this ad-hoc brief report is to highlight some surrogate models that were adequate and computationally less demanding for nonlinear dynamical models.
arXiv Detail & Related papers (2024-03-07T11:30:56Z)
- Model Composition for Multimodal Large Language Models [73.70317850267149]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- A Survey for Foundation Models in Autonomous Driving [11.726604658478152]
Large language models contribute to planning and simulation in autonomous driving.
Vision foundation models are increasingly adapted for critical tasks such as 3D object detection and tracking.
Multi-modal foundation models, integrating diverse inputs, exhibit exceptional visual understanding and spatial reasoning.
arXiv Detail & Related papers (2024-02-02T02:44:59Z)
- MMToM-QA: Multimodal Theory of Mind Question Answering [80.87550820953236]
Theory of Mind (ToM) is an essential ingredient for developing machines with human-level social intelligence.
Recent machine learning models, particularly large language models, seem to show some aspects of ToM understanding.
Human ToM, on the other hand, is more than video or text understanding.
People can flexibly reason about another person's mind based on conceptual representations extracted from any available data.
arXiv Detail & Related papers (2024-01-16T18:59:24Z)
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute cost and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Facilitating automated conversion of scientific knowledge into scientific simulation models with the Machine Assisted Generation, Calibration, and Comparison (MAGCC) Framework [0.0]
The Machine Assisted Generation, Calibration, and Comparison (MAGCC) framework provides machine assistance and automation of recurrent crucial steps and processes.
MAGCC bridges systems for knowledge extraction via natural language processing or from existing mathematical models.
The MAGCC framework can be customized to any scientific domain, and future work will integrate newly developed code-generating AI systems.
arXiv Detail & Related papers (2022-04-21T19:30:50Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Sim-Env: Decoupling OpenAI Gym Environments from Simulation Models [0.0]
Reinforcement learning (RL) is one of the most active fields of AI research.
Development methodology still lags behind, with a severe lack of standard APIs to foster the development of RL applications.
We present a workflow and tools for the decoupled development and maintenance of multi-purpose agent-based models and derived single-purpose reinforcement learning environments.
arXiv Detail & Related papers (2021-02-19T09:25:21Z)
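The decoupling the Sim-Env entry describes, deriving a single-purpose RL environment from a multi-purpose agent-based model, can be sketched as follows. This is a hypothetical illustration of the general pattern, not the actual Sim-Env API; the class names, the target-speed task, and the reward shape are all invented for this example, and only the step/reset interface mimics the Gym convention.

```python
class TrafficModel:
    """Multi-purpose agent-based model (illustrative stand-in)."""
    def __init__(self):
        self.speed = 0

    def advance(self, acceleration):
        # Domain logic lives in the model, independent of any RL concerns.
        self.speed = max(0, self.speed + acceleration)
        return self.speed

class SpeedControlEnv:
    """Single-purpose RL environment derived from the shared model.

    Mimics the Gym reset/step interface while keeping the reward and
    episode logic out of the underlying simulation model."""
    def __init__(self, model_factory, target_speed=5):
        self._factory = model_factory
        self._target = target_speed
        self._model = None

    def reset(self):
        self._model = self._factory()
        return self._model.speed  # initial observation

    def step(self, action):
        speed = self._model.advance(action)
        reward = -abs(speed - self._target)  # task-specific shaping
        done = speed == self._target
        return speed, reward, done, {}

env = SpeedControlEnv(TrafficModel)
obs = env.reset()
obs, reward, done, info = env.step(1)
```

The design choice the paper advocates is visible here: the simulation model carries no reward or termination logic, so the same model can back multiple derived environments that are developed and maintained separately.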
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.