Quantitatively Assessing the Benefits of Model-driven Development in
Agent-based Modeling and Simulation
- URL: http://arxiv.org/abs/2006.08820v1
- Date: Mon, 15 Jun 2020 23:29:04 GMT
- Title: Quantitatively Assessing the Benefits of Model-driven Development in
Agent-based Modeling and Simulation
- Authors: Fernando Santos, Ingrid Nunes, Ana L. C. Bazzan
- Abstract summary: This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
- Score: 80.49040344355431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The agent-based modeling and simulation (ABMS) paradigm has been used to
analyze, reproduce, and predict phenomena related to many application areas.
Although there are many agent-based platforms that support simulation
development, they rely on programming languages that require extensive
programming knowledge. Model-driven development (MDD) has been explored to
facilitate simulation modeling, by means of high-level modeling languages that
provide reusable building blocks that hide computational complexity, and code
generation. However, there is still limited knowledge of how MDD approaches to
ABMS contribute to increasing development productivity and quality. We thus in
this paper present an empirical study that quantitatively compares the use of
MDD and ABMS platforms mainly in terms of effort and developer mistakes. Our
evaluation was performed using MDD4ABMS-an MDD approach with a core and
extensions to two application areas, one of which developed for this study-and
NetLogo, a widely used platform. The obtained results show that MDD4ABMS
requires less effort to develop simulations with similar (sometimes better)
design quality than NetLogo, giving evidence of the benefits that MDD can
provide to ABMS.
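To illustrate the kind of low-level code that agent-based platforms require developers to write, and that MDD building blocks and code generation aim to hide, here is a minimal agent-based simulation loop. This is a hypothetical sketch in Python, not code from MDD4ABMS or NetLogo; all names are illustrative:

```python
import random

class Agent:
    """A minimal agent that performs a random walk on a bounded grid."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, width, height, rng):
        # Move one cell in a random direction, clamped to the grid bounds.
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.x = max(0, min(width - 1, self.x + dx))
        self.y = max(0, min(height - 1, self.y + dy))

def run_simulation(n_agents=10, width=20, height=20, ticks=50, seed=42):
    """Scheduler loop: create agents, then advance every agent each tick."""
    rng = random.Random(seed)
    agents = [Agent(rng.randrange(width), rng.randrange(height))
              for _ in range(n_agents)]
    for _ in range(ticks):
        for agent in agents:
            agent.step(width, height, rng)
    return agents

agents = run_simulation()
print(len(agents))  # prints 10
```

Even this toy example involves scheduling, boundary handling, and state management; the paper's premise is that a modeling language with reusable building blocks lets domain experts specify such behavior without writing this plumbing by hand.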
Related papers
- A Survey on Multimodal Benchmarks: In the Era of Large AI Models [13.299775710527962]
Multimodal Large Language Models (MLLMs) have brought substantial advancements in artificial intelligence.
This survey systematically reviews 211 benchmarks that assess MLLMs across four core domains: understanding, reasoning, generation, and application.
arXiv Detail & Related papers (2024-09-21T15:22:26Z)
- MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct [148.39859547619156]
We propose MMEvol, a novel multimodal instruction data evolution framework.
MMEvol iteratively improves data quality through a refined combination of fine-grained perception, cognitive reasoning, and interaction evolution.
Our approach reaches state-of-the-art (SOTA) performance in nine tasks using significantly less data compared to state-of-the-art models.
arXiv Detail & Related papers (2024-09-09T17:44:00Z)
- Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework [0.3749861135832073]
Text2BIM is a multi-agent framework that generates 3D building models from natural language instructions.
A rule-based model checker is introduced into the agentic workflow to guide the LLM agents in resolving issues within the generated models.
The framework can effectively generate high-quality, structurally rational building models that are aligned with the abstract concepts specified by user input.
arXiv Detail & Related papers (2024-08-15T09:48:45Z)
- Mixture-of-Instructions: Comprehensive Alignment of a Large Language Model through the Mixture of Diverse System Prompting Instructions [7.103987978402038]
We introduce a novel technique termed Mixture-of-Instructions (MoI).
MoI employs a strategy of instruction concatenation combined with diverse system prompts to boost the alignment efficiency of language models.
Our methodology was applied to the open-source Qwen-7B-chat model, culminating in the development of Qwen-SFT-MoI.
arXiv Detail & Related papers (2024-04-29T03:58:12Z)
- Promising and worth-to-try future directions for advancing state-of-the-art surrogates methods of agent-based models in social and health computational sciences [0.0]
Execution and runtime performance of model-based analysis tools for realistic large-scale ABMs can be excessively long.
The main aim of this ad-hoc brief report is to highlight some of the surrogate models that have proven adequate and computationally less demanding for nonlinear dynamical models.
arXiv Detail & Related papers (2024-03-07T11:30:56Z)
- Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- MMToM-QA: Multimodal Theory of Mind Question Answering [80.87550820953236]
Theory of Mind (ToM) is an essential ingredient for developing machines with human-level social intelligence.
Recent machine learning models, particularly large language models, seem to show some aspects of ToM understanding.
Human ToM, on the other hand, is more than video or text understanding.
People can flexibly reason about another person's mind based on conceptual representations extracted from any available data.
arXiv Detail & Related papers (2024-01-16T18:59:24Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute cost and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Sim-Env: Decoupling OpenAI Gym Environments from Simulation Models [0.0]
Reinforcement learning (RL) is one of the most active fields of AI research.
Development methodology still lags behind, with a severe lack of standard APIs to foster the development of RL applications.
We present a workflow and tools for the decoupled development and maintenance of multi-purpose agent-based models and derived single-purpose reinforcement learning environments.
arXiv Detail & Related papers (2021-02-19T09:25:21Z)
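The decoupling idea described above can be sketched as a thin environment wrapper that exposes a Gym-style reset/step interface while delegating all dynamics to an independently maintained domain model. This is a hypothetical illustration, not the Sim-Env API; the class and method names are invented, and no Gym dependency is used:

```python
class CrowdModel:
    """Multi-purpose agent-based model, maintained independently of any RL use."""
    def __init__(self):
        self.position = 0

    def advance(self, delta):
        # Domain logic lives here, with no knowledge of RL concepts.
        self.position += delta
        return self.position

class LineWorldEnv:
    """Single-purpose RL environment: wraps CrowdModel behind a Gym-style API."""
    def __init__(self, model, goal=5):
        self.model = model
        self.goal = goal

    def reset(self):
        self.model.position = 0
        return self.model.position  # observation

    def step(self, action):
        # action: 0 = move left, 1 = move right
        obs = self.model.advance(1 if action == 1 else -1)
        done = obs >= self.goal
        reward = 1.0 if done else -0.1
        return obs, reward, done, {}

env = LineWorldEnv(CrowdModel())
obs = env.reset()
while True:
    obs, reward, done, info = env.step(1)
    if done:
        break
print(obs)  # prints 5
```

The separation means the agent-based model can evolve for its own purposes (validation, scenario analysis) while multiple derived environments, each with its own observations and rewards, reuse it unchanged.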
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.