Towards participatory multi-modeling for policy support across domains
and scales: a systematic procedure for integral multi-model design
- URL: http://arxiv.org/abs/2402.06228v1
- Date: Fri, 9 Feb 2024 07:35:40 GMT
- Title: Towards participatory multi-modeling for policy support across domains
and scales: a systematic procedure for integral multi-model design
- Authors: Vittorio Nespeca (1 and 2 and 3), Rick Quax (1 and 2), Marcel G. M.
Olde Rikkert (4), Hubert P. L. M. Korzilius (5), Vincent A. W. J. Marchau
(5), Sophie Hadijsotiriou (4), Tom Oreel (4), Jannie Coenen (5), Heiman
Wertheim (6), Alexey Voinov (7), Etiënne A.J.A. Rouwette (5), Vítor V.
Vasconcelos (1 and 2 and 8) ((1) Computational Science Lab - University of
Amsterdam, (2) POLDER - Institute for Advanced Study - University of
Amsterdam, (3) Faculty of Technology Policy and Management - Delft University
of Technology, (4) Department Geriatrics - Radboud University Medical Center,
(5) Institute for Management Research - Radboud University, (6) Department
Medical Microbiology - Radboud University Medical Center, (7) Faculty of
Engineering Technology - Twente University, (8) Centre for Urban Mental
Health - University of Amsterdam)
- Abstract summary: Policymaking for complex challenges such as pandemics requires consideration of intricate implications across multiple domains and scales.
Integral multi-models can be assembled from existing computational models or be designed conceptually as a whole.
This article introduces a procedure for developing multi-models with an integral approach based on clearly defined domain knowledge requirements.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Policymaking for complex challenges such as pandemics necessitates the
consideration of intricate implications across multiple domains and scales.
Computational models can support policymaking, but a single model is often
insufficient for such multidomain, multiscale challenges. Multi-models comprising
several interacting computational models at different scales or relying on
different modeling paradigms offer a potential solution. Such multi-models can
be assembled from existing computational models (i.e., integrated modeling) or
be designed conceptually as a whole before their computational implementation
(i.e., integral modeling). Integral modeling is particularly valuable for novel
policy problems, such as those faced in the early stages of a pandemic, where
relevant models may be unavailable or lack standard documentation. Designing
such multi-models through an integral approach is, however, a complex task
requiring the collaboration of modelers and experts from various domains. In
this collaborative effort, modelers must precisely define the domain knowledge
needed from experts and establish a systematic procedure for translating such
knowledge into a multi-model. Yet, these requirements and systematic procedures
are currently lacking for multi-models that are both multiscale and
multi-paradigm. We address this challenge by introducing a procedure for
developing multi-models with an integral approach based on clearly defined
domain knowledge requirements derived from literature. We illustrate this
procedure using the case of school closure policies in the Netherlands during
the COVID-19 pandemic, revealing their potential implications in the short and
long term and across the healthcare and educational domains. The requirements
and procedure provided in this article advance the application of integral
multi-modeling for policy support in multiscale and multidomain contexts.
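The abstract characterizes a multi-model as several interacting computational models at different scales or relying on different modeling paradigms. The sketch below is purely illustrative and is not the authors' procedure: it couples a population-level SIR epidemic model (a system-dynamics paradigm) with a simple school-closure policy module (a rule-based paradigm) whose decisions feed back into the epidemic's contact rate. All function names, parameter values, and thresholds are hypothetical.

```python
# Minimal illustrative multi-model: an SIR epidemic sub-model coupled with a
# rule-based school-closure policy sub-model. Hypothetical parameters only.

def run_multi_model(days=120, beta_open=0.30, beta_closed=0.18,
                    gamma=0.10, threshold=0.05, i0=1e-5):
    # State as population fractions: susceptible, infected, recovered.
    s, i, r = 1.0 - i0, i0, 0.0
    schools_open = True
    closure_days = 0
    for _ in range(days):
        # Policy sub-model: close schools when infection prevalence exceeds
        # the threshold; reopen only once it falls below half the threshold
        # (hysteresis avoids rapid open/close flip-flopping).
        if schools_open:
            schools_open = i < threshold
        else:
            schools_open = i < threshold / 2
        if not schools_open:
            closure_days += 1
        # Coupling: the policy decision changes the epidemic's contact rate.
        beta = beta_open if schools_open else beta_closed
        # Epidemic sub-model: discrete-time SIR update.
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return {"susceptible": s, "infected": i, "recovered": r,
            "school_closure_days": closure_days}

result = run_multi_model()
```

The key design point mirrored from the abstract is that the two sub-models interact in both directions: the epidemic state drives the policy decision, and the policy decision alters the epidemic dynamics. A real integral multi-model would additionally span domains (e.g., healthcare and education) and scales, with each sub-model elicited from domain experts.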
Related papers
- HEMM: Holistic Evaluation of Multimodal Foundation Models [91.60364024897653]
Multimodal foundation models can holistically process text alongside images, video, audio, and other sensory modalities.
It is challenging to characterize and study progress in multimodal foundation models, given the range of possible modeling decisions, tasks, and domains.
arXiv Detail & Related papers (2024-07-03T18:00:48Z) - From Efficient Multimodal Models to World Models: A Survey [28.780451336834876]
Multimodal Large Models (MLMs) are becoming a significant research focus combining powerful language models with multimodal learning.
This review explores the latest developments and challenges in Multimodal Large Models, emphasizing their potential in achieving artificial general intelligence.
arXiv Detail & Related papers (2024-06-27T15:36:43Z) - Generalist Multimodal AI: A Review of Architectures, Challenges and Opportunities [5.22475289121031]
Multimodal models are expected to be a critical component to future advances in artificial intelligence.
This work provides a fresh perspective on generalist multimodal models via a novel architecture and training configuration specific taxonomy.
arXiv Detail & Related papers (2024-06-08T15:30:46Z) - Design Patterns for Multilevel Modeling and Simulation [3.0248879829045383]
Multilevel modeling and simulation (M&S) is becoming increasingly relevant due to the benefits that this methodology offers.
This paper presents a set of design patterns that provide a systematic approach for designing and implementing multilevel models.
arXiv Detail & Related papers (2024-03-25T12:51:22Z) - Multimodal Large Language Models: A Survey [36.06016060015404]
Multimodal language models integrate multiple data types, such as images, text, audio, and other heterogeneous modalities.
This paper begins by defining the concept of multimodal and examining the historical development of multimodal algorithms.
A practical guide is provided, offering insights into the technical aspects of multimodal models.
Lastly, we explore the applications of multimodal models and discuss the challenges associated with their development.
arXiv Detail & Related papers (2023-11-22T05:15:12Z) - Foundation Models for Decision Making: Problems, Methods, and
Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z) - MultiViz: An Analysis Benchmark for Visualizing and Understanding
Multimodal Models [103.9987158554515]
MultiViz is a method for analyzing the behavior of multimodal models by scaffolding the problem of interpretability into 4 stages.
We show that the complementary stages in MultiViz together enable users to simulate model predictions, assign interpretable concepts to features, perform error analysis on model misclassifications, and use insights from error analysis to debug models.
arXiv Detail & Related papers (2022-06-30T18:42:06Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - MetaQA: Combining Expert Agents for Multi-Skill Question Answering [49.35261724460689]
We argue that despite the promising results of multi-dataset models, some domains or QA formats might require specific architectures.
We propose to combine expert agents with a novel, flexible, and training-efficient architecture that considers questions, answer predictions, and answer-prediction confidence scores.
arXiv Detail & Related papers (2021-12-03T14:05:52Z) - An Ample Approach to Data and Modeling [1.0152838128195467]
We describe a framework for modeling how models can be built that integrates concepts and methods from a wide range of fields.
The reference M* meta model framework is presented, which relies critically on associating whole datasets and respective models in terms of a strict equivalence relation.
Several considerations about how the developed framework can provide insights about data clustering, complexity, collaborative research, deep learning, and creativity are then presented.
arXiv Detail & Related papers (2021-10-05T01:26:09Z) - Relating by Contrasting: A Data-efficient Framework for Multimodal
Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.