Is a model equivalent to its computer implementation?
- URL: http://arxiv.org/abs/2402.15364v1
- Date: Fri, 23 Feb 2024 14:54:40 GMT
- Title: Is a model equivalent to its computer implementation?
- Authors: Beatrix C. Hiesmayr and Marc-Thorsten Hütt
- Abstract summary: We argue that even in widely used models the causal link between the (formal) mathematical model and the set of results is no longer certain.
A new perspective on this topic stems from the accelerating trend that in some branches of research only implemented models are used.
- Score: 0.021756081703276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A recent trend in mathematical modeling is to publish the computer code
together with the research findings. Here we explore the formal question,
whether and in which sense a computer implementation is distinct from the
mathematical model. We argue that, despite the convenience of implemented
models, a set of implicit assumptions is perpetuated with the implementation to
the extent that even in widely used models the causal link between the (formal)
mathematical model and the set of results is no longer certain. Moreover, while
code publication is often seen as an important contributor to reproducible
research, we suggest that in some cases the opposite may be true. A new perspective on
this topic stems from the accelerating trend that in some branches of research
only implemented models are used, e.g., in artificial intelligence (AI). With
the advent of quantum computers we argue that completely novel challenges arise
in the distinction between models and implementations.
Related papers
- Unlocking the Potential of Past Research: Using Generative AI to Reconstruct Healthcare Simulation Models [0.0]
This study explores the feasibility of using generative artificial intelligence (AI) to recreate published models using Free and Open Source Software (FOSS)
We successfully generated, tested and internally reproduced two DES models, including user interfaces.
The reported results were replicated for one model, but not the other, likely due to missing information on distributions.
arXiv Detail & Related papers (2025-03-27T16:10:02Z)
- Spatio-Temporal Graphical Counterfactuals: An Overview [11.616701619068804]
Counterfactual reasoning is a critical yet challenging topic for artificial intelligence to learn knowledge from data.
We survey and compare different counterfactual models, theories and approaches.
arXiv Detail & Related papers (2024-07-02T01:34:13Z)
- Diffusion Models for Generative Artificial Intelligence: An Introduction for Applied Mathematicians [3.069335774032178]
Diffusion models offer state-of-the-art performance in generative AI for images.
We provide a brief introduction to diffusion models for applied mathematicians and statisticians.
arXiv Detail & Related papers (2023-12-21T20:20:52Z)
- Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? [75.79305790453654]
Coaxing desired behaviors out of pretrained models, while avoiding undesirable ones, has redefined NLP.
We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance.
arXiv Detail & Related papers (2023-07-31T22:58:41Z)
- Explanation-by-Example Based on Item Response Theory [0.0]
This research explores Item Response Theory (IRT) as a tool for explaining models and for measuring the reliability of the Explanation-by-Example approach.
From the test set, 83.8% of the errors are from instances in which the IRT points out the model as unreliable.
arXiv Detail & Related papers (2022-10-04T14:36:33Z)
- Geometric and Topological Inference for Deep Representations of Complex Networks [13.173307471333619]
We present a class of statistics that emphasize the topology as well as the geometry of representations.
We evaluate these statistics in terms of the sensitivity and specificity that they afford when used for model selection.
These new methods enable brain and computer scientists to visualize the dynamic representational transformations learned by brains and models.
arXiv Detail & Related papers (2022-03-10T17:14:14Z)
- Counterfactual Explanations for Models of Code [11.678590247866534]
Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks.
It can be difficult for developers to understand why the model came to a certain conclusion and how to act upon the model's prediction.
This paper explores counterfactual explanations for models of source code.
arXiv Detail & Related papers (2021-11-10T14:44:19Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the desired algorithmic properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Design of Dynamic Experiments for Black-Box Model Discrimination [72.2414939419588]
Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates.
For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty.
We replace these black-box models with Gaussian process surrogate models and thereby extend the model discrimination setting to additionally incorporate rival black-box models.
arXiv Detail & Related papers (2021-02-07T11:34:39Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine-learning-inspired models and physics-based models.
We use such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.