The Lost Art of Mathematical Modelling
- URL: http://arxiv.org/abs/2301.08559v2
- Date: Fri, 2 Jun 2023 09:03:19 GMT
- Title: The Lost Art of Mathematical Modelling
- Authors: Linnéa Gyllingberg, Abeba Birhane, and David J.T. Sumpter
- Abstract summary: We argue that researchers currently focus too much on analysing models (activity 2) at the cost of formulating them (activity 1).
This trend, we propose, can be reversed by realising that any given biological phenomenon can be modelled in an infinite number of different ways.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide a critique of mathematical biology in light of rapid developments
in modern machine learning. We argue that out of the three modelling activities
-- (1) formulating models; (2) analysing models; and (3) fitting or comparing
models to data -- inherent to mathematical biology, researchers currently focus
too much on activity (2) at the cost of (1). This trend, we propose, can be
reversed by realising that any given biological phenomenon can be modelled in an
infinite number of different ways, through the adoption of an open/pluralistic
approach. We explain the open approach using fish locomotion as a case study
and illustrate some of the pitfalls -- universalism, creating models of models,
etc. -- that hinder mathematical biology. We then ask how we might rediscover a
lost art: that of creative mathematical modelling.
This article is dedicated to the memory of Edmund Crampin.
Related papers
- PuzzleVQA: Diagnosing Multimodal Reasoning Challenges of Language Models with Abstract Visual Patterns [69.17409440805498]
We evaluate large multimodal models with abstract patterns based on fundamental concepts.
We find that they are not able to generalize well to simple abstract patterns.
Our systematic analysis finds that the main bottlenecks of GPT-4V are weaker visual perception and inductive reasoning abilities.
arXiv Detail & Related papers (2024-03-20T05:37:24Z)
- Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition [83.13280812128411]
Recent studies have uncovered intriguing phenomena in deep learning, such as grokking, double descent, and emergent abilities in large language models.
We present a comprehensive framework that provides a unified view of these three phenomena, focusing on the competition between memorization and generalization circuits.
arXiv Detail & Related papers (2024-02-23T08:14:36Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- Discovering interpretable models of scientific image data with deep learning [0.0]
We implement representation learning, sparse deep neural network training and symbolic regression.
We demonstrate their relevance to the field of bioimaging using a well-studied test problem of classifying cell states in microscopy data.
We explore the utility of such interpretable models in producing scientific explanations of the underlying biological phenomenon.
arXiv Detail & Related papers (2024-02-05T15:45:55Z)
- Exploring the Truth and Beauty of Theory Landscapes with Machine Learning [1.8434042562191815]
We use the Yukawa quark sector as a toy example to demonstrate how both of these tasks -- finding true models and finding beautiful ones -- can be accomplished with machine learning techniques.
We propose loss functions whose minimization results in true models that are also beautiful, as measured by three different criteria: uniformity, sparsity, or symmetry.
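The idea of a loss that trades off truth against beauty can be sketched minimally. The example below is an illustration only, not the paper's actual objective: it assumes the beauty criterion is sparsity and uses a hypothetical set of Yukawa-like couplings, combining a squared-error data-fit term (truth) with an L1 penalty (beauty).

```python
def sparsity_loss(params, preds, targets, lam=0.1):
    # Toy "truth + beauty" objective (an assumption for illustration):
    # squared-error fit to observables plus an L1 sparsity penalty.
    fit = sum((p - t) ** 2 for p, t in zip(preds, targets))
    beauty = lam * sum(abs(p) for p in params)
    return fit + beauty

params = [1.0, 0.5, 0.2, 0.0, 0.0]   # hypothetical Yukawa-like couplings
preds = [1.0, 0.5, 0.2]              # model predictions for three observables
targets = [1.0, 0.5, 0.2]            # measured values (a perfect fit here)
loss = sparsity_loss(params, preds, targets)
```

Minimizing such a loss favours parameter sets that both reproduce the data and keep most couplings at zero; the uniformity and symmetry criteria would replace the L1 term with a different penalty.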
arXiv Detail & Related papers (2024-01-21T14:52:39Z)
- Eight challenges in developing theory of intelligence [3.0349733976070024]
A good theory of mathematical beauty is more practical than any current observation, as new predictions of physical reality can be verified self-consistently.
Here, we shed light on eight challenges in developing a theory of intelligence following this theoretical paradigm.
arXiv Detail & Related papers (2023-06-20T01:45:42Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models, which means to induce abstract concepts from concrete instances and flexibly apply them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Solving Quantitative Reasoning Problems with Language Models [53.53969870599973]
We introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content.
The model achieves state-of-the-art performance on technical benchmarks without the use of external tools.
We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences.
arXiv Detail & Related papers (2022-06-29T18:54:49Z)
- Algebraic Learning: Towards Interpretable Information Modeling [0.0]
This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two scopes.
Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally.
Secondly, given a trained model, various methods could be applied to extract further insights about the underlying system.
arXiv Detail & Related papers (2022-03-13T15:53:39Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
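The core of a multi-objective approach like this is Pareto selection over competing model properties. A minimal sketch, assuming two objectives (prediction error and model complexity, both minimized) and hypothetical candidate scores:

```python
def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly
    # better on at least one (all objectives are minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    # Keep only the candidates that no other candidate dominates.
    return [s for s in scores if not any(dominates(o, s) for o in scores if o != s)]

# Hypothetical (error, complexity) scores for four candidate models
models = [(0.10, 8), (0.05, 12), (0.20, 3), (0.12, 9)]
front = pareto_front(models)
```

An evolutionary loop would repeatedly mutate and recombine candidate model structures and retain the Pareto front, so the search converges toward models with the desired trade-off of properties rather than a single best fit.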
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- A Bayesian machine scientist to aid in the solution of challenging scientific problems [0.0]
We introduce a Bayesian machine scientist, which establishes the plausibility of models using explicit approximations to the exact marginal posterior over models.
It explores the space of models using Markov chain Monte Carlo.
We show that this approach uncovers accurate models for synthetic and real data and provides out-of-sample predictions that are more accurate than those of existing approaches.
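The sampling step described above can be sketched in miniature. This is not the paper's implementation: it assumes a tiny discrete space of candidate expressions with made-up log marginal posterior values, and runs a Metropolis chain with a symmetric proposal so that time spent in each model tracks its posterior plausibility.

```python
import math
import random

random.seed(0)

# Hypothetical model space: candidate expressions with assumed
# log marginal posterior values for some fixed dataset.
log_post = {"y=x": -12.0, "y=x**2": -3.0, "y=sin(x)": -9.0, "y=exp(x)": -20.0}
models = list(log_post)

def mcmc(steps=2000):
    current = random.choice(models)
    visits = {m: 0 for m in models}
    for _ in range(steps):
        proposal = random.choice(models)  # symmetric proposal over models
        # Metropolis acceptance test on the log-posterior ratio
        if math.log(random.random()) < log_post[proposal] - log_post[current]:
            current = proposal
        visits[current] += 1
    return visits

visits = mcmc()
best = max(visits, key=visits.get)  # the most plausible expression
```

In a real machine scientist the proposal would instead edit expression trees (swap operators, grow or prune subtrees), and the posterior would be computed from the data; the acceptance logic is the same.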
arXiv Detail & Related papers (2020-04-25T14:42:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.