Defining Foundation Models for Computational Science: A Call for Clarity and Rigor
- URL: http://arxiv.org/abs/2505.22904v2
- Date: Fri, 30 May 2025 16:21:57 GMT
- Title: Defining Foundation Models for Computational Science: A Call for Clarity and Rigor
- Authors: Youngsoo Choi, Siu Wun Cheung, Youngkyu Kim, Ping-Hsuan Tsai, Alejandro N. Diaz, Ivan Zanardi, Seung Whan Chung, Dylan Matthew Copeland, Coleman Kendrick, William Anderson, Traian Iliescu, Matthias Heinkenschloss
- Abstract summary: We propose a formal definition of foundation models in computational science. We articulate a set of essential and desirable characteristics that such models must exhibit. We introduce the Data-Driven Finite Element Method (DD-FEM), a framework that fuses the modular structure of classical FEM with the representational power of data-driven learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread success of foundation models in natural language processing and computer vision has inspired researchers to extend the concept to scientific machine learning and computational science. However, as this position paper argues, "foundation model" is an evolving term that is increasingly applied in computational science without a universally accepted definition, potentially creating confusion and diluting its precise scientific meaning. In this paper, we address this gap by proposing a formal definition of foundation models in computational science, grounded in the core values of generality, reusability, and scalability. We articulate a set of essential and desirable characteristics that such models must exhibit, drawing parallels with traditional foundational methods, such as the finite element and finite volume methods. Furthermore, we introduce the Data-Driven Finite Element Method (DD-FEM), a framework that fuses the modular structure of classical FEM with the representational power of data-driven learning. We demonstrate how DD-FEM addresses many of the key challenges in realizing foundation models for computational science, including scalability, adaptability, and physics consistency. By bridging traditional numerical methods with modern AI paradigms, this work provides a rigorous foundation for evaluating and developing novel approaches toward future foundation models in computational science.
Related papers
- Machine Learned Force Fields: Fundamentals, its reach, and challenges [0.0]
Machine Learning Force Fields (MLFFs) have emerged as a revolutionary approach in computational chemistry and materials science. This chapter provides an introduction to the fundamentals of learning and how it is applied to construct MLFFs. Emphasis is placed on the construction of the SchNet model, one of the most elemental neural network-based force fields.
arXiv Detail & Related papers (2025-03-07T05:26:14Z) - Ten Challenging Problems in Federated Foundation Models [55.343738234307544]
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses the general competences of foundation models with the privacy-preserving capabilities of federated learning. This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
arXiv Detail & Related papers (2025-02-14T04:01:15Z) - A Hybrid Virtual Element Method and Deep Learning Approach for Solving One-Dimensional Euler-Bernoulli Beams [0.0]
A hybrid framework integrating the Virtual Element Method (VEM) with deep learning is presented. The primary aim is to explore a data-driven surrogate model capable of predicting fields across varying material displacement. A neural network architecture is introduced to separately process nodal and material-specific data, effectively capturing complex interactions.
arXiv Detail & Related papers (2025-01-12T20:34:26Z) - Large Physics Models: Towards a collaborative approach with Large Language Models and Foundation Models [8.320153035338418]
This paper explores ideas and provides a potential roadmap for the development and evaluation of physics-specific large-scale AI models. These models, based on foundation models such as Large Language Models (LLMs), are tailored to address the demands of physics research.
arXiv Detail & Related papers (2025-01-09T17:11:22Z) - Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [80.32412260877628]
We study how to learn human-interpretable concepts from data. Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z) - Graph Foundation Models: Concepts, Opportunities and Challenges [66.37994863159861]
Foundation models have emerged as critical components in a variety of artificial intelligence applications. The capabilities of foundation models in generalization and adaptation motivate graph machine learning researchers to discuss the potential of developing a new graph learning paradigm. This article introduces the concept of Graph Foundation Models (GFMs), and offers an exhaustive explanation of their key characteristics and underlying technologies.
arXiv Detail & Related papers (2023-10-18T09:31:21Z) - Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - Thermodynamics-inspired Explanations of Artificial Intelligence [0.0]
We present Thermodynamics-inspired Explainable Representations of AI and other black-box Paradigms (TERP).
TERP is a method for generating accurate and human-interpretable explanations for black-box predictions in a model-agnostic manner.
To demonstrate the wide-ranging applicability of TERP, we successfully employ it to explain various black-box model architectures.
arXiv Detail & Related papers (2022-06-27T17:36:50Z) - An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z) - Physics-Guided Deep Learning for Dynamical Systems: A survey [5.733401663293044]
Traditional physics-based models are interpretable but rely on rigid assumptions.
Deep learning provides novel alternatives for efficiently recognizing complex patterns and emulating nonlinear dynamics.
Physics-guided deep learning aims to take the best from both physics-based modeling and state-of-the-art DL models to better solve scientific problems.
arXiv Detail & Related papers (2021-07-02T20:59:03Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.