Accelerating scientific discovery with the common task framework
- URL: http://arxiv.org/abs/2511.04001v1
- Date: Thu, 06 Nov 2025 02:53:07 GMT
- Title: Accelerating scientific discovery with the common task framework
- Authors: J. Nathan Kutz, Peter Battaglia, Michael Brenner, Kevin Carlberg, Aric Hagberg, Shirley Ho, Stephan Hoyer, Henning Lange, Hod Lipson, Michael W. Mahoney, Frank Noe, Max Welling, Laure Zanna, Francis Zhu, Steven L. Brunton
- Abstract summary: Machine learning (ML) and artificial intelligence (AI) algorithms are transforming the characterization and control of dynamic systems in the engineering, physical, and biological sciences. These emerging modeling paradigms require comparative metrics to evaluate a diverse set of scientific objectives. We introduce a common task framework (CTF) for science and engineering, which features a growing collection of challenge data sets.
- Score: 48.92654976046941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) and artificial intelligence (AI) algorithms are transforming and empowering the characterization and control of dynamic systems in the engineering, physical, and biological sciences. These emerging modeling paradigms require comparative metrics to evaluate a diverse set of scientific objectives, including forecasting, state reconstruction, generalization, and control, while also considering limited data scenarios and noisy measurements. We introduce a common task framework (CTF) for science and engineering, which features a growing collection of challenge data sets with a diverse set of practical and common objectives. The CTF is a critically enabling technology that has contributed to the rapid advance of ML/AI algorithms in traditional applications such as speech recognition, language processing, and computer vision. There is a critical need for the objective metrics of a CTF to compare the diverse algorithms being rapidly developed and deployed in practice today across science and engineering.
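The core idea of the abstract can be illustrated with a minimal sketch: in a common task framework, every submitted model is scored on the same held-out challenge data with the same objective metric, so results are directly comparable. The toy dataset, baseline models, and leaderboard below are illustrative assumptions, not part of the paper's actual benchmark suite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "challenge dataset": forecast a noisy damped oscillation.
t = np.linspace(0, 10, 200)
truth = np.exp(-0.1 * t) * np.cos(2 * t)
observed = truth + 0.05 * rng.standard_normal(t.size)

# Fixed train/test split, shared by all entrants; score against clean truth.
train, test = observed[:150], truth[150:]

def persistence_forecast(history, horizon):
    """Baseline entrant: repeat the last observed value."""
    return np.full(horizon, history[-1])

def mean_forecast(history, horizon):
    """Baseline entrant: predict the training-set mean."""
    return np.full(horizon, history.mean())

def rmse(pred, target):
    """The one shared objective metric used to rank all entrants."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Common task: same data, same metric, for every submitted model.
entrants = [("persistence", persistence_forecast),
            ("mean", mean_forecast)]
leaderboard = sorted(
    (rmse(model(train, test.size), test), name) for name, model in entrants
)
for score, name in leaderboard:
    print(f"{name}: RMSE = {score:.3f}")
```

The design point is that the dataset, the split, and the metric are fixed by the framework rather than chosen by each entrant, which is what makes the resulting scores an apples-to-apples comparison.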
Related papers
- Artificial Intelligence in Materials Science and Engineering: Current Landscape, Key Challenges, and Future Trajectories [0.28279056210896714]
AI is becoming an essential competency for materials researchers. We survey the spectrum of machine learning approaches, including CNNs, GNNs, and Transformers, alongside emerging generative AI and probabilistic models. The review also examines the pivotal role of data in this field, emphasizing how effective representation and featurization strategies underpin the performance of machine learning models.
arXiv Detail & Related papers (2026-01-18T19:36:10Z) - Generative AI and Empirical Software Engineering: A Paradigm Shift [9.284024538100063]
This vision paper examines how the integration of large language models into software engineering disrupts established research paradigms. We discuss how it transforms the phenomena we study, the methods and theories we rely on, the data we analyze, and the threats to validity that arise in dynamic AI-mediated environments. Our aim is to help the empirical software engineering community adapt its questions, instruments, and validation standards to a future in which AI systems are not merely tools, but active collaborators shaping software engineering and its study.
arXiv Detail & Related papers (2025-02-12T04:13:07Z) - Deep Learning Models for Physical Layer Communications [3.1727619150610837]
This thesis aims at solving some fundamental open challenges in physical layer communications exploiting new deep learning paradigms. We mathematically formulate, under ML terms, classic problems such as channel capacity and optimal coding-decoding schemes. We design and develop the architecture, algorithm and code necessary to train the equivalent deep learning model.
arXiv Detail & Related papers (2025-02-07T13:03:36Z) - Deep Learning and Machine Learning -- Object Detection and Semantic Segmentation: From Theory to Applications [17.571124565519263]
In-depth exploration of object detection and semantic segmentation is provided. State-of-the-art advancements in machine learning and deep learning are reviewed. Analysis of big data processing is presented.
arXiv Detail & Related papers (2024-10-21T02:10:49Z) - EndToEndML: An Open-Source End-to-End Pipeline for Machine Learning Applications [0.2826977330147589]
We propose a web-based end-to-end pipeline that is capable of preprocessing, training, evaluating, and visualizing machine learning models.
Our library assists in recognizing, classifying, clustering, and predicting a wide range of multi-modal, multi-sensor datasets.
arXiv Detail & Related papers (2024-03-27T02:24:38Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Learning with Limited Samples -- Meta-Learning and Applications to Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms that can adapt efficiently to new tasks.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z) - An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z) - Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
With meta-learning via task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z) - Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.