Seeking Truth and Beauty in Flavor Physics with Machine Learning
- URL: http://arxiv.org/abs/2311.00087v1
- Date: Tue, 31 Oct 2023 18:53:22 GMT
- Title: Seeking Truth and Beauty in Flavor Physics with Machine Learning
- Authors: Konstantin T. Matchev, Katia Matcheva, Pierre Ramond, Sarunas Verner
- Abstract summary: We design loss functions for performing both of those tasks (fitting the existing experimental data and satisfying theorists' criteria of beauty) with machine learning techniques.
We use the Yukawa quark sector as a toy example to demonstrate that the optimization of these loss functions results in true and beautiful models.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discovery process of building new theoretical physics models involves the
dual aspect of both fitting to the existing experimental data and satisfying
abstract theorists' criteria like beauty, naturalness, etc. We design loss
functions for performing both of those tasks with machine learning techniques.
We use the Yukawa quark sector as a toy example to demonstrate that the
optimization of these loss functions results in true and beautiful models.
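Neither the abstract nor this page includes code, but as a rough, hypothetical illustration of the idea, the PyTorch sketch below combines a "truth" term (fitting the singular values of a trainable 3x3 coupling matrix to a target spectrum, standing in for quark masses) with a "beauty" term (an L1 sparsity penalty). The matrix size, target values, penalty choice, and weighting are all assumptions made for illustration, not the paper's actual construction.

```python
# Hypothetical sketch, not the paper's code: minimize a "truth" loss (fit to
# data) plus a weighted "beauty" loss (here, sparsity of the coupling matrix).
import torch

torch.manual_seed(0)

# Trainable 3x3 matrix standing in for a Yukawa-like coupling texture.
Y = torch.randn(3, 3, requires_grad=True)

# Placeholder "experimental" targets for the singular-value spectrum.
target_spectrum = torch.tensor([0.001, 0.05, 1.0])

def truth_loss(Y):
    # Chi-square-like fit of the (sorted) singular values to the targets.
    sv = torch.sort(torch.linalg.svdvals(Y)).values
    return torch.sum((sv - target_spectrum) ** 2)

def beauty_loss(Y):
    # One possible "beauty" criterion: prefer sparse textures (L1 penalty).
    return Y.abs().sum()

optimizer = torch.optim.Adam([Y], lr=1e-2)
for step in range(2000):
    optimizer.zero_grad()
    loss = truth_loss(Y) + 0.01 * beauty_loss(Y)  # illustrative weighting
    loss.backward()
    optimizer.step()

print(truth_loss(Y).item(), beauty_loss(Y).item())
print(Y.detach())
```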
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Cliqueformer: Model-Based Optimization with Structured Transformers [102.55764949282906]
We develop a model that learns the structure of an MBO task and empirically leads to improved designs.
We evaluate Cliqueformer on various tasks, ranging from high-dimensional black-box functions to real-world tasks of chemical and genetic design.
arXiv Detail & Related papers (2024-10-17T00:35:47Z) - SimFair: Physics-Guided Fairness-Aware Learning with Simulation Models [22.521850023693833]
In many cases, inequity in performance is due to shifts in the data distribution across different regions.
We propose SimFair, a physics-guided fairness-aware learning framework.
arXiv Detail & Related papers (2024-01-27T02:36:30Z) - Exploring the Truth and Beauty of Theory Landscapes with Machine
Learning [1.8434042562191815]
We use the Yukawa quark sector as a toy example to demonstrate how both of those tasks can be accomplished with machine learning techniques.
We propose loss functions whose minimization results in true models that are also beautiful, as measured by three different criteria: uniformity, sparsity, or symmetry.
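For concreteness, here is one way the three named criteria could be turned into differentiable penalties on a coupling matrix; these particular definitions are this page's illustration and may differ from the ones used in the paper.

```python
# Illustrative definitions only: three candidate "beauty" penalties matching
# the criteria named above (uniformity, sparsity, symmetry).
import torch

def uniformity_penalty(Y):
    # Zero when all entries have equal magnitude; grows with their spread.
    return torch.var(Y.abs())

def sparsity_penalty(Y):
    # L1 norm: smaller when many entries are (near) zero.
    return Y.abs().sum()

def symmetry_penalty(Y):
    # Squared Frobenius distance from a symmetric texture, Y = Y^T.
    return torch.linalg.matrix_norm(Y - Y.T) ** 2

Y = torch.randn(3, 3)
print(uniformity_penalty(Y).item(),
      sparsity_penalty(Y).item(),
      symmetry_penalty(Y).item())
```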
arXiv Detail & Related papers (2024-01-21T14:52:39Z) - INFINITY: Neural Field Modeling for Reynolds-Averaged Navier-Stokes
Equations [13.242926257057084]
INFINITY is a deep learning model that encodes geometric information and physical fields into compact representations.
Our framework achieves state-of-the-art performance by accurately inferring physical fields throughout the volume and surface.
Our model can correctly predict drag and lift coefficients while adhering to the equations.
arXiv Detail & Related papers (2023-07-25T14:35:55Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
The key to effective use of machine learning tools in multi-physics problems is coupling them to physical and computational models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - Applying Machine Learning to Study Fluid Mechanics [0.696194614504832]
This paper provides a short overview of how to use machine learning to build data-driven models in fluid mechanics.
At each stage, we discuss how prior physical knowledge may be embedded into the process, with specific examples from the field of fluid mechanics.
arXiv Detail & Related papers (2021-10-05T14:30:24Z) - Knowledge distillation: A good teacher is patient and consistent [71.14922743774864]
There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications.
We identify certain implicit design choices, which may drastically affect the effectiveness of distillation.
We obtain a state-of-the-art ResNet-50 model for ImageNet, which achieves 82.8% top-1 accuracy.
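For readers unfamiliar with distillation, the snippet below sketches a standard soft-target distillation loss (temperature-scaled KL divergence between teacher and student outputs). It is a generic illustration, not the paper's exact recipe; the summary above concerns implicit design choices (e.g. patient, consistent training) that this toy loss does not capture.

```python
# Generic distillation loss sketch: KL divergence between temperature-softened
# teacher and student distributions, scaled by T^2 to keep gradients comparable.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

# Toy usage with random logits for a batch of 8 examples and 10 classes.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
print(distillation_loss(student_logits, teacher_logits).item())
```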
arXiv Detail & Related papers (2021-06-09T17:20:40Z) - Physics-Integrated Variational Autoencoders for Robust and Interpretable
Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z) - Gradient-Based Training and Pruning of Radial Basis Function Networks
with an Application in Materials Physics [0.24792948967354234]
We propose a gradient-based technique for training radial basis function networks with an efficient and scalable open-source implementation.
We derive novel closed-form optimization criteria for pruning the models for continuous as well as binary data.
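The abstract gives no implementation details; the sketch below is a generic, minimal radial basis function network whose centers, widths, and output weights are all trained by gradient descent, just to fix ideas. The Gaussian basis, the toy sine-fitting task, and all hyperparameters are illustrative assumptions, and the paper's closed-form pruning criteria are not reproduced.

```python
# Illustrative sketch only: a minimal RBF network trained end to end by
# gradient descent (not the paper's open-source implementation).
import torch

class RBFNet(torch.nn.Module):
    def __init__(self, in_dim, num_centers, out_dim):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_centers, in_dim))
        self.log_widths = torch.nn.Parameter(torch.zeros(num_centers))
        self.linear = torch.nn.Linear(num_centers, out_dim)

    def forward(self, x):
        # Gaussian basis: exp(-||x - c_k||^2 / (2 * sigma_k^2)) for each center k.
        dist_sq = torch.cdist(x, self.centers) ** 2
        widths = torch.exp(self.log_widths)
        phi = torch.exp(-dist_sq / (2 * widths ** 2))
        return self.linear(phi)

# Toy regression: fit y = sin(x) on random 1-D inputs.
x = torch.rand(256, 1) * 6.0
y = torch.sin(x)
model = RBFNet(in_dim=1, num_centers=20, out_dim=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(1000):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```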
arXiv Detail & Related papers (2020-04-06T11:32:37Z) - Learning Predictive Representations for Deformable Objects Using
Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences of its use.