Exploring the Truth and Beauty of Theory Landscapes with Machine
Learning
- URL: http://arxiv.org/abs/2401.11513v1
- Date: Sun, 21 Jan 2024 14:52:39 GMT
- Title: Exploring the Truth and Beauty of Theory Landscapes with Machine
Learning
- Authors: Konstantin T. Matchev, Katia Matcheva, Pierre Ramond, Sarunas Verner
- Abstract summary: We use the Yukawa quark sector as a toy example to demonstrate how both of those tasks can be accomplished with machine learning techniques.
We propose loss functions whose minimization results in true models that are also beautiful as measured by three different criteria - uniformity, sparsity, or symmetry.
- Score: 1.8434042562191815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Theoretical physicists describe nature by i) building a theory model and ii)
determining the model parameters. The latter step involves the dual aspect of
both fitting to the existing experimental data and satisfying abstract criteria
like beauty, naturalness, etc. We use the Yukawa quark sector as a toy example
to demonstrate how both of those tasks can be accomplished with machine
learning techniques. We propose loss functions whose minimization results in
true models that are also beautiful as measured by three different criteria -
uniformity, sparsity, or symmetry.
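The abstract's central idea - a loss whose minimization yields models that are both true (fit the data) and beautiful (satisfy a criterion such as sparsity) - can be sketched as a weighted sum of a fit term and a penalty term. The function names, the identity "model", and the L1 sparsity penalty below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def truth_loss(params, targets):
    """Fit term: squared error between model-predicted observables
    (here the identity map, as a stand-in) and measured targets."""
    return float(np.sum((params - targets) ** 2))

def beauty_loss_sparsity(params):
    """Beauty term: L1 norm, rewarding sparse (texture-zero-like) parameters."""
    return float(np.sum(np.abs(params)))

def total_loss(params, targets, lam=0.1):
    # Minimizing this trades off truth (fit) against beauty (sparsity);
    # lam sets the relative weight of the beauty criterion.
    return truth_loss(params, targets) + lam * beauty_loss_sparsity(params)
```

The other two beauty criteria mentioned (uniformity, symmetry) would replace the L1 penalty with a penalty measuring the spread of the parameters or their deviation from a chosen symmetric pattern.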
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds promise for building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - Transformers are uninterpretable with myopic methods: a case study with
bounded Dyck grammars [36.780346257061495]
Interpretability methods aim to understand the algorithm implemented by a trained model.
We take a critical view of methods that exclusively focus on individual parts of the model.
arXiv Detail & Related papers (2023-12-03T15:34:46Z) - Seeking Truth and Beauty in Flavor Physics with Machine Learning [1.8434042562191815]
We design loss functions for performing both of those tasks with machine learning techniques.
We use the Yukawa quark sector as a toy example to demonstrate that the optimization of these loss functions results in true and beautiful models.
arXiv Detail & Related papers (2023-10-31T18:53:22Z) - Collapse Models: a theoretical, experimental and philosophical review [0.0]
We show that a clarification of the ontological intimations of collapse models is needed for at least three reasons.
arXiv Detail & Related papers (2023-10-23T14:13:41Z) - Discovering Interpretable Physical Models using Symbolic Regression and
Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Minimal Value-Equivalent Partial Models for Scalable and Robust Planning
in Lifelong Reinforcement Learning [56.50123642237106]
Common practice in model-based reinforcement learning is to learn models that model every aspect of the agent's environment.
We argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios.
We propose new kinds of models that only model the relevant aspects of the environment, which we call "minimal value-equivalent partial models".
arXiv Detail & Related papers (2023-01-24T16:40:01Z) - Active Discrimination Learning for Gaussian Process Models [0.27998963147546135]
The paper covers the design and analysis of experiments to discriminate between two Gaussian process models.
The selection relies on the maximisation of the difference between the symmetric Kullback-Leibler divergences for the two models.
Other distance-based criteria, simpler to compute than previous ones, are also introduced.
arXiv Detail & Related papers (2022-11-21T16:27:50Z) - Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z) - A Scaling Law for Synthetic-to-Real Transfer: A Measure of Pre-Training [52.93808218720784]
Synthetic-to-real transfer learning is a framework in which we pre-train models with synthetically generated images and ground-truth annotations for real tasks.
Although synthetic images overcome the data scarcity issue, it remains unclear how the fine-tuning performance scales with pre-trained models.
We observe a simple and general scaling law that consistently describes learning curves in various tasks, models, and complexities of synthesized pre-training data.
arXiv Detail & Related papers (2021-08-25T02:29:28Z)
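The "simple and general scaling law" from the last entry above is not reproduced in this summary. As an illustration only, a generic power-law decay of fine-tuning error with pre-training data size n can be recovered by a linear fit in log-log space; the functional form, constants, and variable names here are assumptions:

```python
import numpy as np

# Hypothetical fine-tuning errors at increasing pre-training set sizes,
# generated from err = C * n**(-alpha) with C = 2.0, alpha = 0.5.
n = np.array([1e2, 1e3, 1e4, 1e5])
err = 2.0 * n ** -0.5

# A power law is linear in log-log space: log(err) = log(C) - alpha * log(n),
# so a degree-1 polynomial fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
alpha, C = -slope, np.exp(intercept)
```

On real learning curves the fit would be approximate rather than exact, but the same two-parameter recipe describes how performance scales with pre-training data.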
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.