A call for frugal modelling: two case studies involving molecular spin
dynamics
- URL: http://arxiv.org/abs/2401.13618v2
- Date: Tue, 20 Feb 2024 12:19:22 GMT
- Title: A call for frugal modelling: two case studies involving molecular spin
dynamics
- Authors: Gerliz M. Gutiérrez-Finol, Aman Ullah, Alejandro Gaita-Ariño
- Abstract summary: We present and critically illustrate the principle of a frugal approach to modelling.
In both examples, the computationally expensive version of the model was the one that was published.
As a community, we still have a lot of room for improvement in this direction.
- Score: 51.799659599758996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As scientists living through a climate emergency, we have a responsibility to
lead by example, or to at least be consistent with our understanding of the
problem, which in the case of theoreticians involves a frugal approach to
modelling. Here we present and critically illustrate this principle. First, we
compare two models of very different levels of sophistication, which nevertheless
yield the same qualitative agreement with an experiment involving electric
manipulation of molecular spin qubits while presenting a difference in cost of
$>4$ orders of magnitude. As a second stage, an already minimalistic model
involving the use of single-ion magnets to implement a network of probabilistic
p-bits, programmed in two different programming languages, is shown to present
a difference in cost of a factor of $\simeq 50$. In both examples, the
computationally expensive version of the model was the one that was published.
As a community, we still have a lot of room for improvement in this direction.
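For readers unfamiliar with the second case study: a p-bit is a binary stochastic unit whose probability of being "up" follows a tanh of the input it receives from the rest of the network, so the whole model reduces to repeatedly applying one update rule. Below is a minimal sketch of that standard update (the convention of the probabilistic-computing literature, not the authors' published code); the couplings `J`, biases `h`, and inverse temperature `beta` are illustrative placeholders.

```python
# Minimal sketch of a generic p-bit network update (standard convention in
# the probabilistic-computing literature; NOT the authors' published code).
import numpy as np

rng = np.random.default_rng(0)

def pbit_sweep(m, J, h, beta, rng):
    """One asynchronous sweep: each p-bit resamples given its input I_i."""
    for i in rng.permutation(len(m)):
        I_i = J[i] @ m + h[i]  # input from the rest of the network
        # P(m_i = +1) = (1 + tanh(beta * I_i)) / 2
        m[i] = 1.0 if rng.uniform(-1.0, 1.0) < np.tanh(beta * I_i) else -1.0
    return m

# Illustrative placeholders: a small, arbitrarily coupled 3-p-bit network.
J = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
h = np.zeros(3)
m = rng.choice([-1.0, 1.0], size=3)
for _ in range(1000):
    m = pbit_sweep(m, J, h, beta=1.0, rng=rng)
```

The frugality point in the abstract concerns the implementation rather than the model: the same handful of operations, written in two different programming languages, differed in cost by a factor of $\simeq 50$.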
Related papers
- Distribution learning via neural differential equations: a nonparametric statistical perspective [1.4436965372953483]
This work establishes the first general statistical convergence analysis for distribution learning via ODE models trained through likelihood transformations.
We show that the latter can be quantified via the $C^1$-metric entropy of the class $\mathcal{F}$.
We then apply this general framework to the setting of $C^k$-smooth target densities, and establish nearly minimax-optimal convergence rates for two relevant velocity field classes $\mathcal{F}$: $C^k$ functions and neural networks.
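"Trained through likelihood transformations" refers to the standard change-of-variables identity for ODE flows: transporting samples along a velocity field $f \in \mathcal{F}$ shifts their log-density by the integrated divergence of the field. As a reminder (a textbook identity, not a result of this paper):

```latex
% Instantaneous change of variables for an ODE flow (textbook identity,
% not specific to this paper):
\begin{aligned}
\dot{x}(t) &= f\bigl(x(t), t\bigr), \qquad f \in \mathcal{F},\\
\log p_1\bigl(x(1)\bigr) &= \log p_0\bigl(x(0)\bigr)
  - \int_0^1 \nabla \cdot f\bigl(x(t), t\bigr)\,\mathrm{d}t .
\end{aligned}
```

Maximizing the likelihood of data under $p_1$ trains the field $f$, and the richness of the class $\mathcal{F}$, measured by its metric entropy, is what governs the convergence rates quoted above.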
arXiv Detail & Related papers (2023-09-03T00:21:37Z)
- Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games [66.2085181793014]
We show that a model-free stage-based Q-learning algorithm can enjoy the same optimality in the $H$ dependence as model-based algorithms.
Our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions.
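The "pair of optimistic and pessimistic value functions" can be pictured with a generic bonus-based update. The sketch below is a common pattern from the RL-theory literature, not the paper's stage-based algorithm (which adds reference-value machinery on top of it):

```python
# Generic paired optimistic/pessimistic Q-update with +/- exploration
# bonuses (an illustrative pattern only; NOT this paper's algorithm).
import numpy as np

def paired_update(Q_up, Q_lo, n, s, a, r, s_next, gamma=0.99):
    """One sample update; Q_up over-estimates, Q_lo under-estimates."""
    n[s, a] += 1
    alpha = 1.0 / n[s, a]           # illustrative learning rate
    bonus = 1.0 / np.sqrt(n[s, a])  # illustrative exploration bonus
    Q_up[s, a] += alpha * (r + gamma * Q_up[s_next].max() + bonus - Q_up[s, a])
    Q_lo[s, a] += alpha * (r + gamma * Q_lo[s_next].max() - bonus - Q_lo[s, a])
    # When Q_up - Q_lo shrinks, the value estimate is certified accurate.
```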
arXiv Detail & Related papers (2023-08-17T08:34:58Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
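For orientation, the "data generation process" being analysed is the reverse-time counterpart of a forward noising SDE; non-asymptotic analyses of this kind bound how score and discretization errors propagate through it. The standard SDE pair (a textbook formulation, not specific to this paper) is:

```latex
% Textbook score-based diffusion SDE pair (not specific to this paper):
\begin{aligned}
\text{forward (noising):} \quad & \mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t,\\
\text{reverse (generating):} \quad & \mathrm{d}x_t =
  \bigl[f(x_t, t) - g(t)^2\,\nabla_x \log p_t(x_t)\bigr]\,\mathrm{d}t
  + g(t)\,\mathrm{d}\bar{w}_t .
\end{aligned}
```

In practice the score $\nabla_x \log p_t$ is replaced by a learned estimate, and the bounds control how that estimation error reaches the generated distribution.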
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Collective dynamics Using Truncated Equations (CUT-E): simulating the collective strong coupling regime with few-molecule models [0.0]
We exploit permutational symmetries to drastically reduce the computational cost of ab initio quantum dynamics simulations for large $N$.
We show that addition of $k$ extra effective molecules is enough to account for phenomena whose rates scale as $\mathcal{O}(N^{-k})$.
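A quick way to see why permutational symmetry pays off (a standard counting argument, not taken from this paper): if $N$ identical two-level molecules couple identically to a single photon mode and start in a permutation-symmetric state, the dynamics never leaves the symmetric (Dicke) subspace, so

```latex
% Symmetric-subspace counting for N identical two-level emitters
% (standard argument, not from the paper):
\dim \mathcal{H}_{\mathrm{sym}} = N + 1
\qquad \text{vs.} \qquad
\dim \mathcal{H}_{\mathrm{full}} = 2^{N}.
```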
arXiv Detail & Related papers (2022-09-11T22:49:29Z)
- A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d = \omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z)
- Calibration of Derivative Pricing Models: a Multi-Agent Reinforcement Learning Perspective [3.626013617212667]
One of the most fundamental questions in quantitative finance is the existence of continuous-time diffusion models that fit market prices of a given set of options.
Our contribution is to show how a suitable game theoretical formulation of this problem can help solve this question by leveraging existing developments in modern deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2022-03-14T05:34:00Z)
- Compressed particle methods for expensive models with application in Astronomy and Remote Sensing [15.874578163779047]
We introduce a novel approach where the expensive model is evaluated only in some well-chosen samples.
We provide theoretical results supporting the novel algorithms and give empirical evidence of the performance of the proposed method in several numerical experiments.
Two of them are real-world applications in astronomy and satellite remote sensing.
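The underlying idea generalizes: when the likelihood is expensive, evaluate it only at a compressed set of representative "node" samples and reuse those values for nearby samples. A minimal sketch of that pattern follows (an illustrative pattern, not the authors' exact algorithm; `expensive_logpdf` and the nearest-node reuse rule are assumptions):

```python
# Minimal sketch of evaluating an expensive model only at well-chosen
# nodes and reusing the results for nearby samples (illustrative pattern;
# NOT the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(1)

def expensive_logpdf(x):
    """Stand-in for a costly simulator-based log-likelihood."""
    return -0.5 * np.sum(x ** 2)

# Draw many proposal samples, but call the expensive model on few of them.
samples = rng.normal(size=(10_000, 2))
nodes = samples[rng.choice(len(samples), size=100, replace=False)]
node_logp = np.array([expensive_logpdf(x) for x in nodes])  # 100 calls, not 10,000

# Cheap surrogate: each sample inherits the value of its nearest node.
d2 = ((samples[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
surrogate_logp = node_logp[d2.argmin(axis=1)]
# surrogate_logp can now drive importance weights at ~1% of the cost.
```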
arXiv Detail & Related papers (2021-07-18T14:45:23Z)
- Exploring Sparse Expert Models and Beyond [51.90860155810848]
Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters but constant computation cost.
We propose a simple method called expert prototyping that splits experts into different prototypes and applies $k$ top-$1$ routing.
This strategy improves the model quality but maintains constant computational costs, and our further exploration on extremely large-scale models reflects that it is more effective in training larger models.
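Expert prototyping with $k$ top-$1$ routing can be pictured as follows: the expert pool is split into $k$ disjoint groups (prototypes), an independent top-1 router acts within each group, and the $k$ selected outputs are summed, so compute stays at $k$ expert calls per token regardless of pool size. A numpy sketch under those assumptions (shapes and the combination rule are illustrative, not the paper's exact design):

```python
# Sketch of "k top-1" routing over expert prototypes: split the expert
# pool into k groups, route top-1 inside each group, sum the k outputs.
# Shapes and the combination rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
d, n_experts, k = 16, 8, 2                  # model dim, experts, prototypes
W_experts = rng.normal(size=(n_experts, d, d)) / np.sqrt(d)
W_gate = rng.normal(size=(n_experts, d)) / np.sqrt(d)
groups = np.split(np.arange(n_experts), k)  # k disjoint prototypes

def moe_forward(x):
    """x: (d,) token representation -> (d,) output, using k expert calls."""
    logits = W_gate @ x                     # router score per expert
    out = np.zeros(d)
    for g in groups:
        probs = np.exp(logits[g] - logits[g].max())
        probs /= probs.sum()                # softmax within the prototype
        e = g[int(np.argmax(probs))]        # top-1 expert of this group
        out += probs.max() * (W_experts[e] @ x)
    return out

y = moe_forward(rng.normal(size=d))
```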
arXiv Detail & Related papers (2021-05-31T16:12:44Z)
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
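For intuition on the quantity being estimated: at each state of a zero-sum Markov game, the NE value is the value of a matrix game, which is computable by linear programming. A minimal scipy sketch for a payoff matrix `A` (the standard LP construction, unrelated to the paper's sampling analysis):

```python
# Nash value of a zero-sum matrix game via linear programming (standard
# construction; illustrates the target quantity, not the paper's method).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum game with payoff A (row player maximizes)."""
    m, n = A.shape
    # Variables: row strategy p (m entries) and the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v == minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - sum_i p_i A[i, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                              # p sums to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]      # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

print(matrix_game_value(np.array([[0.0, 1.0], [1.0, 0.0]])))  # -> 0.5
```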
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.