Non-interferometric rotational test of the Continuous Spontaneous Localisation model: enhancement of the collapse noise through shape optimisation
- URL: http://arxiv.org/abs/2402.13057v2
- Date: Tue, 18 Jun 2024 07:43:25 GMT
- Title: Non-interferometric rotational test of the Continuous Spontaneous Localisation model: enhancement of the collapse noise through shape optimisation
- Authors: Davide Giordano Ario Altamura, Matteo Carlesso, Sandro Donadi, Angelo Bassi
- Abstract summary: We derive an upper bound on the parameters of the Continuous Spontaneous Localisation model by applying it to the rotational noise measured in a recent short-distance gravity experiment.
We find that, although this is a table-top experiment, the bound is only one order of magnitude weaker than that from LIGO for the relevant values of the collapse parameter.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Continuous Spontaneous Localisation (CSL) model is the most studied of the collapse models, which describe the breakdown of the superposition principle for macroscopic systems. Here, we derive an upper bound on the parameters of the model by applying it to the rotational noise measured in a recent short-distance gravity experiment [Lee et al., Phys. Rev. Lett. 124, 101101 (2020)]. Specifically, considering the noise affecting the rotational motion, we find that, although this is a table-top experiment, the bound is only one order of magnitude weaker than that from LIGO for the relevant values of the collapse parameter. Further, we analyse possible ways of optimising the shape of the test mass to enhance the collapse noise by several orders of magnitude, and eventually derive stronger bounds that can address the unexplored region of the CSL parameter space.
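For reference, the two parameters constrained by such tests enter through the standard mass-proportional CSL master equation, quoted here from the general collapse-model literature (e.g. Bassi et al., Rev. Mod. Phys. 85, 471 (2013)) rather than from this paper:

```latex
\frac{d\rho(t)}{dt} = -\frac{i}{\hbar}\,[\hat{H},\rho(t)]
  - \frac{\lambda}{2 m_0^2} \int d^3x \,
    \bigl[\hat{M}(\mathbf{x}),\bigl[\hat{M}(\mathbf{x}),\rho(t)\bigr]\bigr],
\qquad
\hat{M}(\mathbf{x}) = \sum_j m_j \,
    \frac{e^{-(\mathbf{x}-\hat{\mathbf{x}}_j)^2/2r_C^2}}{(2\pi r_C^2)^{3/2}},
```

where $\lambda$ (the collapse rate) and $r_C$ (the correlation length) are the two free parameters whose space the experiment bounds, and $m_0$ is a reference (nucleon) mass.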
Related papers
- Analytical derivation and extension of the anti-Kibble-Zurek scaling in the transverse field Ising model [0.29465623430708904]
A defect density, which quantifies the deviation from the spin ground state, characterizes the non-equilibrium dynamics during phase transitions.
The widely recognized Kibble-Zurek scaling predicts how the defect density evolves during phase transitions.
However, this scaling can be perturbed by noise, leading to the anti-Kibble-Zurek scaling.
arXiv Detail & Related papers (2024-04-26T08:41:21Z)
- A Theoretical Analysis of Noise Geometry in Stochastic Gradient Descent [9.064667124987068]
Minibatch stochastic gradient descent (SGD) exhibits a noise-geometry phenomenon, in which the gradient noise aligns favorably with the geometry of the local loss landscape.
We propose two metrics, derived from analyzing how noise influences the loss and subspace projection dynamics, to quantify the alignment strength.
arXiv Detail & Related papers (2023-10-01T14:58:20Z)
- Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning [78.30395044401321]
We develop a novel model-based approach to reinforcement learning (MBRL).
It relaxes the assumptions on the target transition model, requiring only that it belong to a generic family of mixture models.
It can achieve up to a 50 percent reduction in wall-clock time in some continuous control environments.
arXiv Detail & Related papers (2022-06-02T17:27:49Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery [8.581512812219737]
We propose a new framework for low-rank matrix recovery from observations corrupted with noise and outliers.
Inspired by the square root Lasso, this new formulation does not require prior knowledge of the noise level.
We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
arXiv Detail & Related papers (2021-06-17T02:28:11Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Testing collapse models with Bose-Einstein-Condensate interferometry [0.0]
We show that precision interferometry with Bose-Einstein condensed atoms can serve to lower the current empirical bound on the localization rate parameter.
In fact, the interplay between CSL-induced diffusion and dispersive atom-atom interactions results in an amplified sensitivity of the condensate to CSL.
arXiv Detail & Related papers (2020-08-31T13:00:58Z)
- Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training over-parameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of noise in its success is still unclear.
We show that multiplicative noise commonly arises in the parameter dynamics due to minibatch variance.
A detailed analysis is conducted in which key factors, including step size and data, are shown to exhibit similar heavy-tailed behaviour on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We also derive the first non-trivial high-probability complexity bounds for SGD with clipping, without a light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
- Narrowing the parameter space of collapse models with ultracold layered force sensors [0.0]
Spontaneous collapse models are among the few experimentally testable solutions to the quantum measurement problem proposed so far.
The test mass is specifically designed to enhance the effect of the CSL noise at the characteristic length $r_c = 10^{-7}$ m.
The results explicitly challenge a well-motivated region of the CSL parameter space proposed by Adler.
arXiv Detail & Related papers (2020-02-22T22:35:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.