Monotones in Resource Theories for Dynamical Decoupling
- URL: http://arxiv.org/abs/2412.11595v1
- Date: Mon, 16 Dec 2024 09:33:35 GMT
- Title: Monotones in Resource Theories for Dynamical Decoupling
- Authors: Graeme D. Berk, Simon Milz, Kavan Modi
- Abstract summary: We detail modified relative-entropy-based resource quantifiers and prove that they are indeed monotonic in our resource theories.
DD can be understood as temporal resource distillation, and improvements to noise reduction via our multitimescale optimal dynamical decoupling (MODD) method coincide with a decrease in the corresponding non-Markovianity monotone.
- Abstract: In arXiv:2110.02613, we presented a generalised dynamical resource theory framework that enables noise-reduction techniques, including dynamical decoupling (DD), to be studied. While this fundamental contribution remains correct, it has been found that the main resource quantifiers we employed to study these resource theories -- based on the relative entropies between Choi states of multitime processes -- are not monotonic under the allowed transformations. In this letter we detail modified relative-entropy-based resource quantifiers and prove that they are indeed monotonic in our resource theories. We re-interpret our numerical results in terms of these new relative entropy monotones, arriving at the same empirical conclusions: DD can be understood as temporal resource distillation, and improvements to noise reduction via our multitimescale optimal dynamical decoupling (MODD) method coincide with a decrease in the corresponding non-Markovianity monotone.
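For context, a minimal sketch of how relative-entropy resource monotones are typically constructed and why they are monotonic; the paper's modified quantifiers, defined on Choi states of multitime processes, differ in their details. Here $\mathcal{F}$ denotes the set of free objects of the resource theory.

```latex
% Generic relative-entropy monotone (standard construction; the paper's
% modified quantifiers for multitime processes differ in detail).
\[
  D(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho\left(\log\rho - \log\sigma\right)\right],
  \qquad
  M(\rho) = \min_{\sigma \in \mathcal{F}} D(\rho \,\|\, \sigma).
\]
% Monotonicity under a free operation $\Lambda$ (which maps $\mathcal{F}$
% into itself) follows from the data-processing inequality:
\[
  M\big(\Lambda(\rho)\big)
  = \min_{\sigma \in \mathcal{F}} D\big(\Lambda(\rho) \,\|\, \sigma\big)
  \le \min_{\sigma \in \mathcal{F}} D\big(\Lambda(\rho) \,\|\, \Lambda(\sigma)\big)
  \le \min_{\sigma \in \mathcal{F}} D(\rho \,\|\, \sigma)
  = M(\rho).
\]
```

The first inequality holds because the free operation maps the minimisation set into itself; when a quantifier's minimisation set is not preserved by the allowed transformations, monotonicity can fail, which is the defect the modified quantifiers repair.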
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experimental results on oscillating systems, videos and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
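As a rough illustration of this training scheme, a minimal self-contained sketch of MCMC-based maximum likelihood for an energy-based latent prior; the names (`EnergyPrior`, `langevin_sample`) are illustrative stand-ins, not the paper's actual architecture or API.

```python
# Sketch: MLE for an energy-based latent prior via short-run Langevin MCMC.
import torch

class EnergyPrior(torch.nn.Module):
    """Energy E_theta(z); prior p(z) proportional to exp(-E_theta(z))."""
    def __init__(self, dim=16, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, 1))
    def forward(self, z):
        # Learned correction energy on top of a standard Gaussian base.
        return self.net(z).squeeze(-1) + 0.5 * (z ** 2).sum(-1)

def langevin_sample(energy, z, steps=30, step_size=0.1):
    """Short-run Langevin dynamics targeting exp(-energy(z))."""
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        z = z - 0.5 * step_size * grad \
            + torch.randn_like(z) * step_size ** 0.5
    return z.detach()

# MLE gradient: lower energy of posterior samples, raise energy of prior
# samples; z_post here is a random stand-in for true posterior samples.
prior = EnergyPrior()
opt = torch.optim.Adam(prior.parameters(), lr=1e-4)
z_post = langevin_sample(prior, torch.randn(32, 16))
z_prior = langevin_sample(prior, torch.randn(32, 16))
loss = prior(z_post).mean() - prior(z_prior).mean()
opt.zero_grad(); loss.backward(); opt.step()
```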
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Provably Efficient Algorithm for Nonstationary Low-Rank MDPs [48.92657638730582]
We make the first effort to investigate nonstationary RL under episodic low-rank MDPs, where both transition kernels and rewards may vary over time.
We propose a parameter-dependent policy optimization algorithm called PORTAL, and further improve PORTAL to its parameter-free version of Ada-PORTAL.
For both algorithms, we provide upper bounds on the average dynamic suboptimality gap, which show that as long as the nonstationarity is not significantly large, PORTAL and Ada-PORTAL are sample-efficient and can achieve an arbitrarily small average dynamic suboptimality gap with polynomial sample complexity.
arXiv Detail & Related papers (2023-08-10T09:52:44Z)
- Stochastic Modified Equations and Dynamics of Dropout Algorithm [4.811269936680572]
Dropout is a widely utilized regularization technique in the training of neural networks.
Its underlying mechanism and its impact on achieving good generalization remain poorly understood.
arXiv Detail & Related papers (2023-05-25T08:42:25Z)
- Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation technologies and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z)
- Spatio-temporally separable non-linear latent factor learning: an application to somatomotor cortex fMRI data [0.0]
Models of fMRI data that can perform whole-brain discovery of latent factors are understudied.
New methods for efficient spatial weight-sharing are critical to deal with the high dimensionality of the data and the presence of noise.
Our approach is evaluated on data with multiple motor sub-tasks to assess whether the model captures disentangled latent factors that correspond to each sub-task.
arXiv Detail & Related papers (2022-05-26T21:30:22Z)
- Quantum Dynamical Resource Theory under Resource Non-increasing Framework [0.0]
We show that maximally incoherent operations (MIO) and incoherent operations (IO) in the static coherence resource theory are free in the sense of dynamical coherence.
We also present convenient measures and give the analytic calculation for the amplitude damping channel.
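For reference, the standard static-coherence definitions of the two free-operation classes named above, with $\mathcal{I}$ the set of states diagonal in the fixed incoherent basis:

```latex
% Maximally incoherent operations: the channel preserves incoherence.
\[
  \Lambda \in \mathrm{MIO} \iff \Lambda(\delta) \in \mathcal{I}
  \quad \text{for all } \delta \in \mathcal{I}.
\]
% Incoherent operations: some Kraus decomposition preserves incoherence
% branch by branch (inclusion understood up to normalisation).
\[
  \Lambda \in \mathrm{IO} \iff \Lambda(\cdot) = \sum_n K_n (\cdot) K_n^{\dagger}
  \quad \text{with } K_n \mathcal{I} K_n^{\dagger} \subseteq \mathcal{I}
  \text{ for all } n.
\]
```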
arXiv Detail & Related papers (2022-03-13T04:19:01Z)
- Heavy-tailed denoising score matching [5.371337604556311]
We develop an iterative noise scaling algorithm to consistently initialise the multiple levels of noise in Langevin dynamics.
On the practical side, our use of heavy-tailed DSM leads to improved score estimation, controllable sampling convergence, and more balanced unconditional generative performance for imbalanced datasets.
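As a generic illustration of multi-level noise initialisation for annealed Langevin dynamics, a minimal sketch with a geometric schedule; this is a common baseline construction, not necessarily the paper's iterative noise-scaling algorithm, and the toy Gaussian score at the end is purely illustrative.

```python
# Sketch: geometric noise levels feeding an annealed Langevin sampler.
import numpy as np

def geometric_noise_levels(sigma_max, sigma_min, n_levels):
    """Noise scales sigma_1 > ... > sigma_L with a constant ratio,
    so each level's samples stay within reach of the next."""
    ratio = (sigma_min / sigma_max) ** (1.0 / (n_levels - 1))
    return sigma_max * ratio ** np.arange(n_levels)

def annealed_langevin(score_fn, x, sigmas, steps=20, eps=2e-5):
    """Annealed Langevin dynamics with per-level step sizes scaled
    by (sigma_i / sigma_L)^2, as in standard NCSN-style samplers."""
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps):
            x = x + 0.5 * alpha * score_fn(x, sigma) \
                + np.sqrt(alpha) * np.random.randn(*x.shape)
    return x

sigmas = geometric_noise_levels(sigma_max=10.0, sigma_min=0.01, n_levels=10)
x0 = np.random.randn(8, 2) * sigmas[0]  # initialise at the largest scale
# Toy score of N(0, sigma^2 I), i.e. -x / sigma^2, just to run the loop.
x = annealed_langevin(lambda x, s: -x / s**2, x0, sigmas)
```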
arXiv Detail & Related papers (2021-12-17T22:04:55Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
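Restating the quoted result symbolically (up to the normalisation of the overall noise magnitude imposed by the paper's constraint), with $g = \nabla \ell(w)$ a gradient sample:

```latex
% Optimal SGLD noise covariance, as stated in the summary above:
\[
  \Sigma^{\ast} \;\propto\; \Big( \mathbb{E}\big[\, g\, g^{\top} \big] \Big)^{1/2}.
\]
```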
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of its stochasticity in that success is still unclear.
We show that multiplicative noise, as it commonly arises from the variance of stochastic gradients, leads to heavy-tailed behaviour in the optimized parameters.
A detailed analysis is conducted in which we describe how key factors, including step size and data, shape this behaviour, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning has shown great potential in cross-view classification in recent years.
Existing LMvSL based methods are incapable of well handling view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
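For orientation, the classic low-rank representation program that methods in this family build on; SLMR's actual objective couples it with modal regression and structural regularisers, so this is context rather than the paper's formulation:

```latex
% Classic low-rank representation (LRR) program, for context only:
\[
  \min_{Z, E}\; \|Z\|_{*} + \lambda \|E\|_{2,1}
  \quad \text{s.t.} \quad X = XZ + E,
\]
% where the nuclear norm \|Z\|_* promotes a low-rank affinity matrix
% and E absorbs sample-specific corruptions.
```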
arXiv Detail & Related papers (2020-03-22T03:57:38Z)