Coherence filtration under strictly incoherent operations
- URL: http://arxiv.org/abs/2305.15741v2
- Date: Fri, 2 Jun 2023 02:29:22 GMT
- Title: Coherence filtration under strictly incoherent operations
- Authors: C. L. Liu and C. P. Sun
- Abstract summary: The aim of this task is to transform a given state $\rho$ into another one $\rho^\prime$ whose fidelity with the maximally coherent state is maximal by using strictly incoherent operations.
We find that the maximal fidelity between $\rho^\prime$ and the maximally coherent state is given by a multiple of the $\Delta$ robustness of coherence $R(\rho\|\Delta\rho):=\min\{\uplambda|\rho\leq\uplambda\Delta\rho\}$, which provides $R(\rho\|\Delta\rho)$ an operational interpretation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study the task of coherence filtration under strictly incoherent
operations in this paper. The aim of this task is to transform a given state
$\rho$ into another one $\rho^\prime$ whose fidelity with the maximally
coherent state is maximal by using stochastic strictly incoherent operations.
We find that the maximal fidelity between $\rho^\prime$ and the maximally
coherent state is given by a multiple of the $\Delta$ robustness of coherence
$R(\rho\|\Delta\rho):=\min\{\uplambda|\rho\leq\uplambda\Delta\rho\}$, which
provides $R(\rho\|\Delta\rho)$ an operational interpretation. Finally, we
provide a coherence measure based on the task of coherence filtration.
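As a concrete illustration (mine, not code from the paper), $R(\rho\|\Delta\rho)$ can be computed numerically when $\Delta\rho$ has full rank: the operator constraint $\rho\leq\uplambda\Delta\rho$ is then equivalent to $\uplambda\geq\lambda_{\max}(D^{-1/2}\rho D^{-1/2})$ with $D=\Delta\rho$, so the minimum is that largest eigenvalue. A minimal NumPy sketch under that full-rank assumption:

```python
import numpy as np

def delta_robustness(rho: np.ndarray) -> float:
    """R(rho || Delta rho) = min{lam : rho <= lam * Delta(rho)},
    with Delta the completely dephasing map (keep only the diagonal).

    For full-rank Delta(rho) the constraint is equivalent to
    lam >= lambda_max(D^{-1/2} rho D^{-1/2}) with D = Delta(rho),
    so the minimum is that largest eigenvalue.
    """
    p = np.diag(rho).real                    # populations: diagonal of rho
    d_inv_sqrt = np.diag(1.0 / np.sqrt(p))   # D^{-1/2}; assumes all p > 0
    m = d_inv_sqrt @ rho @ d_inv_sqrt
    return float(np.linalg.eigvalsh(m)[-1])  # eigvalsh sorts ascending

# Maximally coherent qubit state |+><+|: R equals the dimension, 2
r_plus = delta_robustness(np.full((2, 2), 0.5))

# Incoherent (diagonal) state: R = 1, the minimum possible value
r_diag = delta_robustness(np.diag([0.3, 0.7]))
```

For the maximally coherent qubit state the value is $2$, and for any diagonal (incoherent) state it is $1$, matching the intuition that $R(\rho\|\Delta\rho)-1$ vanishes exactly on incoherent states.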
Related papers
- Control of the von Neumann Entropy for an Open Two-Qubit System Using Coherent and Incoherent Drives [50.24983453990065]
This article develops an approach for manipulating the von Neumann entropy $S(\rho(t))$ of an open two-qubit system with coherent control and incoherent control inducing time-dependent decoherence rates.
The following goals are considered: (a) minimizing or maximizing the final entropy $S(\rho(T))$; (b) steering $S(\rho(T))$ to a given target value; (c) steering $S(\rho(T))$ to a target value while satisfying a pointwise state constraint on $S(\rho(t))$.
arXiv Detail & Related papers (2024-05-10T10:01:10Z) - On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm [59.65871549878937]
This paper considers RMSProp and its momentum extension and establishes a convergence rate measured by $\frac{1}{T}\sum_{k=1}^{T}\mathbb{E}\|\nabla f(x_k)\|_1$.
Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$.
Our convergence rate can be considered to be analogous to the corresponding $\frac{1}{T}\sum_{k=1}^{T}$-averaged rate of SGD measured in the $\ell_2$ norm.
arXiv Detail & Related papers (2024-02-01T07:21:32Z) - Best-of-Both-Worlds Algorithms for Linear Contextual Bandits [11.94312915280916]
We study best-of-both-worlds algorithms for $K$-armed linear contextual bandits.
Our algorithms deliver near-optimal regret bounds in both the stochastic and adversarial regimes.
arXiv Detail & Related papers (2023-12-24T08:27:30Z) - Cooperative Multi-Agent Reinforcement Learning: Asynchronous
Communication and Linear Function Approximation [77.09836892653176]
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes.
We propose a provably efficient value-based algorithm that enables asynchronous communication.
We show that a minimal $\Omega(dM)$ communication complexity is required to improve the performance through collaboration.
arXiv Detail & Related papers (2023-05-10T20:29:29Z) - Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR [58.40575099910538]
We study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$.
We show the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes.
We show that our algorithm achieves the optimal regret of $\widetilde{O}(\tau^{-1}\sqrt{SAK})$ under a continuity assumption and in general attains a near-optimal regret.
arXiv Detail & Related papers (2023-02-07T02:22:31Z) - Group-invariant max filtering [4.396860522241306]
We construct a family of $G$-invariant real-valued functions on $V$ that we call max filters.
In the case where $V=\mathbb{R}^d$ and $G$ is finite, a suitable max filter bank separates orbits, and is even bilipschitz in the quotient metric.
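To make the construction concrete, here is a minimal sketch (my illustration, not the authors' code) of a single max filter $\langle\langle x, y\rangle\rangle = \max_{g\in G}\langle x, gy\rangle$ for a finite group $G$ acting on $\mathbb{R}^d$, taking $G = S_3$ permuting coordinates:

```python
import numpy as np
from itertools import permutations

def max_filter(x, y, group):
    """Max filter <<x, y>> = max over g in G of <x, g y>, for a finite
    group G given as a list of d x d matrices acting on R^d.
    For an orthogonal action, the value depends on x only through its
    G-orbit, which is what makes banks of such filters G-invariant."""
    return max(float(x @ (g @ y)) for g in group)

# G = S_3 acting on R^3 by coordinate permutation (permutation matrices)
perms = [np.eye(3)[list(p)] for p in permutations(range(3))]
x = np.array([3.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])

val = max_filter(x, y, perms)                    # rearrangement: 3*3 + 2*2 + 1*1
same_orbit = max_filter(perms[1] @ x, y, perms)  # equals val: orbit invariance
```

The invariance follows because $\langle hx, gy\rangle = \langle x, h^{\top}gy\rangle$ and $h^{\top}g$ ranges over all of $G$ as $g$ does; permutation matrices are orthogonal, so this applies here.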
arXiv Detail & Related papers (2022-05-27T15:18:08Z) - Building Kohn-Sham potentials for ground and excited states [0.0]
We show that, given $k$ and a target density $\rho$, there exist potentials having $k^{\text{th}}$ bound mixed states whose densities are arbitrarily close to $\rho$.
We present an inversion algorithm taking into account degeneracies, removing the generic blocking behavior of standard ones.
arXiv Detail & Related papers (2021-01-04T17:47:08Z) - Examining the validity of Schatten-$p$-norm-based functionals as
coherence measures [0.0]
It has been asked whether the two classes of Schatten-$p$-norm-based functionals $C_p(\rho)=\min_{\sigma\in\mathcal{I}}\|\rho-\sigma\|_p$ and $\tilde{C}_p(\rho)=\|\rho-\Delta\rho\|_p$ with $p\geq 1$ are valid coherence measures under incoherent operations, strictly incoherent operations, and genuinely incoherent operations.
We prove that
arXiv Detail & Related papers (2020-09-13T01:42:00Z) - Spectral density estimation with the Gaussian Integral Transform [91.3755431537592]
The spectral density operator $\hat{\rho}(\omega)=\delta(\omega-\hat{H})$ plays a central role in linear response theory.
We describe a near optimal quantum algorithm providing an approximation to the spectral density.
arXiv Detail & Related papers (2020-04-10T03:14:38Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\mathrm{dim}_E}\right)$, then one can find the optimal policy using $O\left(\mathrm{dim}_E\right)$.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.