Control of the von Neumann Entropy for an Open Two-Qubit System Using Coherent and Incoherent Drives
- URL: http://arxiv.org/abs/2405.06365v1
- Date: Fri, 10 May 2024 10:01:10 GMT
- Title: Control of the von Neumann Entropy for an Open Two-Qubit System Using Coherent and Incoherent Drives
- Authors: Oleg Morzhin, Alexander Pechen
- Abstract summary: This article develops an approach for manipulating the von Neumann entropy $S(\rho(t))$ of an open two-qubit system with coherent control and incoherent control inducing time-dependent decoherence rates.
The following goals are considered: (a) minimizing or maximizing the final entropy $S(\rho(T))$; (b) steering $S(\rho(T))$ to a given target value; (c) steering $S(\rho(T))$ to a target value while satisfying the pointwise state constraint $S(\rho(t)) \leq \overline{S}$ for a given $\overline{S}$; (d) keeping $S(\rho(t))$ constant on a given time interval.
- Score: 50.24983453990065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article is devoted to developing an approach for manipulating the von Neumann entropy $S(\rho(t))$ of an open two-qubit system with coherent control and incoherent control inducing time-dependent decoherence rates. The following goals are considered: (a) minimizing or maximizing the final entropy $S(\rho(T))$; (b) steering $S(\rho(T))$ to a given target value; (c) steering $S(\rho(T))$ to a target value and satisfying the pointwise state constraint $S(\rho(t)) \leq \overline{S}$ for a given $\overline{S}$; (d) keeping $S(\rho(t))$ constant on a given time interval. Under the Markovian dynamics determined by a Gorini--Kossakowski--Sudarshan--Lindblad type master equation, which contains coherent and incoherent controls, one- and two-step gradient projection methods and a genetic algorithm have been adapted, taking into account the specifics of the objective functionals. The corresponding numerical results are provided and discussed.
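The setup the abstract describes can be made concrete with a short numerical sketch. Below is a minimal Python illustration (not the authors' code): a two-qubit density matrix $\rho(t)$ evolves under a GKSL-type master equation with a coherent control $u(t)$ entering the Hamiltonian and an incoherent control $n(t)$ acting as a time-dependent decoherence rate, and the von Neumann entropy $S(\rho) = -\mathrm{Tr}(\rho \ln \rho)$ is evaluated at the final time $T$. The drift Hamiltonian, coupling operator, decay channels, and control shapes are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pauli matrices and single-qubit operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # lowering operator
I2 = np.eye(2, dtype=complex)

# Two-qubit operators (illustrative assumptions, not the paper's model)
H0 = np.kron(sz, I2) + 0.5 * np.kron(I2, sz)   # drift Hamiltonian
V  = np.kron(sx, I2) + np.kron(I2, sx)         # coherent-control coupling
L1 = np.kron(sm, I2)                           # decay channel on qubit 1
L2 = np.kron(I2, sm)                           # decay channel on qubit 2

u = lambda t: 0.3 * np.cos(2.0 * t)    # coherent control (assumed shape)
n = lambda t: 0.1 * (1.0 + np.sin(t))  # incoherent control -> rate gamma(t) >= 0

def dissipator(L, rho):
    # GKSL dissipator D[L](rho) = L rho L^dag - (1/2){L^dag L, rho}
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def rhs(t, y):
    rho = y.reshape(4, 4)
    H = H0 + u(t) * V
    drho = -1j * (H @ rho - rho @ H)                             # coherent part
    drho += n(t) * (dissipator(L1, rho) + dissipator(L2, rho))   # incoherent part
    return drho.ravel()

rho0 = np.zeros((4, 4), dtype=complex)
rho0[0, 0] = 1.0                 # pure initial state |00><00|, so S(rho(0)) = 0
T = 5.0
sol = solve_ivp(rhs, (0.0, T), rho0.ravel(), rtol=1e-8, atol=1e-10)
rho_T = sol.y[:, -1].reshape(4, 4)

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho), computed from eigenvalues; drop numerical zeros
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

print("S(rho(T)) =", von_neumann_entropy(rho_T))
```

The control goals (a)-(d) then amount to optimizing this scalar output over the control functions $u(\cdot)$ and $n(\cdot)$, which is where the paper's gradient projection methods and genetic algorithm come in.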
Related papers
- Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization [38.67914746910537]
We prove an $\mathcal{O}(t^{-1})$ lower bound rate for the OT map, using the similarity between Laguerre cell estimation and density support estimation.
To nearly achieve the desired fast rate, we design an entropic regularization scheme decreasing with the number of samples.
arXiv Detail & Related papers (2024-05-23T11:46:03Z)
- Optimization of Time-Dependent Decoherence Rates and Coherent Control for a Qutrit System [77.34726150561087]
Incoherent control makes the decoherence rates depend on time in a specific controlled manner.
We consider the problem of maximizing the Hilbert-Schmidt overlap $\mathrm{Tr}[\rho(T)\rho_{\rm target}]$ between the system's final state $\rho(T)$ and a given target state $\rho_{\rm target}$ (see the sketch after this list).
arXiv Detail & Related papers (2023-08-08T01:28:50Z)
- Unitarity estimation for quantum channels [7.323367190336826]
Unitarity estimation is a basic and important problem in quantum device certification and benchmarking.
We provide a unified framework for unitarity estimation, which induces ancilla-efficient algorithms.
We show that both the $d$-dependence and $\epsilon$-dependence of our algorithms are optimal.
arXiv Detail & Related papers (2022-12-19T09:36:33Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Optimal and instance-dependent guarantees for Markovian linear stochastic approximation [47.912511426974376]
We show a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme.
We derive corollaries of these results for policy evaluation with Markov noise.
arXiv Detail & Related papers (2021-12-23T18:47:50Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
- Naive Exploration is Optimal for Online LQR [49.681825576239355]
We show that the optimal regret scales as $\widetilde{\Theta}(\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T})$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state.
Our lower bounds rule out the possibility of a $\mathrm{poly}(\log T)$-regret algorithm, which had been ...
arXiv Detail & Related papers (2020-01-27T03:44:54Z)
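The Hilbert-Schmidt overlap used as the objective in the qutrit-control entry above reduces to a single trace, $\mathrm{Tr}[\rho(T)\rho_{\rm target}]$. A minimal sketch follows, with random density matrices as hypothetical placeholders for the controlled final state and the target; in the paper, $\rho(T)$ would come from the controlled dynamics.

```python
import numpy as np

def random_density_matrix(d, rng):
    # Hypothetical placeholder: a random full-rank density matrix
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T          # positive semidefinite, Hermitian
    return rho / np.trace(rho).real  # unit trace

rng = np.random.default_rng(0)
rho_T = random_density_matrix(3, rng)       # qutrit: d = 3
rho_target = random_density_matrix(3, rng)

# Hilbert-Schmidt overlap Tr[rho(T) rho_target]; real for Hermitian states
overlap = np.trace(rho_T @ rho_target).real
print("Hilbert-Schmidt overlap:", overlap)
```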