Extraction of Work via a Thermalization Protocol
- URL: http://arxiv.org/abs/2309.04187v1
- Date: Fri, 8 Sep 2023 08:01:40 GMT
- Title: Extraction of Work via a Thermalization Protocol
- Authors: Nicolò Piccione, Benedetto Militello, Anna Napoli, Bruno Bellomo
- Abstract summary: We show that it is possible to exploit a thermalization process to extract work from a resource system $R$ to a bipartite system $S$.
We find the theoretical bounds of the protocol in the general case and show that when applied to the Rabi model it gives rise to a satisfactory extraction of work and efficiency.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This extended abstract contains an outline of the work reported at the
conference IQIS2018. We show that it is possible to exploit a thermalization
process to extract work from a resource system $R$ to a bipartite system $S$.
To do this, we propose a simple protocol in a general setting in the presence
of a single bath at temperature $T$ and then examine it when $S$ is described
by the quantum Rabi model at $T=0$. We find the theoretical bounds of the
protocol in the general case and we show that when applied to the Rabi model it
gives rise to a satisfactory extraction of work and efficiency.
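For orientation, the generic single-bath constraint behind protocols of this kind is the second-law bound below; this is the textbook statement, not necessarily the exact bound derived in the paper, and the efficiency shown is one natural figure of merit rather than the paper's own definition.

```latex
% Textbook bound for work extraction with a single bath at temperature T
% (nonequilibrium free energy F = E - TS of the working systems R and S);
% the paper's specific bounds for the Rabi-model case may take a different form.
W_{\mathrm{ext}} \;\le\; -\Delta F \;=\; -\bigl(\Delta E - T\,\Delta S\bigr),
\qquad
\eta \;=\; \frac{W_{\mathrm{ext}}}{\Delta E_R}
\quad \text{(one possible efficiency, with $\Delta E_R$ the energy released by $R$)}.
```

At $T=0$, as in the Rabi-model example, the bound reduces to $W_{\mathrm{ext}} \le -\Delta E$.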
Related papers
- Work Statistics via Real-Time Effective Field Theory: Application to Work Extraction from Thermal Bath with Qubit Coupling [0.023020018305241332]
We study the possible work extraction via coupling the thermal bath to a qubit of spin, fermion, or topological type.
The amount of work extraction is derived from the work statistics under a cyclic nonequilibrium process.
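For reference, work statistics in a cyclic nonequilibrium process are usually defined through the two-point-measurement scheme sketched below; this is the standard definition, assumed here for context rather than taken from the paper's effective-field-theory construction.

```latex
% Two-point-measurement work distribution for a cyclic drive H(0) = H(\tau):
% measure energy before and after the protocol and record the difference.
P(W) \;=\; \sum_{n,m} p_n \, p_{m|n}\, \delta\!\bigl(W - (E_m - E_n)\bigr),
\qquad
\langle W \rangle \;=\; \sum_{n,m} p_n \, p_{m|n}\, (E_m - E_n),
```

where $p_n$ is the initial (thermal) population of level $E_n$ and $p_{m|n}$ is the transition probability generated by the drive; with the convention that $W$ is work done on the system, net extraction corresponds to $\langle W\rangle < 0$.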
arXiv Detail & Related papers (2025-02-26T04:41:41Z) - Diffusion at Absolute Zero: Langevin Sampling Using Successive Moreau Envelopes [conference paper] [52.69179872700035]
We propose a novel method for sampling from Gibbs distributions of the form $\pi(x)\propto\exp(-U(x))$ with a potential $U(x)$.
Inspired by diffusion models, we propose to consider a sequence $(\pi_{t_k})_k$ of approximations of the target density, for which $\pi_{t_k}\approx \pi$ for $k$ small and, on the other hand, $\pi_{t_k}$ exhibits favorable properties for sampling for $k$ large.
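As a rough illustration of the idea (an assumed 1D double-well potential and a brute-force proximal map; not the authors' algorithm or guarantees), one can run unadjusted Langevin steps on Moreau-envelope smoothings $U_\lambda$ of the potential, shrinking $\lambda$ so that the sampled density moves toward $\pi \propto \exp(-U)$:

```python
import numpy as np

def moreau_grad(U, x, lam, grid):
    """Gradient of the Moreau envelope U_lam at x: (x - prox_{lam U}(x)) / lam.
    The prox is found by brute-force minimisation over a grid (illustration only)."""
    y = grid[np.argmin(U(grid) + (grid - x) ** 2 / (2.0 * lam))]
    return (x - y) / lam

def langevin_moreau(U, lambdas, n_steps=2000, step=1e-2, x0=0.0, seed=0):
    """Unadjusted Langevin sampling through a sequence of Moreau envelopes.
    Early (large lambda) envelopes are smooth and easy to explore; late (small
    lambda) envelopes approach the true potential U."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-5.0, 5.0, 2001)   # prox search grid (toy 1D setting)
    x = x0
    samples = []
    for lam in lambdas:                    # anneal the smoothing parameter
        for _ in range(n_steps):
            g = moreau_grad(U, x, lam, grid)
            x = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal()
            samples.append(x)
    return np.array(samples)

if __name__ == "__main__":
    U = lambda x: (x ** 2 - 1.0) ** 2      # double-well potential (assumed example)
    out = langevin_moreau(U, lambdas=[1.0, 0.3, 0.1, 0.03])
    print("last-stage sample mean/std:", out[-2000:].mean(), out[-2000:].std())
```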
arXiv Detail & Related papers (2025-02-03T13:50:57Z) - Zero-Fluctuation Quantum Work Extraction [0.7252027234425332]
We study the possibility of deterministic protocols for extracting work from quantum systems.
We prove that, with enough copies of the system, such zero-fluctuation protocols always exist if the Hamiltonian has a rational spectrum.
arXiv Detail & Related papers (2024-02-26T19:01:42Z) - Collective advantages in finite-time thermodynamics [0.0]
We show that the dissipated work $W_{\rm diss}\propto N^x$ can be dramatically reduced by considering collective protocols in which interactions are suitably created along the protocol.
As an application of these results, we focus on the erasure of information in finite time and prove a faster convergence to Landauer's bound.
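The reference point here is Landauer's bound, with finite-time operation adding a dissipative overhead; the $1/\tau$ scaling below is the generic slow-driving behaviour, shown only for context and not a result specific to this paper.

```latex
% Landauer's bound for erasing one bit with a bath at temperature T,
% plus the generic slow-driving overhead for a protocol of duration tau.
W_{\mathrm{erasure}} \;\ge\; k_B T \ln 2,
\qquad
W_{\mathrm{erasure}}(\tau) \;\approx\; k_B T \ln 2 + W_{\mathrm{diss}},
\quad W_{\mathrm{diss}} \sim \frac{c}{\tau}.
```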
arXiv Detail & Related papers (2023-06-28T20:10:02Z) - Cooling and work extraction under memory-assisted Markovian thermal
processes [0.0]
We investigate the limits on cooling and work extraction via Markovian thermal processes assisted by a finite-dimensional memory.
For cooling a qubit system, we consider two paradigms: cooling under coherent control and cooling under incoherent control.
For the task of work extraction, we prove that when the target system is a qubit in the excited state, the minimum extraction error achievable by thermal processes (TP) can be approximated by Markovian thermal processes assisted by a large enough memory.
arXiv Detail & Related papers (2023-06-12T05:59:31Z) - Fast Rates for Maximum Entropy Exploration [52.946307632704645]
We address the challenge of exploration in reinforcement learning (RL) when the agent operates in an unknown environment with sparse or no rewards.
We study two different types of the maximum entropy exploration problem.
For visitation entropy, we propose a game-theoretic algorithm that has $\widetilde{\mathcal{O}}(H^3S^2A/\varepsilon^2)$ sample complexity.
For the trajectory entropy, we propose a simple algorithm with a sample complexity of order $\widetilde{\mathcal{O}}(\mathrm{poly}(S,$
arXiv Detail & Related papers (2023-03-14T16:51:14Z) - Controlled remote implementation of operations via graph states [7.238541917115604]
We propose protocols for controlled remote implementation of operations with convincing control power.
Sharing a $(2N+1)$-partite graph state, $2N$ participants collaborate to prepare the stator and realize the operation.
We show, via a positive operator-valued measurement, that the control power of our protocol is reliable.
arXiv Detail & Related papers (2022-10-26T12:50:14Z) - Reward-Free Model-Based Reinforcement Learning with Linear Function
Approximation [92.99933928528797]
We study model-based reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs).
In the planning phase, the agent is given a specific reward function and uses samples collected from the exploration phase to learn a good policy.
We show that to obtain an $\epsilon$-optimal policy for an arbitrary reward function, UCRL-RFE needs to sample at most $\tilde{O}(H^4 d(H + d)\epsilon^{-2})$ episodes.
arXiv Detail & Related papers (2021-10-12T23:03:58Z) - Quantum double aspects of surface code models [77.34726150561087]
We revisit the Kitaev model for fault tolerant quantum computing on a square lattice with underlying quantum double $D(G)$ symmetry.
We show how our constructions generalise to $D(H)$ models based on a finite-dimensional Hopf algebra $H$.
arXiv Detail & Related papers (2021-06-25T17:03:38Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal
Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z) - Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep
Multi-Agent Reinforcement Learning [66.94149388181343]
We present a new version of a popular $Q$-learning algorithm for MARL.
We show that it can recover the optimal policy even with access to $Q^*$.
We also demonstrate improved performance on predator-prey and challenging multi-agent StarCraft benchmark tasks.
arXiv Detail & Related papers (2020-06-18T18:34:50Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
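For context, the object being analysed is the standard asynchronous Q-learning iteration along a single Markovian trajectory. The sketch below is my own illustration with an assumed toy two-state MDP, not code from the paper, and it omits the variance-reduction component; it only shows the update rule whose sample complexity the bound describes.

```python
import numpy as np

class TwoStateChain:
    """Toy continuing MDP (assumed for illustration): action 1 moves toward
    state 1, which pays reward 1; action 0 stays put with no reward."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.s = 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        if a == 1 and self.rng.random() < 0.9:    # noisy transition
            self.s = 1
        reward = 1.0 if self.s == 1 else 0.0
        return self.s, reward, False              # continuing task, no terminal state

def asynchronous_q_learning(env, n_states, n_actions, n_steps,
                            gamma=0.9, lr=0.1, eps=0.1, seed=0):
    """Asynchronous Q-learning: only the (state, action) entry visited along a
    single Markovian trajectory is updated at each step."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy behaviour policy along one trajectory
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = env.step(a)
        target = r + (0.0 if done else gamma * np.max(Q[s_next]))
        Q[s, a] += lr * (target - Q[s, a])        # update only the visited entry
        s = env.reset() if done else s_next
    return Q

if __name__ == "__main__":
    Q = asynchronous_q_learning(TwoStateChain(), n_states=2, n_actions=2, n_steps=20000)
    print(Q)
```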
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Simple scheme for extracting work with a single bath [0.0]
The protocol is based on a recently proposed definition of work involving only a single bath.
We quantify both the extracted work and the ideal efficiency of the process, also giving maximum bounds for them.
Our proposal makes use of simple operations that do not require fine control.
arXiv Detail & Related papers (2018-06-29T12:44:38Z)