Lineshape Optimization in Inhomogeneous $\Lambda$-type Quantum Memory
- URL: http://arxiv.org/abs/2405.14013v1
- Date: Wed, 22 May 2024 21:43:15 GMT
- Title: Lineshape Optimization in Inhomogeneous $\Lambda$-type Quantum Memory
- Authors: Kai Shinbrough, Donny R. Pearson Jr., Virginia O. Lorenz, Elizabeth A. Goldschmidt
- Abstract summary: Photonic quantum memory is a crucial elementary operation in photonic quantum information processing.
We focus on inhomogeneously broadened ensembles of $\Lambda$-type quantum emitters, which have long coherence lifetimes and broad bandwidth compatibility.
We investigate the properties of electromagnetically induced transparency (EIT) for a survey of inhomogeneous lineshapes that are straightforward to realize experimentally.
We compare the optimal EIT efficiency to the well-known atomic frequency comb (AFC) protocol, which also relies on spectral shaping of the inhomogeneous broadening.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Photonic quantum memory is a crucial elementary operation in photonic quantum information processing. While many physically distinct memory protocols and hardware implementations have been applied to this task, the development of a quantum memory performant in all relevant metrics simultaneously (e.g., efficiency, bandwidth, lifetime, etc.) is still an open challenge. In this work, we focus on inhomogeneously broadened ensembles of $\Lambda$-type quantum emitters, which have long coherence lifetimes and broad bandwidth compatibility, but tend to exhibit low efficiency, in part due to technical constraints on medium growth and preparation, and in part due to inefficient use of a key resource in these systems: the inhomogeneously broadened excited state lineshape. We investigate the properties of electromagnetically induced transparency (EIT) for a survey of inhomogeneous lineshapes that are straightforward to realize experimentally, and optimize the memory efficiency for each lineshape over a large range of experimental parameters. We compare the optimal EIT efficiency to the well-known atomic frequency comb (AFC) protocol, which also relies on spectral shaping of the inhomogeneous broadening, and observe that with sufficient control field power the optimized lineshapes allow more efficient storage. Finally, we optimize over the inhomogeneous lineshape in a protocol agnostic fashion by numerically constructing the linear integral kernel describing the memory interaction and using a singular value decomposition and interpolation procedure to ensure optimality of the resulting lineshape.
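To make the final step of the abstract concrete, the sketch below illustrates the protocol-agnostic idea in miniature: discretize the linear integral kernel that maps the input signal field to the retrieved field, then read the maximum achievable efficiency and the optimal input mode off its singular value decomposition. This is a minimal sketch under toy assumptions; the kernel, coupling, and decay values are illustrative placeholders rather than the paper's $\Lambda$-system model, and the paper's lineshape-interpolation loop is not reproduced.

```python
# Toy illustration of the SVD-based efficiency bound for a linear memory kernel.
# All physical details here are assumptions for illustration only; a real kernel
# follows from the Maxwell-Bloch equations and the chosen inhomogeneous lineshape.
import numpy as np

def toy_memory_kernel(n=200, t_max=5.0, coupling=0.6, decay=1.0):
    """Discretized kernel K[i, j]: response of the output field at time t_i
    to the input field at time t_j (causal, exponentially decaying toy model)."""
    t = np.linspace(0.0, t_max, n)
    dt = t[1] - t[0]
    t_out, t_in = np.meshgrid(t, t, indexing="ij")
    K = np.where(t_out >= t_in, coupling * np.exp(-decay * (t_out - t_in)), 0.0)
    # The dt factor makes the matrix approximate the integral operator on L2.
    return K * dt, t

K, t = toy_memory_kernel()
dt = t[1] - t[0]
# The leading singular value bounds the efficiency over all input pulse shapes;
# the matching right singular vector is the optimal (discretized) input envelope.
U, s, Vh = np.linalg.svd(K)
eta_bound = s[0] ** 2
optimal_input = Vh[0] / np.sqrt(dt)  # rescale back to a unit-norm field envelope
print(f"toy efficiency bound: {eta_bound:.3f}")
```

In the paper this SVD step sits inside an outer optimization over the inhomogeneous lineshape (with an interpolation procedure to keep the lineshape physical); the snippet only shows how a single kernel yields an efficiency bound and an optimal input mode.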