Reconfigurable unitary transformations of optical beam arrays
- URL: http://arxiv.org/abs/2407.06981v1
- Date: Tue, 9 Jul 2024 15:56:35 GMT
- Title: Reconfigurable unitary transformations of optical beam arrays
- Authors: Aldo C. Martinez-Becerril, Siwei Luo, Liu Li, Jordan Pagé, Lambert Giner, Raphael A. Abrahao, Jeff S. Lundeen
- Abstract summary: We demonstrate the key promise of an MPLC: the ability to impose an arbitrary unitary transformation that can be reconfigured dynamically.
We experimentally test the full gamut of unitary transformations for a system of two parallel beams and make a map of their fidelity.
This high fidelity suggests MPLCs are a useful tool for implementing the unitary transformations that comprise quantum and classical information processing.
- Score: 1.6820991036487616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatial transformations of light are ubiquitous in optics, with examples ranging from simple imaging with a lens to quantum and classical information processing in waveguide meshes. Multi-plane light converter (MPLC) systems have emerged as a platform that promises completely general spatial transformations, i.e., a universal unitary. However, until now, MPLC systems have demonstrated transformations that are far from general, e.g., converting from a Gaussian to a Laguerre-Gauss mode. Here, we demonstrate the key promise of an MPLC: the ability to impose an arbitrary unitary transformation that can be reconfigured dynamically. Specifically, we consider transformations on superpositions of parallel free-space beams arranged in an array, which is a common information encoding in photonics. We experimentally test the full gamut of unitary transformations for a system of two parallel beams and make a map of their fidelity. We obtain an average transformation fidelity of $0.85 \pm 0.03$. This high fidelity suggests MPLCs are a useful tool for implementing the unitary transformations that comprise quantum and classical information processing.
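For concreteness, the two-beam experiment amounts to sweeping the full set of 2x2 unitaries and scoring how well each one is realized. The sketch below is a minimal illustration and not the authors' code: the function names (`u2`, `fidelity`) are hypothetical, the overlap fidelity $|\mathrm{Tr}(U_\mathrm{target}^\dagger U_\mathrm{realized})|/2$ is an assumed metric (the paper does not state its exact definition), and the "realized" unitary is mocked up as a noise-perturbed copy of the target, re-unitarized by a polar decomposition.

```python
import numpy as np

def u2(theta, phi, lam):
    """Arbitrary 2x2 unitary (up to a global phase), parameterized by three angles."""
    return np.array([
        [np.cos(theta),                     -np.exp(1j * lam) * np.sin(theta)],
        [np.exp(1j * phi) * np.sin(theta),   np.exp(1j * (phi + lam)) * np.cos(theta)],
    ])

def fidelity(u_target, u_realized):
    """Assumed overlap fidelity |Tr(U_target^dagger U_realized)| / d."""
    d = u_target.shape[0]
    return abs(np.trace(u_target.conj().T @ u_realized)) / d

# Sweep a coarse grid over the full gamut of 2x2 unitaries and build a fidelity map.
# The "realized" transformation is the target plus small complex noise, projected
# back to the nearest unitary via polar decomposition (a stand-in for the measured unitary).
rng = np.random.default_rng(0)
angles = np.linspace(0, np.pi, 5)
fids = []
for theta in angles:
    for phi in angles:
        for lam in angles:
            u_t = u2(theta, phi, lam)
            noisy = u_t + 0.05 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
            w, _, vh = np.linalg.svd(noisy)
            u_real = w @ vh                      # nearest unitary to the noisy matrix
            fids.append(fidelity(u_t, u_real))
print(f"average fidelity over the map: {np.mean(fids):.3f}")
```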
Related papers
- A Variational Approach to Learning Photonic Unitary Operators [0.0]
We harness the high dimensional nature of structured light modulated in the transverse spatial degree of freedom to learn unitary operations.
Our work advances high dimensional information processing and can be adapted to both process and quantum state tomography of unknown states and channels.
arXiv Detail & Related papers (2024-06-09T10:36:27Z) - How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps on understanding in-context learning (ICL) in more complex scenarios, by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
arXiv Detail & Related papers (2023-10-16T17:40:49Z) - Efficient Quantum Algorithm for All Quantum Wavelet Transforms [0.08968838300743379]
We develop a simple yet efficient quantum algorithm for executing any wavelet transform on a quantum computer.
Our proposed quantum wavelet transforms could be used in quantum computing algorithms in a similar manner to their well-established counterpart, the quantum Fourier transform.
arXiv Detail & Related papers (2023-09-17T19:02:08Z) - Universal Unitary Photonic Circuits by Interlacing Discrete Fractional Fourier Transform and Phase Modulation [0.0]
We introduce a novel parameterization of complex unitary matrices, which allows for the efficient implementation of arbitrary linear discrete unitary operators.
We show that such a configuration can represent arbitrary unitary operators with $N+1$ phase layers.
We propose an integrated photonic circuit realization of this architecture with coupled waveguide arrays and reconfigurable phase modulators (a toy numerical sketch of the interlaced construction is given after this list).
arXiv Detail & Related papers (2023-07-14T00:23:14Z) - B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers [97.75725574963197]
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.
We show that a sequence of such transformations induces a single linear transformation that faithfully summarises the full model computations.
We show that the resulting explanations are of high visual quality and perform well under quantitative interpretability metrics.
arXiv Detail & Related papers (2023-06-19T12:54:28Z) - ParGAN: Learning Real Parametrizable Transformations [50.51405390150066]
We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations.
The proposed generator takes as input both an image and a parametrization of the transformation.
We show how, with disjoint image domains and no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
arXiv Detail & Related papers (2022-11-09T16:16:06Z) - Illumination Adaptive Transformer [66.50045722358503]
We propose a lightweight, fast Illumination Adaptive Transformer (IAT).
IAT decomposes the light transformation pipeline into local and global ISP components.
We have extensively evaluated IAT on multiple real-world datasets.
arXiv Detail & Related papers (2022-05-30T06:21:52Z) - Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks [126.33843752332139]
We introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed as LW-Transformer.
We apply LW-Transformer to a set of Transformer-based networks, and quantitatively measure them on three vision-and-language tasks and six benchmark datasets.
Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks.
arXiv Detail & Related papers (2022-04-16T11:30:26Z) - Processing entangled photons in high dimensions with a programmable light converter [0.0]
A programmable processor of entangled states is crucial for the certification, manipulation and distribution of high-dimensional entanglement.
Here, we demonstrate a reconfigurable processor of entangled photons in high dimensions based on multi-plane light conversion (MPLC).
We certify three-dimensional entanglement in two mutually unbiased bases, perform 400 arbitrary random transformations on entangled photons, and convert the mode basis of entangled photons for entanglement distribution.
arXiv Detail & Related papers (2021-08-04T19:40:55Z) - High-dimensional quantum Fourier transform of twisted light [0.0]
An implementation scheme of the $d$-dimensional Fourier transform acting on single photons is known that uses the path encoding.
We present an alternative design that uses the orbital angular momentum as a carrier of information and needs only $O(\sqrt{d}\log d)$ elements.
arXiv Detail & Related papers (2021-01-28T10:44:46Z) - Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase-stability and can rely on the large scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
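To make the interlaced architecture from the "Universal Unitary Photonic Circuits by Interlacing Discrete Fractional Fourier Transform and Phase Modulation" entry above concrete, the following is a minimal numerical sketch. It only assembles a product of N+1 reconfigurable phase layers separated by fixed mixing layers and checks that the result is unitary; the plain DFT is used here as a stand-in for the discrete fractional Fourier transform, the function names are hypothetical, and the sketch does not demonstrate the universality claim.

```python
import numpy as np

def dft_matrix(n):
    """Unitary n x n discrete Fourier transform matrix."""
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

def interlaced_unitary(phases):
    """Build U = P_{N+1} F P_N F ... F P_1 from an (N+1) x n array of phases.

    Each P_k is a diagonal (reconfigurable) phase layer; F is a fixed mixing
    layer, here the plain DFT as a stand-in for the fractional Fourier transform.
    """
    n = phases.shape[1]
    F = dft_matrix(n)
    U = np.diag(np.exp(1j * phases[0]))
    for layer in phases[1:]:
        U = np.diag(np.exp(1j * layer)) @ F @ U
    return U

rng = np.random.default_rng(0)
n = 4                                               # number of optical modes
phases = rng.uniform(0, 2 * np.pi, size=(n + 1, n))  # N+1 phase layers, n(n+1) real parameters
U = interlaced_unitary(phases)
print(np.allclose(U.conj().T @ U, np.eye(n)))        # True: the interlaced product is unitary
```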
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.