Projective characterization of higher-order quantum transformations
- URL: http://arxiv.org/abs/2206.06206v3
- Date: Fri, 13 Dec 2024 18:25:34 GMT
- Title: Projective characterization of higher-order quantum transformations
- Authors: Timothée Hoffreumon, Ognyan Oreshkov
- Abstract summary: This work presents a framework for characterizing higher-order quantum transformations using superoperator projectors.
The main novelty of this work is the introduction of the 'prec' connector into the algebra.
This makes it possible to assess the possible signaling structure of any map characterized within the projective framework.
- Abstract: Transformations of transformations, also called higher-order transformations, are a natural concept in information processing and have recently attracted significant interest in the study of quantum causal relations. In this work, a framework for characterizing higher-order quantum transformations that relies on the use of superoperator projectors is presented. More precisely, working with projectors in the Choi-Jamiolkowski picture is shown to provide a handy way of defining the characterization constraints on any class of higher-order transformations. The algebraic properties of these projectors are furthermore identified as a model of multiplicative additive linear logic (MALL). The main novelty of this work is the introduction of the 'prec' connector into the algebra. It is used to characterize maps that are non-signaling from input to output, or the other way around. This makes it possible to assess the possible signaling structure of any map characterized within the projective framework. The properties of the prec connector are moreover shown to yield a normal form for projective expressions, which provides a systematic way to compare different classes of higher-order transformations.
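The basic ingredient of the projective characterization is easy to prototype. Below is a minimal NumPy sketch (not code from the paper, and not its notation) of the simplest instance: in the Choi-Jamiolkowski picture, the Choi matrices of ordinary quantum channels A -> B are exactly the positive operators with the right trace normalization that are fixed points of a projector built from "trace-and-replace" superoperators. The dimensions, helper names, and the random-channel construction are illustrative assumptions; the paper's prec connector combines the same kind of building blocks to encode one-way no-signaling constraints, which are not reproduced here.

```python
# Minimal sketch of a superoperator projector acting on Choi matrices.
# Assumed setup: input system A (dim d_in), output system B (dim d_out),
# Choi operators ordered as H_A (x) H_B.
import numpy as np

d_in, d_out = 2, 3
dims = (d_in, d_out)

def ptrace(W, dims, keep):
    """Partial trace of an operator on C^{dA} (x) C^{dB}; keep subsystem 0 (A) or 1 (B)."""
    dA, dB = dims
    W4 = W.reshape(dA, dB, dA, dB)
    return np.einsum('ajbj->ab', W4) if keep == 0 else np.einsum('iaib->ab', W4)

def tr_B(W):
    """Trace-and-replace on B: trace out the output, put back the maximally mixed state."""
    return np.kron(ptrace(W, dims, keep=0), np.eye(d_out) / d_out)

def tr_AB(W):
    """Trace-and-replace on both systems."""
    return np.trace(W) * np.eye(d_in * d_out) / (d_in * d_out)

def channel_projector(W):
    """P(W) = W - trace-and-replace_B(W) + trace-and-replace_AB(W).
    Its fixed points with W >= 0 and Tr W = d_in are Choi matrices of channels A -> B."""
    return W - tr_B(W) + tr_AB(W)

def random_channel_choi(dA, dB, n_kraus=4, seed=0):
    """Choi matrix of a random CPTP map, built from a random isometry (illustrative only)."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(n_kraus * dB, dA)) + 1j * rng.normal(size=(n_kraus * dB, dA))
    V, _ = np.linalg.qr(G)  # isometry: V^dagger V = I_{dA}
    kraus = [V[k * dB:(k + 1) * dB, :] for k in range(n_kraus)]
    C = np.zeros((dA * dB, dA * dB), dtype=complex)
    for i in range(dA):
        for j in range(dA):
            E_ij = np.zeros((dA, dA))
            E_ij[i, j] = 1.0
            C += np.kron(E_ij, sum(K @ E_ij @ K.conj().T for K in kraus))
    return C

C = random_channel_choi(d_in, d_out)
assert np.allclose(channel_projector(C), C)         # channel Choi matrices are fixed points
assert np.isclose(np.trace(C).real, d_in)           # trace normalization Tr C = d_A

rng = np.random.default_rng(1)
X = rng.normal(size=(d_in * d_out,) * 2)
W_generic = X + X.T                                  # a generic Hermitian operator...
print(np.allclose(channel_projector(W_generic), W_generic))  # ...is generally not a fixed point
```

In the same spirit, other classes of higher-order maps (combs, process matrices, one-way no-signaling maps) correspond to other projectors obtained by combining such trace-and-replace operations, which is what the algebraic (MALL) structure and the prec connector in the paper organize systematically.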
Related papers
- Token Statistics Transformer: Linear-Time Attention via Variational Rate Reduction [29.12836710966048]
We propose a novel transformer attention operator whose computational complexity scales linearly with the number of tokens.
Our results call into question the conventional wisdom that pairwise similarity style attention mechanisms are critical to the success of transformer architectures.
arXiv Detail & Related papers (2024-12-23T18:59:21Z)
- DAPE V2: Process Attention Score as Feature Map for Length Extrapolation [63.87956583202729]
We conceptualize attention as a feature map and apply the convolution operator to mimic the processing methods in computer vision.
The novel insight, which can be adapted to various attention-related models, reveals that the current Transformer architecture has the potential for further evolution.
arXiv Detail & Related papers (2024-10-07T07:21:49Z)
- Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z)
- Optimal Matrix-Mimetic Tensor Algebras via Variable Projection [0.0]
Matrix mimeticity arises from interpreting tensors as operators that can be multiplied, factorized, and analyzed analogously to matrices.
We learn optimal linear mappings and corresponding tensor representations without relying on prior knowledge of the data.
We provide original theory of uniqueness of the transformation and convergence analysis of our variable-projection-based algorithm.
arXiv Detail & Related papers (2024-06-11T04:52:23Z)
- EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention [88.45459681677369]
We propose a novel transformer variant with complex vector attention, named EulerFormer.
It provides a unified theoretical framework to formulate both semantic difference and positional difference.
It is more robust to semantic variations and possesses superior theoretical properties in principle.
arXiv Detail & Related papers (2024-03-26T14:18:43Z)
- Quantum linear algebra is all you need for Transformer architectures [1.660288273261283]
We investigate transformer architectures under the lens of fault-tolerant quantum computing.
We show how to prepare a block encoding of the self-attention matrix, with a new subroutine for the row-wise application of the softmax function.
Our subroutines prepare an amplitude encoding of the transformer output, which can be measured to obtain a prediction.
arXiv Detail & Related papers (2024-02-26T16:31:28Z)
- B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers [97.75725574963197]
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.
We show that a sequence of such transformations induces a single linear transformation that faithfully summarises the full model computations.
We show that the resulting explanations are of high visual quality and perform well under quantitative interpretability metrics.
arXiv Detail & Related papers (2023-06-19T12:54:28Z)
- The tilted CHSH games: an operator algebraic classification [77.34726150561087]
This article introduces a general systematic procedure for solving any binary-input binary-output game.
We then illustrate on the prominent class of tilted CHSH games.
For these games, we derive a complete characterisation of the region exhibiting a quantum advantage.
arXiv Detail & Related papers (2023-02-16T18:33:59Z)
- Fourier-based quantum signal processing [0.0]
Implementing general functions of operators is a powerful tool in quantum computation.
Quantum signal processing is the state of the art for this task.
We present an algorithm for Hermitian-operator function design from an oracle given by the unitary evolution.
arXiv Detail & Related papers (2022-06-06T18:02:30Z)
- Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences [4.65149292714414]
A critical step in designing HDC/VSA solutions is obtaining such representations from the input data.
Here, we propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant to the sequence shift.
The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity.
arXiv Detail & Related papers (2022-01-27T17:41:28Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)