On Machine Learning Knowledge Representation In The Form Of Partially
Unitary Operator. Knowledge Generalizing Operator
- URL: http://arxiv.org/abs/2212.14810v1
- Date: Thu, 22 Dec 2022 06:29:27 GMT
- Title: On Machine Learning Knowledge Representation In The Form Of Partially
Unitary Operator. Knowledge Generalizing Operator
- Authors: Vladislav Gennadievich Malyshkin
- Abstract summary: A new form of ML knowledge representation with high generalization power is developed and implemented numerically.
$\mathcal{U}$ can be considered as an $\mathit{IN}$ to $\mathit{OUT}$ quantum channel.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new form of ML knowledge representation with high generalization power is
developed and implemented numerically. Initial $\mathit{IN}$ attributes and
$\mathit{OUT}$ class label are transformed into the corresponding Hilbert
spaces by considering localized wavefunctions. A partially unitary operator
optimally converting a state from $\mathit{IN}$ Hilbert space into
$\mathit{OUT}$ Hilbert space is then built from an optimization problem of
transferring the maximal possible probability from $\mathit{IN}$ to $\mathit{OUT}$;
this leads to the formulation of a new algebraic problem. The constructed Knowledge
Generalizing Operator $\mathcal{U}$ can be considered as an $\mathit{IN}$ to
$\mathit{OUT}$ quantum channel; it is a partially unitary rectangular matrix of
the dimension $\mathrm{dim}(\mathit{OUT}) \times \mathrm{dim}(\mathit{IN})$
transforming operators as $A^{\mathit{OUT}}=\mathcal{U} A^{\mathit{IN}}
\mathcal{U}^{\dagger}$. Whereas only operator $\mathcal{U}$ projections squared
are observable
$\left\langle\mathit{OUT}|\mathcal{U}|\mathit{IN}\right\rangle^2$
(probabilities), the fundamental equation is formulated for the operator
$\mathcal{U}$ itself. This is the reason for the high generalization power of the
approach; the situation is the same as for the Schr\"{o}dinger equation: we can
only measure $\psi^2$, but the equation is written for $\psi$ itself.
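The operator transform $A^{\mathit{OUT}}=\mathcal{U} A^{\mathit{IN}} \mathcal{U}^{\dagger}$ with a partially unitary rectangular $\mathcal{U}$ can be illustrated numerically. The sketch below (a hedged example, not the paper's optimization algorithm) builds a random matrix satisfying the partial-unitarity constraint $\mathcal{U}\mathcal{U}^{\dagger}=I_{\mathit{OUT}}$ via QR decomposition, applies the transform to a Hermitian $\mathit{IN}$-space operator, and shows that a normalized $\mathit{IN}$ state transfers probability at most 1 to $\mathit{OUT}$; the paper's operator is instead obtained by maximizing this transferred probability:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_in, dim_out = 5, 3  # U has dimension dim(OUT) x dim(IN)

# A *random* partially unitary U (not the paper's optimized operator):
# reduced QR gives Q with orthonormal columns, so U = Q^dagger has
# orthonormal rows and satisfies U U^dagger = I_OUT.
M = rng.normal(size=(dim_in, dim_out)) + 1j * rng.normal(size=(dim_in, dim_out))
Q, _ = np.linalg.qr(M)   # Q: dim_in x dim_out, orthonormal columns
U = Q.conj().T           # U: dim_out x dim_in, partially unitary
assert np.allclose(U @ U.conj().T, np.eye(dim_out))

# Operator transform A_OUT = U A_IN U^dagger preserves Hermiticity.
A_in = rng.normal(size=(dim_in, dim_in))
A_in = (A_in + A_in.T) / 2
A_out = U @ A_in @ U.conj().T
assert np.allclose(A_out, A_out.conj().T)

# A normalized IN state maps to an OUT state with total probability <= 1,
# since U^dagger U is a projector in the IN space.
psi = rng.normal(size=dim_in)
psi /= np.linalg.norm(psi)
phi = U @ psi
prob = float(np.vdot(phi, phi).real)
print(prob)  # transferred probability, at most 1
```

Because `U.conj().T @ U` is only a projector (not the identity) when `dim_out < dim_in`, part of the probability is generally lost in the channel; the Knowledge Generalizing Operator is the $\mathcal{U}$ that minimizes this loss over the training data.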
Related papers
- Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise [38.551072383777594]
We study the problem of learning a single neuron with respect to the $L_2$ loss in the presence of adversarial distribution shifts.
A new algorithm is developed to approximate the optimal vector for the squared loss with respect to the worst distribution within $\chi^2$-divergence of $\mathcal{P}_0$.
arXiv Detail & Related papers (2024-11-11T03:43:52Z) - The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Partially Unitary Learning [0.0]
An optimal mapping between Hilbert spaces $\mathit{IN}$ of $\left|\psi\right\rangle$ and $\mathit{OUT}$ of $\left|\phi\right\rangle$ is presented.
An iterative algorithm for finding the global maximum of this optimization problem is developed.
arXiv Detail & Related papers (2024-05-16T17:13:55Z) - Towards verifications of Krylov complexity [0.0]
I present the exact and explicit expressions of the moments $\mu_m$ for 16 quantum mechanical systems which are exactly solvable both in the Schr\"{o}dinger and Heisenberg pictures.
arXiv Detail & Related papers (2024-03-11T02:57:08Z) - Provably learning a multi-head attention layer [55.2904547651831]
Multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs [4.130591018565202]
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$.
We do not focus on solving $mathsfLWE$ but on the task of sampling instances.
Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution.
arXiv Detail & Related papers (2024-01-08T10:55:41Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\,\epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Uncertainties in Quantum Measurements: A Quantum Tomography [52.77024349608834]
The observables associated with a quantum system $S$ form a non-commutative algebra $\mathcal{A}_S$.
It is assumed that a density matrix $\rho$ can be determined from the expectation values of observables.
Abelian algebras do not have inner automorphisms, so the measurement apparatus can determine mean values of observables.
arXiv Detail & Related papers (2021-12-14T16:29:53Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and
ReLUs under Gaussian Marginals [49.60752558064027]
We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals.
Our lower bounds provide strong evidence that current upper bounds for these tasks are essentially best possible.
arXiv Detail & Related papers (2020-06-29T17:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.