Phase Detection with Neural Networks: Interpreting the Black Box
- URL: http://arxiv.org/abs/2004.04711v3
- Date: Thu, 12 Nov 2020 13:37:36 GMT
- Title: Phase Detection with Neural Networks: Interpreting the Black Box
- Authors: Anna Dawid, Patrick Huembeli, Michał Tomza, Maciej Lewenstein, Alexandre Dauphin
- Abstract summary: Neural networks (NNs) usually hinder any insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of an NN trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NNs) usually hinder any insight into the reasoning behind
their predictions. We demonstrate how influence functions can unravel the black
box of an NN trained to predict the phases of the one-dimensional extended
spinless Fermi-Hubbard model at half-filling. The results provide strong evidence
that the NN correctly learns an order parameter describing the quantum
transition in this model. We demonstrate that influence functions allow one to
check that the network, trained to recognize known quantum phases, can predict
new, unknown ones within the data set. Moreover, we show that they can guide
physicists in understanding the patterns responsible for the phase transition. This
method requires no a priori knowledge of the order parameter, has no dependence
on the NN's architecture or the underlying physical model, and is therefore
applicable to a broad class of physical models or experimental data.
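The influence-function approach referenced above scores each training sample by how strongly it drives a given test prediction, combining the test-loss gradient with the inverse Hessian of the training loss (in the spirit of Koh and Liang's formulation). Below is a minimal sketch of that idea on a toy logistic-regression "phase classifier" with synthetic data; the data, model, and hyperparameters are illustrative assumptions, not the network or dataset used in the paper.

```python
# Minimal influence-function sketch (Koh & Liang style) on a toy logistic-regression
# "phase classifier". All data, model, and hyperparameter choices are illustrative
# assumptions, not the setup used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D feature for two "phases" (labels 0 and 1), plus a bias column.
n = 200
x0 = rng.normal(-1.0, 0.7, size=n // 2)
x1 = rng.normal(+1.0, 0.7, size=n // 2)
X = np.column_stack([np.concatenate([x0, x1]), np.ones(n)])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

lam = 1e-2  # L2 regularization keeps the Hessian invertible


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Train by plain gradient descent on the regularized cross-entropy.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.5 * (X.T @ (p - y) / n + lam * w)


def grad_example(xi, yi, w):
    """Gradient of the per-example cross-entropy loss at the trained parameters."""
    return (sigmoid(xi @ w) - yi) * xi


# Hessian of the regularized training loss at the trained parameters.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X / n + lam * np.eye(X.shape[1])

# Influence of training point z_i on the loss at a test point z_test:
#   I_up(z_i, z_test) = -grad_test^T H^{-1} grad_i
# Negative values mean upweighting z_i would lower the test loss ("helpful").
x_test, y_test = np.array([0.1, 1.0]), 1.0
v = np.linalg.solve(H, grad_example(x_test, y_test, w))
influence = -np.array([v @ grad_example(X[i], y[i], w) for i in range(n)])

order = np.argsort(influence)
print("most helpful training features:", X[order[:3], 0])
print("most harmful training features:", X[order[-3:], 0])
```

For larger networks the Hessian is never formed explicitly; implicit Hessian-vector products (e.g., conjugate gradients or stochastic approximation) replace the direct solve used in this toy example.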
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$ quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Characterizing out-of-distribution generalization of neural networks: application to the disordered Su-Schrieffer-Heeger model [38.79241114146971]
We show how interpretability methods can increase trust in predictions of a neural network trained to classify quantum phases.
In particular, we show that we can ensure better out-of-distribution generalization in the complex classification problem.
This work is an example of how the systematic use of interpretability methods can improve the performance of NNs in scientific problems.
arXiv Detail & Related papers (2024-06-14T13:24:32Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
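The ShadowNet entry above combines neural networks with classical shadows. As a reference point for the classical-shadow ingredient alone, here is a minimal single-qubit sketch (random Pauli-basis measurements plus the standard inverse-channel reconstruction); the state, observable, and snapshot count are illustrative assumptions, and this is not the ShadowNet protocol itself.

```python
# Minimal single-qubit classical-shadow sketch: estimate <O> for a known state from
# random Pauli-basis measurements. Illustrative only; this is not ShadowNet itself.
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)

# Basis-change unitaries rotating the X, Y, or Z eigenbasis into the computational basis.
UNITARIES = {"X": H, "Y": H @ Sdg, "Z": I2}

# Toy state |psi> = cos(pi/8)|0> + sin(pi/8)|1>, observable O = Pauli X.
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
rho = np.outer(psi, psi.conj())
O = X


def snapshot(rho):
    """One shadow snapshot: pick a random Pauli basis, sample a measurement outcome,
    and apply the single-qubit inverse channel rho_hat = 3 * U^dag |b><b| U - I."""
    U = UNITARIES[rng.choice(["X", "Y", "Z"])]
    probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0.0, None)
    b = rng.choice(2, p=probs / probs.sum())
    ket = np.zeros(2, dtype=complex)
    ket[b] = 1.0
    return 3.0 * U.conj().T @ np.outer(ket, ket.conj()) @ U - I2


n_snapshots = 20000
estimates = [np.real(np.trace(O @ snapshot(rho))) for _ in range(n_snapshots)]

print("shadow estimate of <X>:", np.mean(estimates))
print("exact value of <X>:    ", np.real(psi.conj() @ O @ psi))
```

Averaging the per-snapshot estimates (or using median-of-means for robustness) converges to the exact expectation value as the number of snapshots grows.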
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Analysis of Neural Network Predictions for Entanglement Self-Catalysis [0.0]
We investigate whether distinct models of neural networks can learn how to detect self-catalysis of entanglement.
We also study whether a trained machine can detect another related phenomenon.
arXiv Detail & Related papers (2021-12-29T14:18:45Z)
- Exploring Quantum Perceptron and Quantum Neural Network structures with a teacher-student scheme [0.0]
Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNN) to perform classification tasks.
The aim of this work is to systematically compare different QNN architectures and to evaluate their relative expressive power with a teacher-student scheme.
We focus particularly on a quantum perceptron model inspired by the recent work of Tacchino et al. and compare it to the data re-uploading scheme originally introduced by Pérez-Salinas et al.
arXiv Detail & Related papers (2021-05-04T13:13:52Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks [4.86119220344659]
We show that low frequency dominance of target functions is the key condition for the non-overfitting of NNs.
Through an ideal two-layer NN, we unravel how the detailed microscopic NN training dynamics statistically give rise to an LFP model with quantitative prediction power.
arXiv Detail & Related papers (2021-01-30T10:11:37Z)
- Learning Potentials of Quantum Systems using Deep Neural Networks [6.270305440413689]
NNs can learn classical Hamiltonian mechanics.
Can NNs be endowed with inductive biases through observation, as a means to provide insights into quantum phenomena?
arXiv Detail & Related papers (2020-06-23T20:10:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.