Build your own tensor network library: DMRjulia I. Basic library for the density matrix renormalization group
- URL: http://arxiv.org/abs/2109.03120v1
- Date: Tue, 7 Sep 2021 14:31:47 GMT
- Title: Build your own tensor network library: DMRjulia I. Basic library for the density matrix renormalization group
- Authors: Thomas E. Baker and Martin P. Thompson
- Abstract summary: The focus of this code is on basic operations involved in tensor network computations.
The code is fast enough to be used in research and can be used to make new algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An introduction to the density matrix renormalization group is contained here, including coding examples. The focus of this code is on basic operations involved in tensor network computations, and this forms the foundation of the DMRjulia library. Algorithmic complexity, measurements from the matrix product state, convergence to the ground state, and other relevant features are also discussed. The present document covers the implementation of operations for dense tensors in the Julia language. The code can be used as an educational tool to understand how tensor network computations are done in the context of entanglement renormalization, or as a template for other codes in low-level languages. A comprehensive Supplemental Material is meant to be a "Numerical Recipes"-style introduction to the core functions and a simple implementation of them. The code is fast enough to be used in research and can be used to make new algorithms.
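For a flavour of what the paper walks through, here is a minimal Julia sketch of the core dense-tensor step in DMRG: reshape a two-site tensor into a matrix, take a truncated SVD, and reshape the factors back into two matrix product state tensors. The function name split_two_site and its signature are our own illustration, not DMRjulia's actual API.

    using LinearAlgebra

    # Split a two-site tensor T[left bond, site 1, site 2, right bond]
    # into two MPS tensors with a truncated SVD (hypothetical helper,
    # not DMRjulia's API).
    function split_two_site(T::Array{Float64,4}; maxdim::Int=16)
        Dl, d1, d2, Dr = size(T)
        M = reshape(T, Dl * d1, d2 * Dr)    # rows: (bond, site 1); columns: (site 2, bond)
        F = svd(M)
        m = min(maxdim, length(F.S))        # keep at most maxdim singular values
        A = reshape(F.U[:, 1:m], Dl, d1, m)                        # left MPS tensor
        B = reshape(Diagonal(F.S[1:m]) * F.Vt[1:m, :], m, d2, Dr)  # right MPS tensor
        return A, B
    end

    # Example: bond dimension 4 and physical dimension 2 on each site.
    T = randn(4, 2, 2, 4)
    A, B = split_two_site(T; maxdim=8)

Capping the kept singular values at maxdim is what bounds the cost of a sweep, which is the sort of algorithmic-complexity consideration the paper discusses.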
Related papers
- Distributive Pre-Training of Generative Modeling Using Matrix-Product States (arXiv 2023-06-26)
We consider an alternative training scheme utilizing basic tensor network operations, e.g., summation and compression.
The training algorithm is based on compressing the superposition state constructed from all the training data in product state representation.
We benchmark the algorithm on the MNIST dataset and show reasonable results for generating new images and classification tasks.
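As a concrete (and hypothetical) rendering of that idea in Julia, the sketch below builds product states from bit strings, superposes them, and compresses the sum into a matrix product state by successive truncated SVDs; product_state and to_mps are our own names, not the paper's code.

    using LinearAlgebra

    # Encode a bit string as a product state (a 2^n-component vector).
    function product_state(bits)
        v = [1.0]
        for b in bits
            v = kron(b == 0 ? [1.0, 0.0] : [0.0, 1.0], v)  # earlier sites vary fastest
        end
        return v
    end

    # Compress a state vector into an MPS by peeling off one site at a
    # time with a truncated SVD.
    function to_mps(psi::Vector{Float64}, n::Int; maxdim::Int=8)
        tensors = Array{Float64,3}[]
        M = reshape(psi, 1, :)
        for _ in 1:n-1
            Dl = size(M, 1)
            F = svd(reshape(M, Dl * 2, :))
            m = min(maxdim, length(F.S))
            push!(tensors, reshape(F.U[:, 1:m], Dl, 2, m))
            M = Diagonal(F.S[1:m]) * F.Vt[1:m, :]
        end
        push!(tensors, reshape(M, size(M, 1), 2, 1))
        return tensors
    end

    # Superpose three "training" bit strings (e.g., tiny binarized images).
    data = [[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1]]
    psi = sum(product_state(bits) for bits in data)
    psi = psi ./ norm(psi)
    mps = to_mps(psi, 4)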
- Learning Implicit Feature Alignment Function for Semantic Segmentation (arXiv 2022-06-17)
Implicit Feature Alignment function (IFA) is inspired by the rapidly expanding topic of implicit neural representations.
We show that IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.
Our method can be combined with improvements on various architectures, and it achieves a state-of-the-art computation-accuracy trade-off on common benchmarks.
- Stack operation of tensor networks (arXiv 2022-03-28)
We propose a mathematically rigorous definition for the tensor network stack approach.
We illustrate the main ideas with the matrix product states based machine learning as an example.
- CodeRetriever: Unimodal and Bimodal Contrastive Learning (arXiv 2022-01-26)
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
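For orientation, here is a generic InfoNCE-style contrastive objective over paired embeddings in Julia; this sketches the standard loss family such training builds on, not CodeRetriever's actual implementation, and info_nce with its tau parameter is our own naming.

    using Random

    # InfoNCE-style loss over n paired embeddings (columns of X and Y);
    # the matched pair (i, i) is the positive, all others are negatives.
    function info_nce(X::Matrix{Float64}, Y::Matrix{Float64}; tau::Float64=0.07)
        Xn = X ./ sqrt.(sum(abs2, X; dims=1))   # unit-normalize each column
        Yn = Y ./ sqrt.(sum(abs2, Y; dims=1))
        logits = (Xn' * Yn) ./ tau              # n-by-n cosine-similarity matrix
        n = size(logits, 2)
        loss = 0.0
        for i in 1:n
            row = logits[i, :]
            m = maximum(row)
            loss += (m + log(sum(exp.(row .- m)))) - row[i]  # logsumexp minus positive
        end
        return loss / n
    end

    Random.seed!(0)
    X, Y = randn(32, 8), randn(32, 8)   # e.g., 8 code/text embedding pairs
    @show info_nce(X, Y)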
- DMRjulia: Tensor recipes for entanglement renormalization computations (arXiv 2021-11-29)
Detailed notes on the functions included in the DMRjulia library are included here.
This document presently covers the implementation of the functions in the tensor network library for dense tensors.
- Local tensor-network codes (arXiv 2021-09-24)
We show how to write some topological codes, including the surface code and colour code, as simple tensor-network codes.
We prove that this method is efficient in the case of holographic codes.
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation (arXiv 2021-05-29)
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
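As a toy version of the underlying idea, here is a Julia sketch of a plain cross (CUR-style) approximation with naively chosen indices; the paper's actual contribution, making the cross-approximation differentiable and trainable end to end, is not reproduced here.

    using LinearAlgebra, Random

    # Rebuild a low-rank matrix from a few sampled rows and columns.
    Random.seed!(0)
    A = randn(200, 6) * randn(6, 200)      # an exactly rank-6 "data" matrix
    rows, cols = 1:12, 1:12                # naively sampled indices
    C, R, U = A[:, cols], A[rows, :], A[rows, cols]
    A_cross = C * pinv(U) * R              # touches only the sampled rows/columns
    @show norm(A - A_cross) / norm(A)      # near zero once U captures the full rank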
- Neural networks behave as hash encoders: An empirical study (arXiv 2021-01-14)
The input space of a neural network with ReLU-like activations is partitioned into multiple linear regions.
We demonstrate that this partition exhibits the following encoding properties across a variety of deep learning models.
Simple algorithms, such as K-Means, K-NN, and logistic regression applied to these encodings, can achieve fairly good performance on both training and test data.
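A minimal Julia sketch of the observation (our own illustration, not the paper's experiments): the on/off pattern of a ReLU layer's pre-activations is a binary code naming the linear region an input falls into.

    using Random

    # The sign pattern of W*x + b identifies the linear region of x.
    relu_code(W, b, x) = (W * x .+ b) .> 0   # BitVector over hidden units

    Random.seed!(0)
    W, b = randn(16, 4), randn(16)           # one hidden layer with 16 units
    x1 = randn(4)
    x2 = x1 .+ 1e-3 .* randn(4)              # a small perturbation of x1
    @show relu_code(W, b, x1) == relu_code(W, b, x2)  # nearby inputs usually share a code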
- Fast Few-Shot Classification by Few-Iteration Meta-Learning (arXiv 2020-10-01)
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
- Captum: A unified and generic model interpretability library for PyTorch (arXiv 2020-09-16)
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
This list is automatically generated from the titles and abstracts of the papers in this site.