tn4ml: Tensor Network Training and Customization for Machine Learning
- URL: http://arxiv.org/abs/2502.13090v1
- Date: Tue, 18 Feb 2025 17:57:29 GMT
- Title: tn4ml: Tensor Network Training and Customization for Machine Learning
- Authors: Ema Puljak, Sergio Sanchez-Ramirez, Sergi Masot-Llima, Jofre Vallès-Muns, Artur Garcia-Saez, Maurizio Pierini
- Abstract summary: tn4ml is a novel library designed to seamlessly integrate Tensor Networks into optimization pipelines for Machine Learning tasks.
Inspired by existing Machine Learning frameworks, the library offers a user-friendly structure with modules for data embedding, objective function definition, and model training.
- Score: 0.8799686507544172
- Abstract: Tensor Networks have emerged as a prominent alternative to neural networks for addressing Machine Learning challenges in foundational sciences, paving the way for their applications to real-life problems. This paper introduces tn4ml, a novel library designed to seamlessly integrate Tensor Networks into optimization pipelines for Machine Learning tasks. Inspired by existing Machine Learning frameworks, the library offers a user-friendly structure with modules for data embedding, objective function definition, and model training using diverse optimization strategies. We demonstrate its versatility through two examples: supervised learning on tabular data and unsupervised learning on an image dataset. Additionally, we analyze how customizing the parts of the Machine Learning pipeline for Tensor Networks influences performance metrics.
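To make the pipeline described in the abstract concrete, below is a minimal sketch of the same three stages (data embedding, objective function definition, gradient-based training) for a matrix product state model on tabular data. It is written in plain JAX and deliberately does not use tn4ml's actual API; every function name here (embed, init_mps, mps_output, loss) and the choice of trigonometric embedding are illustrative assumptions.
```python
# A minimal, self-contained sketch of a tensor-network ML pipeline:
# embed data, define an objective over an MPS, train with gradients.
# NOTE: illustrative assumption only, not tn4ml's actual API.
import jax
import jax.numpy as jnp

def embed(x):
    # Trigonometric feature map common in tensor-network ML:
    # phi(x) = [cos(pi*x/2), sin(pi*x/2)] for features scaled to [0, 1].
    return jnp.stack([jnp.cos(jnp.pi * x / 2), jnp.sin(jnp.pi * x / 2)], axis=-1)

def init_mps(n_sites, bond_dim, key, phys_dim=2):
    # Random MPS cores with open boundaries (bond dimension 1 at the edges).
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    keys = jax.random.split(key, n_sites)
    return [0.5 * jax.random.normal(k, (dims[i], phys_dim, dims[i + 1]))
            for i, k in enumerate(keys)]

def mps_output(cores, phi):
    # Contract the MPS with one embedded sample, left to right -> scalar score.
    v = jnp.ones((1,))
    for j, core in enumerate(cores):
        # v[b] * core[b, p, c] * phi[j, p] -> v[c]
        v = jnp.einsum("b,bpc,p->c", v, core, phi[j])
    return v[0]

def loss(cores, x_emb, y):
    # A simple mean-squared-error objective; any differentiable loss works.
    scores = jax.vmap(lambda phi: mps_output(cores, phi))(x_emb)
    return jnp.mean((scores - y) ** 2)

# Toy tabular data: 32 samples, 8 features in [0, 1]; hypothetical target.
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (32, 8))
y = x.mean(axis=1)
x_emb = embed(x)

cores = init_mps(n_sites=8, bond_dim=4, key=key)
grad_fn = jax.grad(loss)
for step in range(200):  # plain gradient descent as the optimization strategy
    grads = grad_fn(cores, x_emb, y)
    cores = [c - 0.05 * g for c, g in zip(cores, grads)]
```
The sketch only mirrors the structure the abstract names: the embedding, objective, and training-strategy steps correspond to tn4ml's dedicated modules, which offer more choices for each stage than this tiny regression example.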
Related papers
- Feature Network Methods in Machine Learning and Applications [0.0]
A machine learning (ML) feature network is a graph that connects ML features in learning tasks based on their similarity.
We provide an example of a deep tree-structured feature network, where hierarchical connections are formed through feature clustering and feed-forward learning.
arXiv Detail & Related papers (2024-01-10T01:57:12Z)
- netFound: Foundation Model for Network Security [10.84029318509573]
This paper introduces a novel transformer-based network foundation model, netFound.
We employ self-supervised learning techniques on abundant, unlabeled network telemetry data for pre-training.
Our results demonstrate that netFound effectively captures the hidden networking context in production settings.
arXiv Detail & Related papers (2023-10-25T22:04:57Z)
- A Cloud-based Machine Learning Pipeline for the Efficient Extraction of Insights from Customer Reviews [0.0]
We present a cloud-based system that can extract insights from customer reviews using machine learning methods integrated into a pipeline.
For topic modeling, our composite model uses transformer-based neural networks designed for natural language processing.
Our system achieves better results than existing topic modeling and keyword extraction solutions for this task.
arXiv Detail & Related papers (2023-06-13T14:07:52Z)
- PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition model, whose details are filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Learning Purified Feature Representations from Task-irrelevant Labels [18.967445416679624]
We propose a novel learning framework called PurifiedLearning to exploit task-irrelevant features extracted from task-irrelevant labels.
Our work is built on solid theoretical analysis and extensive experiments, which demonstrate the effectiveness of PurifiedLearning.
arXiv Detail & Related papers (2021-02-22T12:50:49Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Exploring Flip Flop memories and beyond: training recurrent neural networks with key insights [0.0]
We study the implementation of a temporal processing task, specifically a 3-bit Flip Flop memory.
The obtained networks are meticulously analyzed to elucidate dynamics, aided by an array of visualization and analysis tools.
arXiv Detail & Related papers (2020-10-15T16:25:29Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup that allows a neural network to learn both its size and topology over the course of gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.