HyperNOs: Automated and Parallel Library for Neural Operators Research
- URL: http://arxiv.org/abs/2503.18087v1
- Date: Sun, 23 Mar 2025 14:39:58 GMT
- Title: HyperNOs: Automated and Parallel Library for Neural Operators Research
- Authors: Massimiliano Ghiotto
- Abstract summary: HyperNOs is a PyTorch library designed to streamline and automate the process of exploring neural operators. HyperNOs takes advantage of state-of-the-art optimization algorithms and parallel computing. The library is designed to be easy to use with the provided models and datasets, but also to be easily extended to new datasets and custom neural operator architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces HyperNOs, a PyTorch library designed to streamline and automate the exploration of neural operators, with a particular focus on exhaustive hyperparameter optimization. HyperNOs leverages the state-of-the-art optimization algorithms and parallel computing implemented in the Ray Tune library to efficiently explore the hyperparameter space of neural operators. We also implement many useful functionalities for studying neural operators through a user-friendly interface, such as training a model with a fixed number of parameters or training on multiple datasets at different resolutions. We integrate Fourier neural operators and convolutional neural operators into the library, achieving state-of-the-art results on many representative benchmarks and demonstrating the ability of HyperNOs to handle real datasets and modern architectures. The library is designed to be easy to use with the provided models and datasets, and to be easily extended to new datasets and custom neural operator architectures.
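Since the abstract centers on driving Ray Tune over a neural operator's hyperparameter space, a minimal sketch of that pattern may help. This is not the HyperNOs API: the toy `SpectralStandIn` model, the `train_fn` trainable, and the search space are hypothetical placeholders, assuming Ray 2.x and PyTorch.

```python
# Minimal sketch (NOT the HyperNOs API): a Ray Tune search over the
# hyperparameters of a toy spectral model standing in for a neural
# operator. All names below are hypothetical.
import torch
from ray import tune

class SpectralStandIn(torch.nn.Module):
    """Toy 1D spectral layer: learnable complex weights on low Fourier modes."""
    def __init__(self, modes: int, width: int):
        super().__init__()
        self.modes = modes
        self.lift = torch.nn.Linear(1, width)
        self.weights = torch.nn.Parameter(
            0.02 * torch.randn(width, modes, dtype=torch.cfloat))
        self.proj = torch.nn.Linear(width, 1)

    def forward(self, x):                        # x: (batch, n_points, 1)
        v = self.lift(x).transpose(1, 2)         # (batch, width, n_points)
        v_hat = torch.fft.rfft(v)                # Fourier coefficients
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = v_hat[..., :self.modes] * self.weights
        v = torch.fft.irfft(out_hat, n=x.shape[1])
        return self.proj(v.transpose(1, 2))

def train_fn(config):
    """One Ray Tune trial: fit the toy model and report the final loss."""
    torch.manual_seed(0)
    x = torch.linspace(0, 1, 64).view(1, -1, 1)
    y = torch.sin(2 * torch.pi * x)              # toy target function
    model = SpectralStandIn(config["modes"], config["width"])
    opt = torch.optim.Adam(model.parameters(), lr=config["lr"])
    for _ in range(200):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        opt.step()
    return {"loss": loss.item()}                 # reported as the trial result

search_space = {
    "modes": tune.choice([4, 8, 16]),
    "width": tune.choice([16, 32]),
    "lr": tune.loguniform(1e-4, 1e-2),
}
tuner = tune.Tuner(train_fn, param_space=search_space,
                   tune_config=tune.TuneConfig(num_samples=8))
results = tuner.fit()
print(results.get_best_result(metric="loss", mode="min").config)
```

Each configuration runs as an independent trial, which is the mechanism behind the parallel exploration the abstract describes; a scheduler such as ASHA could be added via `TuneConfig` to prune weak trials early.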
Related papers
- A Library for Learning Neural Operators [77.16483961863808]
We present NeuralOperator, an open-source Python library for operator learning. Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces. Built on top of PyTorch, NeuralOperator provides all the tools for training and deploying neural operator models.
arXiv Detail & Related papers (2024-12-13T18:49:37Z)
- Convolutional Neural Operators for robust and accurate learning of PDEs [11.562748612983956]
We present novel adaptations for convolutional neural networks to process functions as inputs and outputs.
The resulting architecture is termed convolutional neural operators (CNOs).
We prove a universality theorem to show that CNOs can approximate operators arising in PDEs to desired accuracy.
arXiv Detail & Related papers (2023-02-02T15:54:45Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- FastONN -- Python based open-source GPU implementation for Operational Neural Networks [25.838282412957675]
This work introduces a fast GPU-enabled library for training operational neural networks, FastONN.
FastONN is based on a novel vectorized formulation of the operational neurons.
Bundled auxiliary modules offer interfaces for performance tracking and checkpointing across different data partitions and customized metrics.
arXiv Detail & Related papers (2020-06-03T13:33:35Z)
- Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
Operational neural networks (ONNs) are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
arXiv Detail & Related papers (2020-04-24T14:37:56Z)
- PHOTONAI -- A Python API for Rapid Machine Learning Model Development [2.414341608751139]
PHOTONAI is a high-level Python API designed to simplify and accelerate machine learning model development.
It functions as a unifying framework allowing the user to easily access and combine algorithms from different toolboxes into custom algorithm sequences.
arXiv Detail & Related papers (2020-02-13T10:33:05Z)