cito: An R package for training neural networks using torch
- URL: http://arxiv.org/abs/2303.09599v3
- Date: Wed, 24 Jan 2024 13:24:48 GMT
- Title: cito: An R package for training neural networks using torch
- Authors: Christian Amesoeder, Florian Hartig, Maximilian Pichler
- Abstract summary: 'cito' is a user-friendly R package for deep learning (DL) applications.
It allows specifying DNNs in the familiar formula syntax used by many R packages.
'cito' includes many user-friendly functions for model plotting and analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have become a central method in ecology. Most
current deep learning (DL) applications rely on one of the major deep learning
frameworks, in particular Torch or TensorFlow, to build and train DNNs. Using
these frameworks, however, requires substantially more experience and time than
fitting typical regression functions in the R environment. Here, we present 'cito', a
user-friendly R package for DL that allows specifying DNNs in the familiar
formula syntax used by many R packages. To fit the models, 'cito' uses 'torch',
taking advantage of the numerically optimized torch library, including the
ability to switch between training models on the CPU or the graphics processing
unit (GPU), which makes it possible to train large DNNs efficiently. Moreover, 'cito'
includes many user-friendly functions for model plotting and analysis,
including optional confidence intervals (CIs) based on bootstraps for
predictions and explainable AI (xAI) metrics for effect sizes and variable
importance with CIs and p-values. To showcase a typical analysis pipeline using
'cito', including its built-in xAI features to explore the trained DNN, we
build a species distribution model of the African elephant. We hope that by
providing a user-friendly R framework to specify, deploy, and interpret DNNs,
'cito' will make this interesting model class more accessible to ecological
data analysis. A stable version of 'cito' can be installed from the
comprehensive R archive network (CRAN).
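To illustrate the workflow the abstract describes, here is a minimal sketch using cito's dnn() function. The formula interface and the hidden, loss, epochs, bootstrap, and device arguments follow the package documentation, but the data set and variable names are invented placeholders, not the elephant data from the paper.

    # Minimal cito sketch: fit, inspect, and predict (placeholder data)
    library(cito)

    # Stand-in for a presence/absence data set with two covariates
    d <- data.frame(
      present = rbinom(200, 1, 0.5),
      temp    = rnorm(200),
      precip  = rnorm(200)
    )

    # Familiar R formula syntax; training is delegated to 'torch'.
    # bootstrap = 20L enables bootstrap-based confidence intervals;
    # set device = "cuda" to train on the GPU instead of the CPU.
    m <- dnn(present ~ temp + precip,
             data      = d,
             hidden    = c(50L, 50L),
             loss      = "binomial",
             epochs    = 100L,
             bootstrap = 20L,
             device    = "cpu")

    summary(m)              # xAI output: variable importance and effect sizes
    predict(m, newdata = d) # predictions, here for the training data

The same fitted object also works with cito's plotting and xAI helper functions mentioned in the abstract.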
Related papers
- Simulation-based inference with the Python Package sbijax [0.7499722271664147]
sbijax is a Python package that implements a wide variety of state-of-the-art methods in neural simulation-based inference.
The package provides functionality for approximate Bayesian computation, for computing model diagnostics, and for automatically estimating summary statistics.
arXiv Detail & Related papers (2024-09-28T18:47:13Z)
- A Model-based GNN for Learning Precoding [37.060397377445504]
Learning precoding policies with neural networks enables low complexity online implementation, robustness to channel impairments, and joint optimization with channel acquisition.
Existing neural networks suffer from high training complexity and poor generalization ability when they are used to learn to optimize precoding for mitigating multi-user interference.
We propose a graph neural network (GNN) to learn precoding policies by harnessing both the mathematical model and the property of the policies.
arXiv Detail & Related papers (2022-12-01T20:40:38Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- PyRCN: Exploration and Application of ESNs [0.0]
Echo State Networks (ESNs) are capable of solving temporal tasks, but with a substantially easier training paradigm based on linear regression.
This paper aims to facilitate the understanding of ESNs in theory and practice.
The paper introduces the Python toolbox PyRCN for developing, training and analyzing ESNs on arbitrarily large datasets.
arXiv Detail & Related papers (2021-03-08T15:00:48Z)
- FuncNN: An R Package to Fit Deep Neural Networks Using Generalized Input Spaces [0.0]
The functional neural network (FuncNN) library is the first such package in any programming language.
This paper introduces functions that provide users an avenue to easily build models, generate predictions, and run cross-validations.
arXiv Detail & Related papers (2020-09-18T22:32:29Z)
- A Fortran-Keras Deep Learning Bridge for Scientific Computing [6.768544973019004]
We introduce a software library, the Fortran-Keras Bridge (FKB).
The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles.
The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation.
arXiv Detail & Related papers (2020-04-14T15:10:09Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.