Deep Learning Framework From Scratch Using Numpy
- URL: http://arxiv.org/abs/2011.08461v1
- Date: Tue, 17 Nov 2020 06:28:05 GMT
- Title: Deep Learning Framework From Scratch Using Numpy
- Authors: Andrei Nicolae
- Abstract summary: This work is a rigorous development of a complete and general-purpose deep learning framework from the ground up.
The fundamental components of deep learning are developed from elementary calculus and implemented in a sensible object-oriented approach using only Python and the Numpy library.
Demonstrations of solved problems using the framework, named ArrayFlow, include a computer vision classification task, solving for the shape of a catenary, and a 2nd order differential equation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work is a rigorous development of a complete and general-purpose deep learning framework from the ground up. The fundamental components of deep learning - automatic differentiation and gradient methods of optimizing multivariable scalar functions - are developed from elementary calculus and implemented in a sensible object-oriented approach using only Python and the Numpy library. Demonstrations of solved problems using the framework, named ArrayFlow, include a computer vision classification task, solving for the shape of a catenary, and a 2nd order differential equation.
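The abstract's core idea, reverse-mode automatic differentiation built on operator overloading over NumPy values, can be sketched in a few lines. This is an illustrative toy, not ArrayFlow's actual API: the class name `Tensor`, the `backward` method, and the tuple-of-local-gradients representation are all assumptions chosen for clarity.

```python
import numpy as np

class Tensor:
    """Toy reverse-mode autodiff node (illustrative; not ArrayFlow's real API).

    Each arithmetic operation records its parent nodes and one local-gradient
    function per parent, so the chain rule can be applied backwards later.
    """
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = np.asarray(value, dtype=float)
        self.parents = parents      # upstream Tensor nodes
        self.grad_fns = grad_fns    # local gradients, one per parent
        self.grad = np.zeros_like(self.value)

    def __add__(self, other):
        # d(a+b)/da = 1 and d(a+b)/db = 1, so upstream grads pass through.
        return Tensor(self.value + other.value,
                      parents=(self, other),
                      grad_fns=(lambda g: g, lambda g: g))

    def __mul__(self, other):
        # d(a*b)/da = b and d(a*b)/db = a.
        return Tensor(self.value * other.value,
                      parents=(self, other),
                      grad_fns=(lambda g: g * other.value,
                                lambda g: g * self.value))

    def backward(self, grad=None):
        # Accumulate d(output)/d(self), then push gradients to each parent.
        # A full implementation would traverse a topological order instead
        # of recursing, to avoid revisiting shared interior subgraphs.
        if grad is None:
            grad = np.ones_like(self.value)
        self.grad = self.grad + grad
        for parent, grad_fn in zip(self.parents, self.grad_fns):
            parent.backward(grad_fn(grad))

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x = Tensor(3.0)
y = Tensor(4.0)
f = x * y + x
f.backward()
print(float(x.grad), float(y.grad))  # 5.0 3.0
```

A gradient-descent optimizer of the kind the abstract mentions then reduces to repeatedly rebuilding this graph, calling `backward`, and stepping each leaf's `value` against its `grad`.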
Related papers
- Automated Sizing and Training of Efficient Deep Autoencoders using Second Order Algorithms [0.46040036610482665]
We propose a multi-step training method for generalized linear classifiers.
Validation error is minimized by pruning unnecessary inputs.
Desired outputs are improved via a method similar to the Ho-Kashyap rule.
arXiv Detail & Related papers (2023-08-11T16:48:31Z)
- Oflib: Facilitating Operations with and on Optical Flow Fields in Python [5.936095386978232]
We present a theoretical framework for the characterisation and manipulation of optical flow, i.e. 2D vector fields, in the context of their use in motion estimation algorithms and beyond.
This structured approach is then used as the foundation for an implementation in Python 3, with the fully differentiable PyTorch version oflibpytorch supporting back-propagation as required for deep learning.
We verify the flow composition method empirically and provide a working example for its application to optical flow ground truth in synthetic training data creation.
arXiv Detail & Related papers (2022-10-11T17:28:10Z)
- PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds [33.41204351513122]
PAConv is a generic convolution operation for 3D point cloud processing.
The kernel is built in a data-driven manner, endowing PAConv with more flexibility than 2D convolutions.
Even built on simple networks, our method still approaches or even surpasses the state-of-the-art models.
arXiv Detail & Related papers (2021-03-26T17:52:38Z)
- Composable Learning with Sparse Kernel Representations [110.19179439773578]
We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space.
We improve the sample complexity of this approach by imposing structure on the state-action function through a normalized advantage function.
We demonstrate the performance of this algorithm on learning obstacle-avoidance policies in multiple simulations of a robot equipped with a laser scanner while navigating in a 2D environment.
arXiv Detail & Related papers (2021-03-26T13:58:23Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aim to introduce the same structural capability into deep learning.
Most work on feed-forward networks combining top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- A survey on Kornia: an Open Source Differentiable Computer Vision Library for PyTorch [0.0]
This work presents Kornia, an open source computer vision library built upon a set of differentiable routines and modules that aims to solve generic computer vision problems.
The package uses PyTorch as its main backend, not only for efficiency but also to take advantage of the reverse auto-differentiation engine to define and compute the gradient of complex functions.
arXiv Detail & Related papers (2020-09-21T08:48:28Z)
- MKLpy: a python-based framework for Multiple Kernel Learning [4.670305538969914]
We introduce MKLpy, a python-based framework for Multiple Kernel Learning.
The library provides Multiple Kernel Learning algorithms for classification tasks, mechanisms to compute kernel functions for different data types, and evaluation strategies.
arXiv Detail & Related papers (2020-07-20T10:10:13Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives [55.79741270235602]
We develop a hybrid approach to building deep learning kernels.
We use advanced polyhedral technology to automatically tune the outer loops for performance.
arXiv Detail & Related papers (2020-02-06T08:02:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.