Differentiable programming: Generalization, characterization and
limitations of deep learning
- URL: http://arxiv.org/abs/2205.06898v1
- Date: Fri, 13 May 2022 21:23:57 GMT
- Authors: Adrián Hernández, Gilles Millerioux and José M. Amigó
- Abstract summary: We define differentiable programming, as well as specify some program characteristics that allow us to incorporate the structure of the problem in a differentiable program.
We analyze different types of differentiable programs, from more general to more specific, and evaluate, for a specific problem on a graph dataset, how several differentiable programs capture its structure and knowledge.
- Score: 0.47791962198275073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep learning models have been successfully applied to several cognitive tasks. Originally inspired by neuroscience, these models are specific examples of differentiable programs. In this paper we define and motivate differentiable programming, and specify some program characteristics that allow us to incorporate the structure of the problem into a differentiable program. We analyze different types of differentiable programs, from more general to more specific, and evaluate, for a specific problem on a graph dataset, how its structure and knowledge are captured by several differentiable programs with those characteristics. Finally, we discuss some inherent limitations of deep learning and differentiable programs, which are key challenges in advancing artificial intelligence, and analyze possible solutions.
Related papers
- The Elements of Differentiable Programming [14.197724178748176]
Differentiable programming enables end-to-end differentiation of complex computer programs.
Differentiable programming builds upon several areas of computer science and applied mathematics.
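As an illustration of what end-to-end differentiation through program constructs looks like, consider this hedged JAX sketch of a program containing a loop and a branch; the function and its dynamics are ours, not from the survey.
```python
import jax
import jax.numpy as jnp

def iterate(theta, x0, n=10):
    # A small program: a loop whose body contains a branch. jnp.where
    # keeps the branch differentiable in theta and x.
    def step(x, _):
        x_next = jnp.where(x > 0, theta * x, -0.5 * x)
        return x_next, None
    x_final, _ = jax.lax.scan(step, x0, None, length=n)
    return x_final

# Reverse-mode autodiff through the entire loop, w.r.t. the parameter.
dtheta = jax.grad(iterate)(0.9, 1.0)
```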
arXiv Detail & Related papers (2024-03-21T17:55:16Z)
- $\omega$PAP Spaces: Reasoning Denotationally About Higher-Order, Recursive Probabilistic and Differentiable Programs [64.25762042361839]
$\omega$PAP spaces are spaces for reasoning denotationally about expressive differentiable and probabilistic programming languages.
Our semantics is general enough to assign meanings to most practical probabilistic and differentiable programs.
We establish the almost-everywhere differentiability of probabilistic programs' trace density functions.
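As a toy illustration of the almost-everywhere claim (our example, not the paper's measure-theoretic formalism): a log-density defined by a branchy program is non-differentiable only on the measure-zero set where the branch switches.
```python
import jax
import jax.numpy as jnp

def log_density(x):
    # Piecewise program: a different Gaussian scale on each branch.
    scale = jnp.where(x < 0.0, 1.0, 2.0)
    return -0.5 * (x / scale) ** 2 - jnp.log(scale)

grad_ld = jax.grad(log_density)
print(grad_ld(-1.0), grad_ld(1.0))  # well-defined away from the kink at 0
```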
arXiv Detail & Related papers (2023-02-21T12:50:05Z)
- Symbolic Regression for Space Applications: Differentiable Cartesian Genetic Programming Powered by Multi-objective Memetic Algorithms [10.191757341020216]
We propose a new multi-objective memetic algorithm that exploits a differentiable Cartesian Genetic Programming encoding to learn constants during evolutionary loops.
We show that this approach is competitive with, or outperforms, machine-learned black-box regression models and hand-engineered fits for two applications from space: the Mars Express thermal power estimation and the determination of the age of stars by gyrochronology.
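A rough sketch of the memetic split between evolutionary structure search and gradient-based constant fitting; the expression class and training loop below are simplified stand-ins, not the paper's dCGP encoding.
```python
import jax
import jax.numpy as jnp

xs = jnp.linspace(0.0, 1.0, 50)
ys = 2.0 * jnp.sin(3.0 * xs)             # synthetic target data

def expr(c, x):                          # candidate expression c0*sin(c1*x)
    return c[0] * jnp.sin(c[1] * x)

def mse(c):
    return jnp.mean((expr(c, xs) - ys) ** 2)

c = jnp.array([1.0, 1.0])
for _ in range(200):                     # "local" step: tune the constants
    c = c - 0.1 * jax.grad(mse)(c)
# An outer evolutionary loop would mutate the expression's structure and
# rank candidates by mse after this gradient refinement.
```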
arXiv Detail & Related papers (2022-06-13T14:44:15Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
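One well-known bridge between the two disciplines is deep unfolding, where iterations of a model-based solver become trainable layers; the unrolled-ISTA sketch below is our illustration of that pattern, not necessarily the paper's formulation.
```python
import jax
import jax.numpy as jnp

def ista_unrolled(params, A, y, n_layers=5):
    # Each "layer" is one ISTA iteration for sparse recovery from y = Ax,
    # with per-layer step size and soft-threshold made learnable.
    x = jnp.zeros(A.shape[1])
    for t in range(n_layers):
        r = x - params["step"][t] * A.T @ (A @ x - y)   # gradient step
        x = jnp.sign(r) * jnp.maximum(jnp.abs(r) - params["thr"][t], 0.0)
    return x

params = {"step": jnp.full(5, 0.1), "thr": jnp.full(5, 0.05)}
A = jax.random.normal(jax.random.PRNGKey(0), (6, 10))
y = A @ jnp.zeros(10).at[2].set(1.0)                    # sparse ground truth
x_hat = ista_unrolled(params, A, y)
# params can now be trained end-to-end with jax.grad while the solver's
# mathematical structure is preserved.
```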
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Gradients are Not All You Need [28.29420710601308]
We discuss a common chaos-based failure mode which appears in a variety of differentiable circumstances.
We trace this failure to the spectrum of the Jacobian of the system under study, and provide criteria for when a practitioner might expect this failure to spoil their differentiation-based optimization algorithms.
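The phenomenon is easy to reproduce on a textbook chaotic system. In this sketch (our example), reverse-mode differentiation through iterations of the logistic map yields an exploding gradient, because the gradient is a product of per-step Jacobians whose typical magnitude exceeds one.
```python
import jax

def final_state(x0, r=3.9, n=50):
    # Iterate the logistic map; r = 3.9 is in the chaotic regime.
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

g = jax.grad(final_state)(0.2)
# g compounds 50 per-step Jacobians, each typically > 1 in magnitude,
# so it is astronomically large and useless for gradient descent.
print(g)
```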
arXiv Detail & Related papers (2021-11-10T16:51:04Z)
- Differentiable Spline Approximations [48.10988598845873]
Differentiable programming has significantly enhanced the scope of machine learning.
Standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable.
We show that leveraging the redesigned Jacobian of a spline routine as a differentiable "layer" in predictive models leads to improved performance in diverse applications.
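The general mechanism at work, shown here in a hedged JAX sketch with a rounding example of our own choosing (not the paper's spline routine), is registering a hand-designed Jacobian for a routine autodiff cannot handle, so that it can serve as a differentiable "layer".
```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def snap(x):
    return jnp.round(x)          # piecewise-constant: autodiff gives zero

def snap_fwd(x):
    return snap(x), None         # no residuals needed

def snap_bwd(_, g):
    return (g,)                  # hand-designed Jacobian: pass-through

snap.defvjp(snap_fwd, snap_bwd)

# The non-smooth routine now behaves as a differentiable "layer":
print(jax.grad(lambda x: snap(x) ** 2)(1.7))  # 4.0 instead of 0.0
```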
arXiv Detail & Related papers (2021-10-04T16:04:46Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
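A compact sketch of the search framing (our simplification of the paper's method): partial programs are nodes in a weighted graph, and the training loss of a neural relaxation substituted for a node's unfinished parts serves as an admissible cost-to-go estimate, since the relaxation can do at least as well as any symbolic completion.
```python
import heapq

def a_star(start, expand, cost, heuristic, is_complete):
    # Standard best-first search over program derivations. The tiebreak
    # counter keeps heap entries comparable when f and g values tie.
    frontier = [(heuristic(start), 0.0, 0, start)]
    tiebreak = 1
    while frontier:
        f, g, _, node = heapq.heappop(frontier)
        if is_complete(node):
            return node
        for child in expand(node):
            g2 = g + cost(node, child)
            heapq.heappush(frontier, (g2 + heuristic(child), g2, tiebreak, child))
            tiebreak += 1
    return None
# heuristic(node) would train a small network in place of the node's open
# holes and return its loss -- a lower bound on any completion's cost.
```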
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
- On the Nature of Programming Exercises [0.0]
It is essential to understand that the nature of a programming exercise is an important factor in successful and consistent learning.
This paper explores different approaches to the creation of programming exercises.
arXiv Detail & Related papers (2020-06-25T15:22:26Z)
- Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a "fixed depth" for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model, along with the stopping policy, improves performance on a diverse set of tasks.
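A hedged sketch of the adaptive-depth idea: a shared block is applied repeatedly while a learned stopping policy emits a halting probability at each step; the layer sizes and the inference-time thresholding rule below are our assumptions, not the paper's exact architecture.
```python
import jax
import jax.numpy as jnp

def adaptive_depth(params, x, max_depth=10, threshold=0.5):
    h = x
    for t in range(max_depth):
        h = jnp.tanh(params["W"] @ h + params["b"])      # shared block
        p_halt = jax.nn.sigmoid(params["w_stop"] @ h)    # stopping policy
        if p_halt > threshold:                           # input-dependent depth
            break
    return h

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "W": 0.3 * jax.random.normal(k1, (8, 8)),
    "b": jnp.zeros(8),
    "w_stop": jax.random.normal(k2, (8,)),
}
out = adaptive_depth(params, jnp.ones(8))
# Training would replace the hard break with an expected (soft) depth so
# the stopping policy itself remains differentiable.
```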
arXiv Detail & Related papers (2020-06-09T07:22:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.