Differentiable Spline Approximations
- URL: http://arxiv.org/abs/2110.01532v1
- Date: Mon, 4 Oct 2021 16:04:46 GMT
- Title: Differentiable Spline Approximations
- Authors: Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit
Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy,
Chinmay Hegde
- Abstract summary: Differentiable programming has significantly enhanced the scope of machine learning.
Standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable.
Leveraging the (weak) Jacobian of such spline functions in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications.
- Score: 48.10988598845873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paradigm of differentiable programming has significantly enhanced the
scope of machine learning via the judicious use of gradient-based optimization.
However, standard differentiable programming methods (such as autodiff)
typically require that the machine learning models be differentiable, limiting
their applicability. Our goal in this paper is to use a new, principled
approach to extend gradient-based optimization to functions well modeled by
splines, which encompass a large family of piecewise polynomial models. We
derive the form of the (weak) Jacobian of such functions and show that it
exhibits a block-sparse structure that can be computed implicitly and
efficiently. Overall, we show that leveraging this redesigned Jacobian in the
form of a differentiable "layer" in predictive models leads to improved
performance in diverse applications such as image segmentation, 3D point cloud
reconstruction, and finite element analysis.
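The abstract describes the key mechanism at a high level, so a small illustration may help. Below is a minimal, hypothetical PyTorch sketch of such a differentiable spline "layer" for the piecewise-linear special case; the class name and all implementation details are assumptions, not the authors' code. The point it illustrates is that the (weak) Jacobian is block-sparse, with at most two nonzeros per query point, so the backward pass can apply it implicitly with a scatter-add instead of materializing a dense matrix.

```python
import torch

class PiecewiseLinearSpline(torch.autograd.Function):
    """Hypothetical spline 'layer': evaluates a 1D piecewise-linear
    spline and backpropagates through its block-sparse (weak) Jacobian
    without ever materializing it as a dense matrix."""

    @staticmethod
    def forward(ctx, x, knots, coeffs):
        # locate the knot interval containing each query point
        xc = torch.clamp(x, min=knots[0], max=knots[-1])
        idx = torch.searchsorted(knots, xc) - 1
        idx = idx.clamp(0, knots.numel() - 2)
        t0, t1 = knots[idx], knots[idx + 1]
        w = (xc - t0) / (t1 - t0)            # local coordinate in [0, 1]
        y = (1 - w) * coeffs[idx] + w * coeffs[idx + 1]
        ctx.save_for_backward(idx, w, t0, t1, coeffs)
        ctx.n_coeffs = coeffs.numel()
        return y

    @staticmethod
    def backward(ctx, grad_out):
        idx, w, t0, t1, coeffs = ctx.saved_tensors
        # weak derivative w.r.t. x: the slope of the active segment
        grad_x = grad_out * (coeffs[idx + 1] - coeffs[idx]) / (t1 - t0)
        # the Jacobian w.r.t. coeffs has two nonzeros per query point,
        # so a scatter-add applies it implicitly and efficiently
        grad_c = torch.zeros(ctx.n_coeffs, dtype=grad_out.dtype,
                             device=grad_out.device)
        grad_c.index_add_(0, idx, grad_out * (1 - w))
        grad_c.index_add_(0, idx + 1, grad_out * w)
        return grad_x, None, grad_c

knots = torch.linspace(0.0, 1.0, 8)           # fixed, non-learnable knots
coeffs = torch.randn(8, requires_grad=True)   # learnable control values
x = torch.rand(32, requires_grad=True)        # query locations
y = PiecewiseLinearSpline.apply(x, knots, coeffs)
y.sum().backward()                            # gradients reach x and coeffs
```

The same pattern extends to higher-degree B-splines, where each output depends on a small fixed number of control points; that locality is what gives the Jacobian its block-sparse structure.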
Related papers
- Ensemble architecture in polyp segmentation [0.0]
This study explores semantic segmentation architectures and evaluates models that excel in polyp segmentation.
We present an integrated framework that harnesses the advantages of different models to attain an optimal outcome.
arXiv Detail & Related papers (2024-08-14T02:57:38Z)
- Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation [0.9208007322096533]
Piecewise Polynomials (PPs) are utilized in several engineering disciplines, like trajectory planning, to approximate position profiles given in the form of a set of points.
arXiv Detail & Related papers (2024-03-13T14:34:34Z)
- A differentiable programming framework for spin models [0.0]
We introduce a novel framework for simulating spin models using differentiable programming.
We focus on three distinct spin systems: the Ising model, the Potts model, and the Cellular Potts model.
arXiv Detail & Related papers (2023-04-04T13:04:21Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined canonicalization functions.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims to optimize decision-making strategies using historical data, has been extensively applied in real-life settings.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show that offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to create new ones easily (a generic sketch of the underlying idea follows this entry).
arXiv Detail & Related papers (2021-05-31T17:45:58Z)
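As background for the entry above: the core principle is the implicit function theorem, which lets one differentiate through the solution of an equation without unrolling the solver. The following is a generic, self-contained PyTorch sketch of that idea, not the paper's API; the names `solve_root` and `ImplicitRoot` and the choice of equation are illustrative assumptions.

```python
import torch

def solve_root(theta, iters=50):
    """Solve g(x, theta) = x**3 + x - theta = 0 by Newton's method;
    treated as a black box (no gradients tracked through the loop)."""
    x = torch.zeros_like(theta)
    for _ in range(iters):
        x = x - (x**3 + x - theta) / (3 * x**2 + 1)
    return x

class ImplicitRoot(torch.autograd.Function):
    @staticmethod
    def forward(ctx, theta):
        with torch.no_grad():
            x = solve_root(theta)
        ctx.save_for_backward(x)
        return x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # implicit function theorem: dx/dtheta = -(dg/dx)^-1 * dg/dtheta,
        # where dg/dx = 3x^2 + 1 and dg/dtheta = -1
        return grad_out / (3 * x**2 + 1)

theta = torch.tensor(2.0, requires_grad=True)
x = ImplicitRoot.apply(theta)   # root is x = 1 for theta = 2
x.backward()
print(theta.grad)               # 1 / (3*1 + 1) = 0.25
```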
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivative with finite difference.
Our approximation involves only function evaluations, which can be executed in parallel, and requires no gradient computations (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
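As background for the entry above, here is a minimal sketch of a central finite-difference estimate of a first-order directional derivative: two function evaluations and no autograd. The helper name and step size are illustrative assumptions; the paper extends this idea to any-order directional derivatives for score matching.

```python
import torch

def directional_derivative(f, x, v, eps=1e-3):
    """Central finite-difference estimate of the derivative of f at x
    along direction v: only two function evaluations, no gradients."""
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

f = lambda x: (x ** 2).sum()            # gradient of f is 2x
x, v = torch.randn(5), torch.randn(5)
print(directional_derivative(f, x, v))  # finite-difference estimate
print(2 * torch.dot(x, v))              # exact value for comparison
```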
- Differentiable Segmentation of Sequences [2.1485350418225244]
We build on advances in learning continuous warping functions and propose a novel family of warping functions based on the two-sided power (TSP) distribution (a sketch of such a warp follows this entry).
Our formulation includes the important class of segmented generalized linear models as a special case.
We use our approach to model the spread of COVID-19 with Poisson regression, apply it on a change point detection task, and learn classification models with concept drift.
arXiv Detail & Related papers (2020-06-23T15:51:48Z)
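For context on the entry above: the CDF of a two-sided power (TSP) distribution is a monotone map of [0, 1] onto itself, so it can serve as a warping function. The sketch below is an assumed illustration of that idea using the standard TSP CDF, not the authors' exact parameterization.

```python
import torch

def tsp_warp(x, m=0.3, n=2.0):
    """Monotone warp of [0, 1] given by the CDF of the two-sided power
    distribution TSP(m, n): turning point m in (0, 1), shape n > 0."""
    left = m * (x / m) ** n                          # branch for x <= m
    right = 1 - (1 - m) * ((1 - x) / (1 - m)) ** n   # branch for x > m
    return torch.where(x <= m, left, right)

x = torch.linspace(0.0, 1.0, 5)
print(tsp_warp(x))   # maps 0 -> 0 and 1 -> 1, monotonically
```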
This list is automatically generated from the titles and abstracts of the papers on this site.