Neural Fields with Hard Constraints of Arbitrary Differential Order
- URL: http://arxiv.org/abs/2306.08943v2
- Date: Sun, 29 Oct 2023 22:11:39 GMT
- Title: Neural Fields with Hard Constraints of Arbitrary Differential Order
- Authors: Fangcheng Zhong, Kyle Fogarty, Param Hanji, Tianhao Wu, Alejandro
Sztrajman, Andrew Spielberg, Andrea Tagliasacchi, Petra Bosilj, Cengiz
Oztireli
- Abstract summary: We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
- Score: 61.49418682745144
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While deep learning techniques have become extremely popular for solving a
broad range of optimization problems, methods to enforce hard constraints
during optimization, particularly on deep neural networks, remain
underdeveloped. Inspired by the rich literature on meshless interpolation and
its extension to spectral collocation methods in scientific computing, we
develop a series of approaches for enforcing hard constraints on neural fields,
which we refer to as Constrained Neural Fields (CNF). The constraints can be
specified as a linear operator applied to the neural field and its derivatives.
We also design specific model representations and training strategies for
problems where standard models may encounter difficulties, such as conditioning
of the system, memory consumption, and capacity of the network when being
constrained. Our approaches are demonstrated in a wide range of real-world
applications. Additionally, we develop a framework that enables highly
efficient model and constraint specification, which can be readily applied to
any downstream task where hard constraints need to be explicitly satisfied
during optimization.
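The core mechanism can be illustrated for the simplest case of zeroth-order (value) constraints. Below is a minimal, hypothetical sketch in the spirit of the meshless-interpolation idea the abstract mentions: an unconstrained neural field is corrected by an RBF collocation term so that prescribed values hold exactly. All names are illustrative, the Gaussian kernel is an arbitrary choice, and the paper's general linear operators (including derivative constraints) are omitted here.

```python
# Illustrative sketch only (not the authors' code): hard value constraints
# on a neural field via an RBF collocation correction, in the spirit of
# meshless interpolation.
import torch

def rbf(r, eps=3.0):
    # Gaussian radial basis function
    return torch.exp(-(eps * r) ** 2)

def constrained_field(f_theta, x_c, y_c, x):
    """Evaluate f_theta at x, corrected so the field equals y_c at x_c exactly.

    f_theta: unconstrained neural field, (N, d) -> (N,)
    x_c:     (M, d) constraint locations; y_c: (M,) prescribed values
    """
    K = rbf(torch.cdist(x_c, x_c))           # (M, M) collocation matrix
    k = rbf(torch.cdist(x, x_c))             # (N, M) evaluation kernel
    residual = y_c - f_theta(x_c)            # violation of the raw field
    alpha = torch.linalg.solve(K, residual)  # correction coefficients
    return f_theta(x) + k @ alpha            # constraint holds exactly at x_c

# Toy usage: force a random MLP through three prescribed values.
mlp = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
f = lambda x: mlp(x).squeeze(-1)
x_c = torch.tensor([[0.0], [0.5], [1.0]])
y_c = torch.tensor([1.0, -1.0, 2.0])
print(constrained_field(f, x_c, y_c, x_c))  # ~[1.0, -1.0, 2.0]
```

Because the correction is differentiable in the network parameters, training can proceed as usual while the constraints remain satisfied; the abstract's remarks on conditioning and memory consumption concern exactly the kind of linear solve shown here.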
Related papers
- Hard-Constrained Neural Networks with Universal Approximation Guarantees [5.3663546125491735]
HardNet is a framework for constructing neural networks that inherently satisfy hard constraints without sacrificing model capacity.
We show that HardNet retains the universal approximation capabilities of neural networks.
arXiv Detail & Related papers (2024-10-14T17:59:24Z)
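One common way to realize such inherent constraint satisfaction, and a plausible reading of the entry above, is to append a differentiable closed-form projection layer. The sketch below is illustrative only, not necessarily HardNet's construction: it projects raw network outputs onto an affine equality constraint A y = b.

```python
# Illustrative sketch (not necessarily HardNet's layer): a differentiable
# closed-form projection appended to a network so every output satisfies
# the affine constraint A y = b exactly.
import torch

def project_affine(y, A, b):
    # Orthogonal projection of y onto {y : A y = b}; assumes A has full row rank.
    AAt_inv = torch.linalg.inv(A @ A.T)       # (m, m)
    residual = A @ y.T - b.unsqueeze(-1)      # (m, batch) constraint violation
    return y - (A.T @ AAt_inv @ residual).T   # violation removed exactly

A = torch.tensor([[1.0, 1.0, 1.0]])           # outputs must sum to 1
b = torch.tensor([1.0])
y = torch.randn(4, 3)                         # raw network outputs
print(project_affine(y, A, b).sum(dim=1))     # tensor of ones
```

Since the projection is differentiable, gradients flow through it during training, which is consistent with the entry's claim that model capacity and universal approximation are preserved.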
- Deep Neural Network for Constraint Acquisition through Tailored Loss Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel Deep Neural Network (DNN) approach based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z)
- Scaling physics-informed hard constraints with mixture-of-experts [0.0]
We develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE).
MoE imposes the constraint over smaller domains, each of which is solved by an "expert" through differentiable optimization.
Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting.
arXiv Detail & Related papers (2024-02-20T22:45:00Z)
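The sketch below illustrates only the decomposition half of the idea in the entry above: inputs are softly gated to experts that own subdomains. The per-expert differentiable constrained solve described in the entry is omitted, and all names are illustrative.

```python
# Illustrative sketch (decomposition only): soft-gated mixture of experts
# over spatial subdomains; in the paper's approach each expert would
# additionally enforce the constraint on its own subdomain.
import torch

def gate(x, centers, temp=10.0):
    # Soft assignment of points to experts by distance to subdomain centers.
    return torch.softmax(-temp * torch.cdist(x, centers) ** 2, dim=-1)

def moe_field(experts, centers, x):
    w = gate(x, centers)                                     # (N, n_experts)
    outs = torch.stack([e(x).squeeze(-1) for e in experts], dim=-1)
    return (w * outs).sum(dim=-1)                            # blended field

centers = torch.tensor([[0.25], [0.75]])                     # two subdomains
experts = [torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                               torch.nn.Linear(16, 1)) for _ in centers]
x = torch.rand(8, 1)
print(moe_field(experts, centers, x).shape)                  # torch.Size([8])
```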
- NITO: Neural Implicit Fields for Resolution-free Topology Optimization [7.338114424386579]
Topology optimization is a critical task in engineering design, where the goal is to optimally distribute material in a given space.
We introduce Neural Implicit Topology Optimization (NITO), a novel approach to accelerate topology optimization problems using deep learning.
NITO synthesizes structures with up to seven times better structural efficiency compared to SOTA diffusion models.
arXiv Detail & Related papers (2024-02-07T18:27:29Z)
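The representational idea behind a resolution-free method like the one above can be sketched with a coordinate network that maps positions to material density and can be sampled on any grid after training. The architecture below is illustrative, not NITO's.

```python
# Illustrative sketch (not NITO itself): an implicit density field for
# topology optimization, queryable at arbitrary resolution.
import torch

class DensityField(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1))

    def forward(self, xy):
        # Material density in (0, 1) at continuous coordinates xy.
        return torch.sigmoid(self.net(xy)).squeeze(-1)

field = DensityField()
for n in (16, 128):                      # same design, two sampling resolutions
    g = torch.linspace(0.0, 1.0, n)
    xy = torch.cartesian_prod(g, g)      # (n*n, 2) grid of query points
    print(field(xy).reshape(n, n).shape)
```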
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used in machine learning applications.
In this paper, we examine the use of convex neural recovery models.
We show that the stationary points of the non-convex training objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled vector field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z)
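The controlled-vector-field view in the entry above can be made concrete with a residual network read as an explicit-Euler discretization of dx/dt = f(x, u(t)). The toy below is illustrative and not the paper's mean-field construction.

```python
# Illustrative sketch: a residual network as the Euler discretization of a
# continuous-time system driven by a controlled vector field.
import torch

def rollout(x, weights, biases, dt=0.1):
    # One (W, b) control per time step; dx/dt = tanh(W x + b).
    for W, b in zip(weights, biases):
        x = x + dt * torch.tanh(x @ W.T + b)
    return x

d, steps = 4, 10
weights = [0.1 * torch.randn(d, d) for _ in range(steps)]
biases = [torch.zeros(d) for _ in range(steps)]
print(rollout(torch.randn(3, d), weights, biases).shape)  # torch.Size([3, 4])
```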
- Constrained Empirical Risk Minimization: Theory and Practice [2.4934936799100034]
We present a framework that allows the exact enforcement of constraints on parameterized sets of functions such as Deep Neural Networks (DNNs).
We focus on constraints that are outside the scope of equivariant networks used in Geometric Deep Learning.
As a major example of the framework, we restrict filters of a Convolutional Neural Network (CNN) to be wavelets, and apply these wavelet networks to the task of contour prediction in the medical domain.
arXiv Detail & Related papers (2023-02-09T16:11:58Z)
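One way to realize the "filters restricted to wavelets" constraint from the entry above is to learn only combination coefficients over a fixed wavelet basis, so the restriction holds by construction. The toy below uses 2x2 Haar detail filters and is illustrative, not the paper's construction.

```python
# Illustrative sketch: CNN filters constrained to the span of a fixed
# Haar wavelet basis by parameterizing only the combination coefficients.
import torch

# 2x2 Haar detail filters: horizontal, vertical, diagonal.
HAAR = 0.5 * torch.tensor([[[1., -1.], [1., -1.]],
                           [[1., 1.], [-1., -1.]],
                           [[1., -1.], [-1., 1.]]])

class WaveletConv(torch.nn.Module):
    def __init__(self, out_channels):
        super().__init__()
        self.coeff = torch.nn.Parameter(torch.randn(out_channels, HAAR.shape[0]))

    def forward(self, x):
        # Each filter is a learned linear combination of the Haar basis,
        # so the wavelet constraint holds exactly throughout training.
        filt = torch.einsum('ok,khw->ohw', self.coeff, HAAR).unsqueeze(1)
        return torch.nn.functional.conv2d(x, filt)

layer = WaveletConv(out_channels=8)
print(layer(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 8, 27, 27])
```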
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
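The problem setting of the entry above can be sketched with the naive baseline the paper improves on: fit a surrogate to the fixed dataset, then ascend it. Without something like the paper's normalized maximum likelihood estimate, this over-trusts the surrogate away from the data. All details below are illustrative.

```python
# Illustrative sketch of offline model-based optimization (naive baseline,
# not the paper's NML estimator): fit a surrogate, then ascend it.
import torch

X = torch.rand(100, 2)                       # fixed offline query points
y = -(X - 0.3).pow(2).sum(-1)                # logged values of the true function

surrogate = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 1))
fit_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):                         # ordinary regression fit
    fit_opt.zero_grad()
    ((surrogate(X).squeeze(-1) - y) ** 2).mean().backward()
    fit_opt.step()

x = torch.full((1, 2), 0.5, requires_grad=True)
asc_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):                         # ascend the surrogate, not the truth
    asc_opt.zero_grad()
    (-surrogate(x).sum()).backward()
    asc_opt.step()
print(x.detach())                            # near (0.3, 0.3) if the fit is good
```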
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
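A brute-force toy of the mining idea in the entry above: keep every candidate linear inequality that all observed structured outputs satisfy. The paper does this at scale through an integer linear programming formulation; the enumeration below is purely illustrative.

```python
# Illustrative sketch of constraint mining (brute force, not the paper's
# ILP formulation): retain candidate inequalities consistent with all data.
import itertools
import numpy as np

examples = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # observed outputs

mined = []
for coeffs in itertools.product([-1, 0, 1], repeat=3):
    for bound in range(-2, 3):
        # Candidate constraint: coeffs . y <= bound for every example y.
        if all(np.dot(coeffs, y) <= bound for y in examples):
            mined.append((coeffs, bound))

print(len(mined), "constraints consistent with the data")
# e.g. (1, 1, 1) . y <= 2 survives: each example has at most two ones.
```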
- Local Propagation in Constraint-based Neural Networks [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
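The constraint-based view in the entry above can be sketched by treating hidden activations as free variables and penalizing violations of the architectural constraint h = tanh(W x), so every gradient term is local to one layer. Names and penalties below are illustrative.

```python
# Illustrative sketch: activations as optimization variables with the
# architectural constraint enforced as a penalty, yielding local updates.
import torch

x = torch.randn(5, 4)                           # inputs
target = torch.randn(5, 1)                      # supervised targets
W1 = torch.randn(4, 4, requires_grad=True)
W2 = torch.randn(4, 1, requires_grad=True)
h1 = torch.zeros(5, 4, requires_grad=True)      # hidden activations as variables

opt = torch.optim.Adam([W1, W2, h1], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    arch = (h1 - torch.tanh(x @ W1)).pow(2).mean()      # architectural constraint
    fit = (torch.tanh(h1 @ W2) - target).pow(2).mean()  # output constraint
    (arch + fit).backward()                             # all terms are local
    opt.step()
print(f"architectural violation: {arch.item():.4f}")
```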
This list is automatically generated from the titles and abstracts of the papers on this site.