SnareNet: Flexible Repair Layers for Neural Networks with Hard Constraints
- URL: http://arxiv.org/abs/2602.09317v1
- Date: Tue, 10 Feb 2026 01:24:32 GMT
- Title: SnareNet: Flexible Repair Layers for Neural Networks with Hard Constraints
- Authors: Ya-Chi Chu, Alkiviades Boukas, Madeleine Udell
- Abstract summary: We propose SnareNet, a feasibility-controlled architecture for learning mappings. It appends a differentiable repair layer that navigates in the constraint map's range space, steering toward feasibility and producing a repaired output that satisfies constraints to a user-specified tolerance. It consistently attains improved objective quality while satisfying constraints more reliably than prior work.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural networks are increasingly used as surrogate solvers and control policies, but unconstrained predictions can violate physical, operational, or safety requirements. We propose SnareNet, a feasibility-controlled architecture for learning mappings whose outputs must satisfy input-dependent nonlinear constraints. SnareNet appends a differentiable repair layer that navigates in the constraint map's range space, steering iterates toward feasibility and producing a repaired output that satisfies constraints to a user-specified tolerance. To stabilize end-to-end training, we introduce adaptive relaxation, which designs a relaxed feasible set that snares the neural network at initialization and shrinks it into the feasible set, enabling early exploration and strict feasibility later in training. On optimization-learning and trajectory planning benchmarks, SnareNet consistently attains improved objective quality while satisfying constraints more reliably than prior work.
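The repair-layer and adaptive-relaxation ideas in the abstract can be illustrated with a minimal numerical sketch. Everything below (the `repair` routine, the Gauss-Newton step on the constraint violation, and the geometric tolerance schedule) is this digest's own toy rendering of the description above, not the paper's implementation:

```python
import numpy as np

def repair(y0, g, tol=1e-6, max_iter=50):
    """Drive a raw prediction y0 toward {y : g(y) <= 0} by
    Gauss-Newton steps on the constraint violation max(g(y), 0).
    Finite differences stand in for autodiff in this sketch."""
    y = y0.astype(float).copy()
    for _ in range(max_iter):
        v = np.maximum(np.atleast_1d(g(y)), 0.0)
        if np.linalg.norm(v) <= tol:        # feasible to tolerance
            break
        fd = 1e-6
        J = np.empty((v.size, y.size))      # Jacobian of the violation
        for j in range(y.size):
            e = np.zeros_like(y)
            e[j] = fd
            J[:, j] = (np.maximum(np.atleast_1d(g(y + e)), 0.0) - v) / fd
        y -= np.linalg.pinv(J) @ v          # Gauss-Newton step toward feasibility
    return y

def relaxed_tolerance(t, T, eps0, eps_final):
    """Adaptive-relaxation flavor: a tolerance that starts loose enough
    to 'snare' the untrained network and shrinks geometrically to the
    user-specified target over T training steps."""
    frac = min(t / T, 1.0)
    return eps0 * (eps_final / eps0) ** frac
```

For example, `repair(np.array([2.0, 0.0]), lambda y: np.array([y @ y - 1.0]))` pulls the point onto the unit disk.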
Related papers
- Adaptive Neighborhood-Constrained Q Learning for Offline Reinforcement Learning [52.03884701766989]
Offline reinforcement learning (RL) algorithms typically impose constraints on action selection. We propose a new neighborhood constraint that restricts action selection in the Bellman target to the union of neighborhoods of dataset actions. We develop a simple yet effective algorithm, Adaptive Neighborhood-constrained Q learning (ANQ), to perform Q learning with target actions satisfying this constraint.
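The neighborhood constraint described here can be sketched as a projection onto the union of balls around dataset actions. The function below (`neighborhood_project`, with an L2 radius) is an assumed toy rendering of that idea, not ANQ's actual mechanism:

```python
import numpy as np

def neighborhood_project(a, dataset_actions, radius):
    """Project a candidate Bellman-target action onto the union of
    L2 balls of the given radius around dataset actions."""
    diffs = dataset_actions - a                  # (N, d) offsets
    dists = np.linalg.norm(diffs, axis=1)
    i = int(np.argmin(dists))                    # nearest dataset action
    if dists[i] <= radius:
        return a                                 # already in some neighborhood
    # otherwise, move to the boundary of the nearest ball
    return dataset_actions[i] + radius * (a - dataset_actions[i]) / dists[i]
```

A candidate action far from every dataset action gets pulled to the edge of the closest neighborhood; one already inside a neighborhood is left unchanged.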
arXiv Detail & Related papers (2025-11-04T13:42:05Z)
- Enforcing Hard Linear Constraints in Deep Learning Models with Decision Rules [8.098452803458253]
This paper proposes a model-agnostic framework for enforcing input-dependent linear equality and inequality constraints on neural network outputs. The architecture combines a task network trained for prediction accuracy with a safe network trained using decision rules from the runtime and robust optimization to ensure feasibility across the entire input space.
arXiv Detail & Related papers (2025-05-20T03:09:44Z)
- ENFORCE: Nonlinear Constrained Learning with Adaptive-depth Neural Projection [0.0]
We introduce ENFORCE, a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints on its predictions. We prove that our projection mapping is 1-Lipschitz, making it well-suited for stable training. The predictions of our new architecture satisfy $N_C$ equality constraints that are nonlinear in both the inputs and outputs of the neural network.
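A projection onto nonlinear equality constraints of this kind can be sketched with a Newton-style least-squares iteration. The code below is a generic illustration under that assumption, not the paper's AdaNP module:

```python
import numpy as np

def project_equality(y, h, jac, max_iter=20, tol=1e-10):
    """Minimal Newton-style projection onto {y : h(y) = 0}.
    `h` returns the constraint residual(s); `jac` returns its Jacobian."""
    y = y.astype(float).copy()
    for _ in range(max_iter):
        r = np.atleast_1d(h(y))
        if np.linalg.norm(r) <= tol:
            break
        J = np.atleast_2d(jac(y))
        # minimum-norm Newton step: solve J dy = -r in least squares
        y += np.linalg.lstsq(J, -r, rcond=None)[0]
    return y
```

With `h(y) = y·y - 1` and `jac(y) = 2y`, the iteration projects a point onto the unit circle in a handful of steps.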
arXiv Detail & Related papers (2025-02-10T18:52:22Z)
- HardNet: Hard-Constrained Neural Networks with Universal Approximation Guarantees [4.814150850377003]
HardNet is a framework for constructing neural networks that inherently satisfy hard constraints. We show that HardNet retains neural networks' universal approximation capabilities.
arXiv Detail & Related papers (2024-10-14T17:59:24Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
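The augmented Lagrangian idea is easy to sketch on a toy constrained problem: take gradient steps on L(w) = f(w) + lam*c(w) + (rho/2)*c(w)^2, interleaved with dual ascent on lam. The routine below (`sal_train`, written deterministically for clarity) is this digest's own illustration, not the paper's SAL algorithm:

```python
import numpy as np

def sal_train(f_grad, c, c_grad, w, rho=10.0, lr=0.02, epochs=2000):
    """Toy augmented Lagrangian loop: primal gradient steps on
    f(w) + lam*c(w) + (rho/2)*c(w)^2, then dual ascent on lam."""
    lam = 0.0
    for _ in range(epochs):
        g = f_grad(w) + (lam + rho * c(w)) * c_grad(w)
        w = w - lr * g                 # primal descent step
        lam = lam + rho * c(w)         # dual ascent on the multiplier
    return w, lam
```

For minimizing ||w||^2 subject to sum(w) = 1 in two dimensions, the loop converges to w = (0.5, 0.5) with multiplier lam = -1, matching the KKT conditions.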
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
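The soft-shrinkage idea (attenuating rather than zeroing the smallest-magnitude weights, so they can recover in later iterations) can be sketched as follows; `soft_shrink` and its parameters are assumptions for illustration, not the ISS-P procedure itself:

```python
import numpy as np

def soft_shrink(W, p=0.3, shrink=0.9):
    """One sketched soft-shrinkage step: instead of hard-pruning the
    smallest-magnitude fraction p of weights, multiply them by a
    shrink factor so pruning decisions remain reversible."""
    flat = np.abs(W).ravel()
    k = int(p * flat.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(W) <= thresh                  # the weights to attenuate
    out = W.copy()
    out[mask] *= shrink
    return out
```

Repeating this step across training epochs gradually drives the selected weights toward zero while keeping the sparsity pattern adaptable.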
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction [8.29487992932196]
We present an approach to train neural networks that can enforce a wide variety of domain constraints and guarantee that every possible prediction satisfies them.
arXiv Detail & Related papers (2023-03-02T10:40:50Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Better Training using Weight-Constrained Stochastic Dynamics [0.0]
We employ constraints to control the parameter space of deep neural networks throughout training.
The use of customized, appropriately designed constraints can mitigate the vanishing/exploding gradient problem.
We provide a general approach to efficiently incorporate constraints into a gradient Langevin framework.
arXiv Detail & Related papers (2021-06-20T14:41:06Z)
- Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
arXiv Detail & Related papers (2020-06-22T10:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.