A Probabilistic Neuro-symbolic Layer for Algebraic Constraint Satisfaction
- URL: http://arxiv.org/abs/2503.19466v3
- Date: Mon, 16 Jun 2025 10:05:22 GMT
- Title: A Probabilistic Neuro-symbolic Layer for Algebraic Constraint Satisfaction
- Authors: Leander Kurscheidt, Paolo Morettin, Roberto Sebastiani, Andrea Passerini, Antonio Vergari
- Abstract summary: In safety-critical applications, satisfaction of constraints over linear continuous environments is crucial, e.g., an autonomous agent should never crash or go off-road. We introduce a differentiable probabilistic layer that guarantees the satisfaction of non-convex algebraic constraints over continuous variables. This formulation enables efficient and exact renormalization via symbolic integration.
- Score: 13.245011236407166
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In safety-critical applications, guaranteeing the satisfaction of constraints over continuous environments is crucial, e.g., an autonomous agent should never crash into obstacles or go off-road. Neural models struggle in the presence of these constraints, especially when they involve intricate algebraic relationships. To address this, we introduce a differentiable probabilistic layer that guarantees the satisfaction of non-convex algebraic constraints over continuous variables. This probabilistic algebraic layer (PAL) can be seamlessly plugged into any neural architecture and trained via maximum likelihood without requiring approximations. PAL defines a distribution over conjunctions and disjunctions of linear inequalities, parameterized by polynomials. This formulation enables efficient and exact renormalization via symbolic integration, which can be amortized across different data points and easily parallelized on a GPU. We showcase PAL and our integration scheme on a number of benchmarks for algebraic constraint integration and on real-world trajectory data.
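The core mechanism in the abstract can be illustrated in one dimension: a density proportional to a nonnegative polynomial, restricted to a disjunction of linear inequalities (here, a union of disjoint intervals), and renormalized exactly because polynomials integrate in closed form. The sketch below is illustrative only; the function names are not the authors' API, and PAL itself handles multivariate conjunctions and disjunctions with GPU-parallel symbolic integration.

```python
# Minimal 1-D sketch of the idea behind PAL (illustrative, not the paper's
# implementation): a density proportional to a polynomial, supported on a
# non-convex union of intervals, renormalized by exact symbolic integration.

def poly_eval(coeffs, x):
    """Evaluate sum_i coeffs[i] * x**i via Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def poly_antiderivative(coeffs):
    """Exact antiderivative: the integral of x**i is x**(i+1) / (i+1)."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def normalizer(coeffs, intervals):
    """Exact integral of the polynomial over a union of disjoint intervals."""
    F = poly_antiderivative(coeffs)
    return sum(poly_eval(F, b) - poly_eval(F, a) for a, b in intervals)

def density(coeffs, intervals, x):
    """Renormalized density; zero outside the feasible region by construction."""
    if not any(a <= x <= b for a, b in intervals):
        return 0.0
    return poly_eval(coeffs, x) / normalizer(coeffs, intervals)

# p(x) proportional to 1 + x**2 on the non-convex region [0, 1] U [2, 3]
coeffs = [1.0, 0.0, 1.0]
region = [(0.0, 1.0), (2.0, 3.0)]
print(density(coeffs, region, 1.5))  # infeasible point: 0.0
```

Because the normalizer is computed in closed form rather than by sampling, the density is exact and its parameters (the polynomial coefficients) remain differentiable, which is what allows maximum-likelihood training without approximations.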
Related papers
- Semi-Explicit Neural DAEs: Learning Long-Horizon Dynamical Systems with Algebraic Constraints [2.66269503676104]
We propose a method that explicitly enforces algebraic constraints by projecting each ODE step onto the constraint manifold. PNODEs consistently outperform baselines across six benchmark problems, achieving a mean constraint violation error below $10^{-10}$. These results show that constraint projection offers a simple strategy for learning physically consistent long-horizon dynamics.
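The projection idea in this summary can be sketched as a few Gauss-Newton corrections that pull the state back onto the constraint manifold $\{x : g(x) = 0\}$ after each integrator step. This is a generic hedged sketch under assumed names, not the paper's code:

```python
# Hypothetical sketch of manifold projection (not the paper's implementation):
# after each ODE step, correct the state so the algebraic constraint g(x) = 0
# holds, using Newton-style steps along the constraint gradient.

def project(state, g, grad_g, iters=20):
    x = list(state)
    for _ in range(iters):
        r = g(x)                      # current constraint violation
        grad = grad_g(x)
        denom = sum(d * d for d in grad)
        # Step that drives g(x) toward zero along the gradient direction
        x = [xi - r * d / denom for xi, d in zip(x, grad)]
    return x

# Example: keep a planar state on the unit circle x^2 + y^2 = 1.
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_g = lambda x: [2 * x[0], 2 * x[1]]
x = project([1.1, 0.2], g, grad_g)
print(abs(g(x)))  # residual constraint violation, well below 1e-10
```

The quadratic convergence of the Newton correction is what makes violation errors on the order of $10^{-10}$ plausible after only a handful of iterations per step.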
arXiv Detail & Related papers (2025-05-26T20:31:15Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on a central server. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel decentralized nonconvex composite federated learning (DNCFL) algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - FedCanon: Non-Convex Composite Federated Learning with Efficient Proximal Operation on Heterogeneous Data [17.80715992954134]
Composite federated learning offers a general framework for solving machine learning problems with additional regularization terms.
We propose the FedCanon algorithm to solve possibly non-smooth composite optimization problems.
arXiv Detail & Related papers (2025-04-16T09:28:26Z) - Constrained Machine Learning Through Hyperspherical Representation [4.129133569151574]
We present a novel method to enforce constraints in the output space for convex and bounded feasibility regions.
Our method has predictive performance comparable to the other approaches, can guarantee 100% constraint satisfaction, and has a minimal computational cost at inference time.
arXiv Detail & Related papers (2025-04-11T10:19:49Z) - TD(0) Learning converges for Polynomial mixing and non-linear functions [49.1574468325115]
We present theoretical findings for TD learning under more applicable assumptions. This is the first proof of TD(0) convergence on Markov data under universal, instance-independent step sizes. Our results include bounds for linear models and non-linear models under generalized gradients and Hölder continuity.
arXiv Detail & Related papers (2025-02-08T22:01:02Z) - Gradient-Free Generation for Hard-Constrained Systems [41.558608119074755]
Existing constrained generative models rely heavily on gradient information, which is often sparse or computationally expensive in some fields. We introduce a novel framework for adapting pre-trained, unconstrained flow-matching models to satisfy constraints exactly in a zero-shot manner.
arXiv Detail & Related papers (2024-12-02T18:36:26Z) - Scaling physics-informed hard constraints with mixture-of-experts [0.0]
We develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE).
MoE imposes the constraint over smaller domains, each of which is solved by an "expert" through differentiable optimization.
Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting.
arXiv Detail & Related papers (2024-02-20T22:45:00Z) - Probabilistic Exponential Integrators [36.98314810594263]
Like standard solvers, probabilistic ODE solvers suffer performance penalties for certain stiff systems.
This paper develops a class of probabilistic exponential solvers with favorable properties.
We evaluate the proposed methods on multiple stiff differential equations.
arXiv Detail & Related papers (2023-05-24T10:13:13Z) - Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
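The guarantee described here, that predictions always satisfy predefined symbolic constraints, can be sketched by renormalizing a model's output distribution over only the constraint-satisfying structures. This toy enumeration is illustrative only and is not the SPL implementation, which uses tractable circuit representations rather than brute force:

```python
# Illustrative sketch (not the SPL code): restrict an unnormalized
# distribution over discrete structures to those satisfying a symbolic
# constraint, then renormalize, so invalid outputs get probability zero.

from itertools import product

def constrained_dist(scores, constraint):
    # Keep only structures that satisfy the constraint, then renormalize.
    valid = {s: w for s, w in scores.items() if constraint(s)}
    Z = sum(valid.values())
    return {s: w / Z for s, w in valid.items()}

# Constraint: bit-vectors of length 3 with exactly one bit set (one-hot).
one_hot = lambda s: sum(s) == 1
scores = {s: 1.0 + 0.5 * s[0] for s in product((0, 1), repeat=3)}
dist = constrained_dist(scores, one_hot)
print(round(sum(dist.values()), 6))  # 1.0: all mass on valid outputs
```

By construction, every structure violating the constraint has zero probability, which is the "consistency by design" property the summary refers to.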
arXiv Detail & Related papers (2022-06-01T12:02:38Z) - Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms are not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise AutoRegressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.