Going Beyond Approximation: Encoding Constraints for Explainable
Multi-hop Inference via Differentiable Combinatorial Solvers
- URL: http://arxiv.org/abs/2208.03339v1
- Date: Fri, 5 Aug 2022 18:07:53 GMT
- Title: Going Beyond Approximation: Encoding Constraints for Explainable
Multi-hop Inference via Differentiable Combinatorial Solvers
- Authors: Mokanarangan Thayaparan, Marco Valentino, André Freitas
- Abstract summary: Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language.
An ILP formulation is non-differentiable and cannot be integrated into broader deep learning architectures.
Diff-Comb Explainer demonstrates improved accuracy and explainability over non-differentiable solvers, Transformers and existing differentiable constraint-based multi-hop inference frameworks.
- Score: 4.726777092009554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integer Linear Programming (ILP) provides a viable mechanism to encode
explicit and controllable assumptions about explainable multi-hop inference
with natural language. However, an ILP formulation is non-differentiable and
cannot be integrated into broader deep learning architectures. Recently,
Thayaparan et al. (2021a) proposed a novel methodology to integrate ILP with
Transformers to achieve end-to-end differentiability for complex multi-hop
inference. While this hybrid framework has been demonstrated to deliver better
answer and explanation selection than transformer-based and existing ILP
solvers, the neuro-symbolic integration still relies on a convex relaxation of
the ILP formulation, which can produce sub-optimal solutions. To improve these
limitations, we propose Diff-Comb Explainer, a novel neuro-symbolic
architecture based on Differentiable BlackBox Combinatorial solvers (DBCS)
(Pogančić et al., 2019). Unlike existing differentiable solvers, the
presented model does not require the transformation and relaxation of the
explicit semantic constraints, allowing for direct and more efficient
integration of ILP formulations. Diff-Comb Explainer demonstrates improved
accuracy and explainability over non-differentiable solvers, Transformers and
existing differentiable constraint-based multi-hop inference frameworks.
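To make the DBCS mechanism concrete, the sketch below implements the blackbox-differentiation rule of Pogančić et al. (2019) for a toy select-k ILP: the forward pass calls an unmodified combinatorial solver, and the backward pass re-solves with costs perturbed by the incoming gradient. The brute-force solver, the select-k constraint, and the value of lambda are illustrative stand-ins, not the paper's actual formulation.

```python
import numpy as np
from itertools import product

def solve_ilp(w, k):
    """Brute-force stand-in for an ILP solver:
    minimize <w, y> over binary y subject to sum(y) == k."""
    best, best_val = None, np.inf
    for bits in product([0, 1], repeat=len(w)):
        if sum(bits) == k:
            val = float(np.dot(w, bits))
            if val < best_val:
                best, best_val = np.array(bits, dtype=float), val
    return best

def blackbox_grad(w, k, dL_dy, lam=5.0):
    """Blackbox-differentiation gradient (Pogancic et al., 2019):
    perturb the costs with the incoming gradient, re-solve the same
    combinatorial problem, and scale the difference of solutions."""
    y = solve_ilp(w, k)
    y_lam = solve_ilp(w + lam * np.asarray(dL_dy), k)
    return (y_lam - y) / lam

w = np.array([3.0, 1.0, 2.0, 0.0])
y = solve_ilp(w, k=2)  # picks the two cheapest items -> [0, 1, 0, 1]
g = blackbox_grad(w, k=2, dL_dy=np.array([0.0, 0.0, -1.0, 0.0]))
# a descent step w -= eta * g lowers w[2], pushing the solver to select item 2
```

Because the backward pass only needs a second solver call, no relaxation of the feasible set is required, which is the property Diff-Comb Explainer exploits.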
Related papers
- Differentiation Through Black-Box Quadratic Programming Solvers [16.543673072027136]
We introduce dQP, a modular framework that enables plug-and-play differentiation for any quadratic programming (QP) solver.
Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation.
Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers.
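The active-set insight behind dQP can be sketched in a few lines: once the active constraints at the QP optimum are known, the problem reduces to an equality-constrained QP whose KKT conditions form a linear system, and derivatives follow from the implicit function theorem. The setup and names below are illustrative (equality constraints only), not dQP's actual interface.

```python
import numpy as np

# Toy sketch: minimize 0.5 * y^T Q y + q^T y  subject to  A y = b,
# where A collects the active constraints at the optimum.
def qp_solution_and_jacobian(Q, q, A, b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])  # KKT matrix
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    y = sol[:n]
    # Differentiate K @ [y; nu] = [-q; b] w.r.t. q: K @ d[y; nu]/dq = [-I; 0]
    dy_dq = -np.linalg.inv(K)[:n, :n]
    return y, dy_dq
```

The same linear system yields sensitivities with respect to Q, A, or b by differentiating the corresponding entries of the KKT conditions.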
arXiv Detail & Related papers (2024-10-08T20:01:39Z)
- Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond [58.39457881271146]
We introduce a novel framework of combinatorial multi-armed bandits (CMAB) with multivariant and probabilistically triggering arms (CMAB-MT).
Compared with existing CMAB works, CMAB-MT not only enhances the modeling power but also allows improved results by leveraging distinct statistical properties for multivariant random variables.
Our framework can include many important problems as applications, such as episodic reinforcement learning (RL) and probabilistic maximum coverage for goods distribution.
arXiv Detail & Related papers (2024-06-03T14:48:53Z)
- A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference [17.467900115986158]
We introduce Diff-Comb Explainer, a neuro-symbolic architecture for explanation-based Natural Language Inference (NLI).
Diff-Comb Explainer does not necessitate a continuous relaxation of the semantic constraints, enabling a direct, more precise, and efficient incorporation of neural representations into the ILP formulation.
Our experiments demonstrate that Diff-Comb Explainer achieves superior performance when compared to conventional ILP solvers, neuro-symbolic black-box solvers, and Transformer-based encoders.
arXiv Detail & Related papers (2024-04-03T10:29:06Z)
- Efficient Alternating Minimization Solvers for Wyner Multi-View Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, favorable complexity with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z)
- Flexible Differentiable Optimization via Model Transformations [1.081463830315253]
We introduce DiffOpt, a Julia library to differentiate through the solution of optimization problems with respect to arbitrary parameters present in the objective and/or constraints.
arXiv Detail & Related papers (2022-06-10T09:59:13Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization [76.68866368409216]
We propose learning to dynamically select discretization tightness conditioned on inputs.
We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks.
arXiv Detail & Related papers (2022-02-02T23:54:26Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to easily create new ones.
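The "seemingly simple principle" underlying modular implicit differentiation can be shown in a scalar sketch: if the solver output x*(theta) satisfies an optimality condition F(x, theta) = 0, then dx/dtheta = -(dF/dx)^(-1) dF/dtheta, regardless of which solver produced x*. The root-finding example below is illustrative, not the paper's library.

```python
# Implicit differentiation through a "blackbox" solver for F(x, theta) = 0,
# here with F(x, theta) = x**2 - theta (so x* = sqrt(theta)).
def solve_root(theta, iters=50):
    x = 1.0
    for _ in range(iters):            # Newton's method; any solver would do
        x -= (x * x - theta) / (2.0 * x)
    return x

def implicit_grad(x_star, theta):
    dF_dx = 2.0 * x_star              # partial of F w.r.t. x at the solution
    dF_dtheta = -1.0                  # partial of F w.r.t. theta
    return -dF_dtheta / dF_dx         # equals 1 / (2 * sqrt(theta))

x = solve_root(4.0)                   # -> 2.0 (the positive root)
g = implicit_grad(x, 4.0)             # -> 0.25, matching d(sqrt)/dtheta at 4
```

The gradient depends only on the optimality condition at the solution, which is what makes the approach modular across solvers.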
arXiv Detail & Related papers (2021-05-31T17:45:58Z)
- $\partial$-Explainer: Abductive Natural Language Inference via Differentiable Convex Optimization [2.624902795082451]
This paper presents a novel framework named $\partial$-Explainer (Diff-Explainer) that combines the best of both worlds by casting the constrained optimization as part of a deep neural network.
Our experiments show up to $\approx 10\%$ improvement over non-differentiable solvers while still providing explanations supporting the inference.
arXiv Detail & Related papers (2021-05-07T17:49:19Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)