Multitask Kernel-based Learning with First-Order Logic Constraints
- URL: http://arxiv.org/abs/2311.03340v3
- Date: Mon, 5 Feb 2024 11:49:07 GMT
- Title: Multitask Kernel-based Learning with First-Order Logic Constraints
- Authors: Michelangelo Diligenti, Marco Gori, Marco Maggini and Leonardo
Rigutini
- Abstract summary: We consider a multi-task learning scheme where multiple predicates defined on a set of objects are to be jointly learned from examples.
A general approach is presented to convert the FOL clauses into a continuous implementation that can deal with the outputs computed by the kernel-based predicates.
- Score: 13.70920563542248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a general framework to integrate supervised and
unsupervised examples with background knowledge expressed by a collection of
first-order logic clauses into kernel machines. In particular, we consider a
multi-task learning scheme where multiple predicates defined on a set of
objects are to be jointly learned from examples, enforcing a set of FOL
constraints on the admissible configurations of their values. The predicates
are defined on the feature spaces, in which the input objects are represented,
and can be either known a priori or approximated by an appropriate kernel-based
learner. A general approach is presented to convert the FOL clauses into a
continuous implementation that can deal with the outputs computed by the
kernel-based predicates. The learning problem is formulated as a
semi-supervised task that requires the optimization in the primal of a loss
function that combines a fitting loss measure on the supervised examples, a
regularization term, and a penalty term that enforces the constraints on both
the supervised and unsupervised examples. Unfortunately, the penalty term is
not convex and it can hinder the optimization process. However, it is possible
to avoid poor solutions by using a two-stage learning schema, in which the
supervised examples are learned first and then the constraints are enforced.
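As a concrete illustration of the scheme described in the abstract, the sketch below builds two kernel-based predicates tied by a single clause and trains them in two stages. It is a minimal, hypothetical example, not the authors' implementation: the RBF kernel, the squared fitting loss, the product t-norm translation of the clause, and the naive finite-difference optimizer are all assumptions made for brevity.

```python
# Illustrative sketch only (not the authors' code): two kernel-based predicates
# f_A, f_B over the same objects, tied by the clause  forall x: A(x) => B(x).
# The RBF kernel, squared fitting loss, product t-norm translation and the
# crude numerical-gradient optimizer are assumptions made for brevity.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 40 objects, of which only the first 15 carry supervision.
X = rng.normal(size=(40, 2))
sup = np.arange(15)
y_A = (X[sup, 0] > 0).astype(float)            # labels for predicate A
y_B = (X[sup].sum(axis=1) > 0).astype(float)   # labels for predicate B

def rbf(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                                  # Gram matrix over all objects
alpha = np.zeros((2, len(X)))                  # kernel expansion coefficients

def predicates(a):
    # Kernel expansions squashed to [0, 1] so they act as fuzzy truth values.
    return 1.0 / (1.0 + np.exp(-(a @ K)))      # rows: f_A, f_B

def objective(a, lam=1e-3, lam_c=0.0):
    f = predicates(a)
    # (1) fitting loss on the supervised examples only
    fit = ((f[0, sup] - y_A) ** 2).mean() + ((f[1, sup] - y_B) ** 2).mean()
    # (2) kernel-norm regularization terms
    reg = sum(a[k] @ K @ a[k] for k in range(2))
    # (3) clause penalty: the product t-norm truth of A(x) => B(x) is
    #     1 - f_A(x)(1 - f_B(x)); the penalty is its complement, averaged
    #     over supervised AND unsupervised objects alike.
    clause = (f[0] * (1.0 - f[1])).mean()
    return fit + lam * reg + lam_c * clause

def train(a, lam_c, steps=200, lr=0.2, eps=1e-4):
    for _ in range(steps):                     # crude finite-difference descent
        base = objective(a, lam_c=lam_c)
        grad = np.zeros_like(a)
        for idx in np.ndindex(a.shape):
            b = a.copy(); b[idx] += eps
            grad[idx] = (objective(b, lam_c=lam_c) - base) / eps
        a = a - lr * grad
    return a

# Two-stage schema: fit the supervised (convex) part first, then switch on the
# non-convex constraint penalty starting from that solution.
alpha = train(alpha, lam_c=0.0)
alpha = train(alpha, lam_c=1.0)
print("final objective:", objective(alpha, lam_c=1.0))
```

The point of the sketch is that the clause penalty is evaluated on every object, labelled or not, which is what makes the task semi-supervised, and that warm-starting from the purely supervised solution is the abstract's remedy for the non-convexity of that penalty.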
Related papers
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function and the solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Multitask Kernel-based Learning with Logic Constraints [13.70920563542248]
This paper presents a framework to integrate prior knowledge in the form of logic constraints among a set of task functions into kernel machines.
We consider a multi-task learning scheme, where multiple unary predicates on the feature space are to be learned by kernel machines.
A general approach is presented to convert the logic clauses into a continuous implementation that processes the outputs computed by the kernel-based predicates.
arXiv Detail & Related papers (2024-02-16T12:11:34Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Efficient Knowledge Compilation Beyond Weighted Model Counting [7.828647825246474]
We introduce Second Level Algebraic Model Counting (2AMC) as a generic framework for these kinds of problems.
First level techniques based on Knowledge Compilation (KC) have been adapted for specific 2AMC instances by imposing variable order constraints.
We show that we can exploit the logical structure of a 2AMC problem to omit parts of these constraints, thus limiting the negative effect.
arXiv Detail & Related papers (2022-05-16T08:10:40Z) - Instance-Dependent Confidence and Early Stopping for Reinforcement
Learning [99.57168572237421]
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain, ex post, the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z) - Lifting Symmetry Breaking Constraints with Inductive Logic Programming [2.036811219647753]
We introduce a new model-oriented approach for Answer Set Programming that lifts the Symmetry Breaking Constraints into a set of interpretable first-order constraints.
Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs.
arXiv Detail & Related papers (2021-12-22T11:27:48Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - An Information Bottleneck Approach for Controlling Conciseness in
Rationale Extraction [84.49035467829819]
We show that it is possible to better manage this trade-off by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z)