IMLI: An Incremental Framework for MaxSAT-Based Learning of
Interpretable Classification Rules
- URL: http://arxiv.org/abs/2001.01891v1
- Date: Tue, 7 Jan 2020 05:03:53 GMT
- Title: IMLI: An Incremental Framework for MaxSAT-Based Learning of
Interpretable Classification Rules
- Authors: Bishwamittra Ghosh and Kuldeep S. Meel
- Abstract summary: We propose IMLI: an incremental approach to MaxSAT based framework that achieves scalable runtime performance.
IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy and interpretability.
- Score: 40.497133083839664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide adoption of machine learning in critical domains such as medical
diagnosis, law, and education has propelled the need for interpretable techniques,
since end users must understand the reasoning behind decisions made by learning
systems. The computational intractability of interpretable learning has led
practitioners to design heuristic techniques, which fail to provide sound
handles to trade off accuracy and interpretability.
Motivated by the success of MaxSAT solvers over the past decade, a
MaxSAT-based approach, called MLIC, was recently proposed that reduces the
problem of learning interpretable rules expressed in Conjunctive Normal Form
(CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to
that of other state-of-the-art black-box classifiers while generating small
interpretable CNF formulas, its runtime performance lags significantly, which
renders the approach unusable in practice. In this context, the authors
raised the question: is it possible to achieve the best of both worlds, i.e., a
sound framework for interpretable learning that can take advantage of MaxSAT
solvers while scaling to real-world instances?
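To make the reduction concrete, here is a minimal sketch of the kind of MaxSAT encoding such an approach builds, for a single-clause rule over binary features. It assumes the python-sat package; the toy data, the weight lam, and the variable naming are our own illustrative choices, not MLIC's exact formulation.

    # Minimal sketch: learn "predict 1 iff some selected feature is present"
    # as a MaxSAT query (illustrative encoding, not MLIC's exact one).
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    X = [[1, 0, 1], [0, 1, 1], [0, 0, 1], [1, 1, 0]]  # binary feature matrix
    y = [1, 1, 0, 0]                                  # labels
    n, m = len(X), len(X[0])
    lam = 5                    # weight of accuracy relative to rule sparsity

    b = lambda j: j + 1        # b_j: feature j appears in the learned clause
    eta = lambda i: m + i + 1  # eta_i: sample i is written off as noise

    wcnf = WCNF()
    for j in range(m):
        wcnf.append([-b(j)], weight=1)       # soft: prefer fewer features
    for i in range(n):
        wcnf.append([-eta(i)], weight=lam)   # soft: prefer classifying i correctly
        if y[i] == 1:
            # unless noisy, some selected feature must occur in sample i
            wcnf.append([eta(i)] + [b(j) for j in range(m) if X[i][j]])
        else:
            # unless noisy, no feature occurring in sample i may be selected
            for j in range(m):
                if X[i][j]:
                    wcnf.append([eta(i), -b(j)])

    with RC2(wcnf) as solver:
        model = solver.compute()
        cost = solver.cost
    rule = [j for j in range(m) if model[b(j) - 1] > 0]
    print("predict 1 iff any of features", rule, "is present; cost =", cost)

On this toy data no single clause is perfect, so the optimum selects only feature 2 and pays lam for one misclassified negative sample; the soft-clause weights are exactly the handle on the accuracy/interpretability trade-off discussed above.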
In this paper, we take a step towards answering the above question in the
affirmative. We propose IMLI: an incremental, MaxSAT-based framework that
achieves scalable runtime performance via a partition-based training
methodology. Extensive experiments on benchmarks from the UCI repository
demonstrate that IMLI achieves up to three orders of magnitude runtime
improvement without loss of accuracy or interpretability.
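The partition-based idea admits a short sketch as well, again under our own simplifying assumptions (the per-partition encoding mirrors the sketch above, and the bias weight mu is an assumed knob rather than the paper's exact mechanism): each partition produces one small MaxSAT query, and extra soft clauses pull the new rule toward the rule learned so far.

    # Hedged sketch of partition-based incremental training (our
    # simplification, not IMLI's exact encoding).
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    def learn_on_partition(Xp, yp, m, lam=5, prev=None, mu=2):
        b = lambda j: j + 1
        eta = lambda i: m + i + 1
        wcnf = WCNF()
        for j in range(m):
            wcnf.append([-b(j)], weight=1)           # sparsity
            if prev is not None:                     # stay close to prev rule
                wcnf.append([b(j)] if j in prev else [-b(j)], weight=mu)
        for i, (row, label) in enumerate(zip(Xp, yp)):
            wcnf.append([-eta(i)], weight=lam)       # accuracy
            if label == 1:
                wcnf.append([eta(i)] + [b(j) for j in range(m) if row[j]])
            else:
                for j in range(m):
                    if row[j]:
                        wcnf.append([eta(i), -b(j)])
        with RC2(wcnf) as solver:
            model = solver.compute()
        return {j for j in range(m) if model[b(j) - 1] > 0}

    def imli_style_train(X, y, n_partitions=4):
        m, rule = len(X[0]), None
        step = max(1, len(X) // n_partitions)
        for s in range(0, len(X), step):             # one solver call per chunk
            rule = learn_on_partition(X[s:s + step], y[s:s + step], m, prev=rule)
        return rule

Each query now carries noise variables only for the current partition rather than for the whole training set, which is where the runtime savings of an incremental scheme would come from.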
Related papers
- An Incremental MaxSAT-based Model to Learn Interpretable and Balanced Classification Rules [0.0]
This work proposes an incremental MaxSAT-based model for learning interpretable and balanced classification rules.
It builds on the MaxSAT-based approach IMLI, which increases performance by learning a set of rules through incremental application of the model to a dataset.
arXiv Detail & Related papers (2024-03-25T04:43:47Z)
- torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem [1.5850859526672516]
We derive a single differentiable function capable of approximating solutions for the Maximum Satisfiability Problem (MaxSAT).
We present a novel neural network architecture to model our differentiable function, and progressively solve MaxSAT using backpropagation (a generic sketch of this differentiable view appears after this list).
arXiv Detail & Related papers (2024-02-06T02:33:00Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Interpretable Anomaly Detection via Discrete Optimization [1.7150329136228712]
We propose a framework for learning inherently interpretable anomaly detectors from sequential data.
We show that this problem is computationally hard and develop two learning algorithms based on constraint optimization.
Using a prototype implementation, we demonstrate that our approach shows promising results in terms of accuracy and F1 score.
arXiv Detail & Related papers (2023-03-24T16:19:15Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Efficient Learning of Interpretable Classification Rules [34.27987659227838]
This paper contributes IMLI, an interpretable learning framework based on maximum satisfiability (MaxSAT) for classification rules expressible in propositional logic.
In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability.
arXiv Detail & Related papers (2022-05-14T00:36:38Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Learning Implicitly with Noisy Data in Linear Arithmetic [94.66549436482306]
We extend implicit learning in PAC-Semantics to handle intervals and threshold uncertainty in the language of linear arithmetic.
We show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
arXiv Detail & Related papers (2020-10-23T19:08:46Z)
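As flagged in the torchmSAT entry above, the differentiable view of MaxSAT can be illustrated with a generic sketch. This is our own illustration of the general idea using a standard probabilistic relaxation in PyTorch, not torchmSAT's actual architecture: each Boolean variable is relaxed to a probability, and gradient descent minimizes the expected number of falsified clauses.

    # Generic differentiable relaxation of MaxSAT (illustrative only).
    import torch

    clauses = [[1, -2], [2, 3], [-1, -3]]  # literals as +/- 1-based var ids
    n_vars = 3
    logits = torch.zeros(n_vars, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.1)

    for _ in range(200):
        p = torch.sigmoid(logits)  # P(variable is True)
        loss = torch.zeros(())
        for clause in clauses:
            # P(clause falsified) = product over its literals of P(literal False)
            lit_false = torch.stack(
                [1 - p[v - 1] if v > 0 else p[-v - 1] for v in clause])
            loss = loss + lit_false.prod()
        opt.zero_grad()
        loss.backward()
        opt.step()

    assignment = (torch.sigmoid(logits) > 0.5).tolist()  # round to Booleans

Rounding the optimized probabilities yields a candidate assignment; unlike a complete MaxSAT solver, such a relaxation is approximate, which matches the "approximation" framing in the torchmSAT title.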
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.