Foundations of Symbolic Languages for Model Interpretability
- URL: http://arxiv.org/abs/2110.02376v1
- Date: Tue, 5 Oct 2021 21:56:52 GMT
- Title: Foundations of Symbolic Languages for Model Interpretability
- Authors: Marcelo Arenas, Daniel Báez, Pablo Barceló, Jorge Pérez and Bernardo Subercaseaux
- Abstract summary: We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable.
We present a prototype implementation of FOIL wrapped in a high-level declarative language.
- Score: 2.3361634876233817
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several queries and scores have recently been proposed to explain individual
predictions over ML models. Given the need for flexible, reliable, and
easy-to-apply interpretability methods for ML models, we foresee the need for
developing declarative languages to naturally specify different explainability
queries. We do this in a principled way by rooting such a language in a logic,
called FOIL, that allows for expressing many simple but important
explainability queries, and might serve as a core for more expressive
interpretability languages. We study the computational complexity of FOIL
queries over two classes of ML models often deemed to be easily interpretable:
decision trees and ordered binary decision diagrams (OBDDs). Since the number of possible inputs for an ML model
is exponential in its dimension, the tractability of the FOIL evaluation
problem is delicate but can be achieved by either restricting the structure of
the models or the fragment of FOIL being evaluated. We also present a prototype
implementation of FOIL wrapped in a high-level declarative language and perform
experiments showing that such a language can be used in practice.
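To make the kind of query FOIL targets concrete, below is a minimal Python sketch, not the paper's prototype: the tree encoding and the names Node and is_sufficient are illustrative assumptions. It asks a sufficiency-style explainability question over a toy decision tree: does a partial instance already determine the model's output? The brute-force enumeration is exponential in the number of unassigned features, which is precisely the blow-up whose avoidance the paper's tractability results concern.

from dataclasses import dataclass
from itertools import product
from typing import Dict, List, Optional

@dataclass
class Node:
    feature: Optional[str] = None   # internal node: feature tested; leaf: None
    label: Optional[bool] = None    # leaf: the class; internal node: None
    low: Optional["Node"] = None    # subtree taken when feature is False
    high: Optional["Node"] = None   # subtree taken when feature is True

def classify(node: Node, x: Dict[str, bool]) -> bool:
    # Follow the branching dictated by x until a leaf is reached.
    while node.feature is not None:
        node = node.high if x[node.feature] else node.low
    return node.label

def is_sufficient(tree: Node, partial: Dict[str, bool], features: List[str]) -> bool:
    # Brute force: check that every completion of the partial instance
    # reaches a leaf with the same label. Exponential in the number of
    # free features -- the cost the paper's restrictions aim to avoid.
    free = [f for f in features if f not in partial]
    labels = set()
    for bits in product([False, True], repeat=len(free)):
        labels.add(classify(tree, {**partial, **dict(zip(free, bits))}))
    return len(labels) == 1

# Toy tree: classifies positively iff x1 and x2 both hold.
leaf_t, leaf_f = Node(label=True), Node(label=False)
tree = Node(feature="x1", low=leaf_f,
            high=Node(feature="x2", low=leaf_f, high=leaf_t))
print(is_sufficient(tree, {"x1": False}, ["x1", "x2"]))  # True: label is fixed
print(is_sufficient(tree, {"x1": True}, ["x1", "x2"]))   # False: depends on x2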
Related papers
- Understanding and Mitigating Language Confusion in LLMs [76.96033035093204]
We evaluate 15 typologically diverse languages with existing and newly created English and multilingual prompts.
We find that Llama Instruct and Mistral models exhibit high degrees of language confusion.
We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT and preference tuning.
arXiv Detail & Related papers (2024-06-28T17:03:51Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by an LLM-based Symbolic Program (LSP) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
- Evaluating Neural Language Models as Cognitive Models of Language Acquisition [4.779196219827507]
We argue that some of the most prominent benchmarks for evaluating the syntactic capacities of neural language models may not be sufficiently rigorous.
When trained on small-scale data modeling child language acquisition, the LMs can be readily matched by simple baseline models.
We conclude with suggestions for better connecting LMs with the empirical study of child language acquisition.
arXiv Detail & Related papers (2023-10-31T00:16:17Z)
- ThinkSum: Probabilistic reasoning over sets using large language models [18.123895485602244]
We propose a two-stage probabilistic inference paradigm, ThinkSum, which reasons over sets of objects or facts in a structured manner.
We demonstrate the possibilities and advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks.
arXiv Detail & Related papers (2022-10-04T00:34:01Z)
- Interpreting Language Models with Contrastive Explanations [99.7035899290924]
Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.
Existing explanation methods conflate evidence for all these features into a single explanation, which is less interpretable for human understanding.
We show that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena.
arXiv Detail & Related papers (2022-02-21T18:32:24Z)
- Flexible Operations for Natural Language Deduction [32.92866195461153]
ParaPattern is a method for building models to generate logical transformations of diverse natural language inputs without direct human supervision.
We use a BART-based model to generate the result of applying a particular logical operation to one or more premise statements.
We evaluate our models using targeted contrast sets as well as out-of-domain sentence compositions from the QASC dataset.
arXiv Detail & Related papers (2021-04-18T11:36:26Z)
- Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle [88.65264818967489]
We propose a new syntax-aware language model: Syntactic Ordered Memory (SOM).
The model explicitly models structure with an incremental parser and maintains the conditional probability setting of a standard language model.
Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests.
arXiv Detail & Related papers (2020-10-21T17:39:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of its content) and accepts no responsibility for any consequences of its use.