'Put the Car on the Stand': SMT-based Oracles for Investigating
Decisions
- URL: http://arxiv.org/abs/2305.05731v2
- Date: Mon, 29 Jan 2024 21:31:53 GMT
- Title: 'Put the Car on the Stand': SMT-based Oracles for Investigating
Decisions
- Authors: Samuel Judson and Matthew Elacqua and Filip Cano and Timos
Antonopoulos and Bettina Könighofer and Scott J. Shapiro and Ruzica Piskac
- Abstract summary: We show that under minimal assumptions automated reasoning can rigorously interrogate algorithmic behaviors.
We model accountability processes, such as trials or review boards, as Counterfactual-Guided Logic Exploration and Abstraction Refinement loops.
We implement our framework and demonstrate its utility on an illustrative car crash scenario.
- Score: 4.170056099990699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Principled accountability in the aftermath of harms is essential to the
trustworthy design and governance of algorithmic decision making. Legal theory
offers a paramount method for assessing culpability: putting the agent 'on the
stand' to subject their actions and intentions to cross-examination. We show
that, under minimal assumptions, automated reasoning can rigorously interrogate
algorithmic behaviors, as in the adversarial process of legal fact-finding. We
model accountability processes, such as trials or review boards, as
Counterfactual-Guided Logic Exploration and Abstraction Refinement (CLEAR)
loops. We use the formal methods of symbolic execution and satisfiability
modulo theories (SMT) solving to discharge queries about agent behavior in
factual and counterfactual scenarios, as adaptively formulated by a human
investigator. In order to do so, for a decision algorithm $\mathcal{A}$ we use
symbolic execution to represent its logic as a statement $\Pi$ in the decidable
theory $\texttt{QF_FPBV}$. We implement our framework and demonstrate its
utility on an illustrative car crash scenario.
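As a rough illustration of the kind of query such a framework discharges, the sketch below uses the z3 SMT solver's Python bindings over floating-point variables (matching the QF_FPBV setting). It is not the paper's artifact: the toy braking rule, variable names, and thresholds are illustrative assumptions. A factual question ("was braking entailed by the logged inputs?") and a counterfactual question ("is there an input under which the agent would not have braked?") are both reduced to satisfiability checks against the symbolic summary Pi of the decision logic.

```python
# Minimal sketch, assuming z3-solver is installed (pip install z3-solver).
# The decision logic below is a hypothetical stand-in for an algorithm A.
from z3 import FP, Float32, FPVal, Bool, Solver, fpGT, fpLEQ, And, Not, sat

F32 = Float32()

def decision_formula(speed, dist, brake):
    # Hypothetical rule: brake iff speed > 30 and distance to obstacle <= 15.
    return brake == And(fpGT(speed, FPVal(30.0, F32)),
                        fpLEQ(dist, FPVal(15.0, F32)))

speed, dist = FP('speed', F32), FP('dist', F32)
brake = Bool('brake')
pi = decision_formula(speed, dist, brake)  # Pi: symbolic summary of A

# Factual query: with the logged inputs, was braking entailed?
# Pi /\ facts /\ not(brake) being unsatisfiable means braking was entailed.
s = Solver()
s.add(pi, speed == FPVal(45.0, F32), dist == FPVal(12.0, F32), Not(brake))
print("factual: braking entailed" if s.check() != sat else "factual: not entailed")

# Counterfactual query: at the same speed, is there a distance at which
# A would not have braked? A satisfying model is a counterfactual witness.
s = Solver()
s.add(pi, speed == FPVal(45.0, F32), Not(brake))
if s.check() == sat:
    print("counterfactual witness, dist =", s.model()[dist])
```

In the paper's setting, the investigator would inspect such witnesses and adaptively pose refined factual and counterfactual queries, which is the loop structure the CLEAR acronym refers to.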
Related papers
- FLARE: Faithful Logic-Aided Reasoning and Exploration [50.9814063216852]
We introduce a novel approach for traversing the problem space using task decompositions.
We use Large Language Models to plan a solution and soft-formalise the query into facts and predicates using logic programming code.
Our method allows us to compute the faithfulness of the reasoning process w.r.t. the generated code and analyse the steps of the multi-hop search without relying on external solvers.
arXiv Detail & Related papers (2024-10-14T19:39:11Z) - LogicGame: Benchmarking Rule-Based Reasoning Abilities of Large Language Models [87.49676980090555]
Large Language Models (LLMs) have demonstrated notable capabilities across various tasks, showcasing complex problem-solving abilities.
We introduce LogicGame, a novel benchmark designed to evaluate the comprehensive rule understanding, execution, and planning capabilities of LLMs.
arXiv Detail & Related papers (2024-08-28T13:16:41Z) - Demystifying Reinforcement Learning in Production Scheduling via Explainable AI [0.7515066610159392]
Deep Reinforcement Learning (DRL) is a frequently employed technique to solve scheduling problems.
Although DRL agents excel at delivering viable results in short computing times, their reasoning remains opaque.
We apply two explainable AI (xAI) frameworks to describe the reasoning behind scheduling decisions of a specialized DRL agent in a flow production.
arXiv Detail & Related papers (2024-08-19T09:39:01Z) - Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference [20.057611113206324]
We study how to subvert large language models (LLMs) from following prompt-specified rules.
We prove that although LLMs can faithfully follow such rules, maliciously crafted prompts can mislead even idealized, theoretically constructed models.
arXiv Detail & Related papers (2024-06-21T19:18:16Z) - Logic Rules as Explanations for Legal Case Retrieval [9.240902132139187]
We propose a framework that conducts reasoning on the matching of legal cases through learning case-level and law-level logic rules.
Benefiting from the logic and interpretable nature of the logic rules, NS-LCR is equipped with built-in faithful explainability.
arXiv Detail & Related papers (2024-03-03T09:22:21Z) - Automated legal reasoning with discretion to act using s(LAW) [0.294944680995069]
Ethical and legal concerns make it necessary for automated reasoners to justify their conclusions in human-understandable terms.
We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns.
We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation and validate it by translating (and benchmarking) a representative use case.
arXiv Detail & Related papers (2024-01-25T21:11:08Z) - OPERA:Operation-Pivoted Discrete Reasoning over Text [33.36388276371693]
OPERA is an operation-pivoted discrete reasoning framework for machine reading comprehension.
It uses lightweight symbolic operations as neural modules to facilitate reasoning ability and interpretability.
Experiments on both DROP and RACENum datasets show the reasoning ability of OPERA.
arXiv Detail & Related papers (2022-04-29T15:41:47Z) - Linear Temporal Logic Modulo Theories over Finite Traces (Extended
Version) [72.38188258853155]
This paper studies Linear Temporal Logic over Finite Traces (LTLf) in which
proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z) - Automated Machine Learning, Bounded Rationality, and Rational
Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and
Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - On Exploiting Hitting Sets for Model Reconciliation [53.81101846598925]
In human-aware planning, a planning agent may need to provide an explanation to a human user on why its plan is optimal.
A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences in its model and the human's model.
We present a logic-based framework for model reconciliation that extends beyond the realm of planning.
arXiv Detail & Related papers (2020-12-16T21:25:53Z)