Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning
- URL: http://arxiv.org/abs/2506.12667v1
- Date: Sun, 15 Jun 2025 00:09:12 GMT
- Title: Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning
- Authors: Alexis R. Tudor, Yankai Zeng, Huaduo Wang, Joaquín Arias, Gopal Gupta
- Abstract summary: Sub-symbolic machine learning algorithms simulate reasoning but hallucinate. Rule-based reasoners are able to provide the chain of reasoning steps but are complex and use a large number of reasoners. We propose s(CASP), a goal-directed constraint-based answer set programming reasoner.
- Score: 1.1584245758108584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current advances in AI and its applicability have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning algorithms, such as LLMs, simulate reasoning but hallucinate, and their decisions cannot be explained or audited (crucial aspects for trustworthiness). On the other hand, rule-based reasoners, such as Cyc, are able to provide the chain of reasoning steps but are complex and use a large number of reasoners. We propose a middle ground using s(CASP), a goal-directed constraint-based answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), and two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.
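To make the abstract's central idea concrete, the sketch below is a minimal, hypothetical Python illustration of goal-directed (top-down) proof search with negation as failure, the style of reasoning the abstract attributes to s(CASP). It is not s(CASP) itself (which handles full answer set programs with constraints and variables); the rules, predicates (bird, flies, abnormal), and trace format are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not s(CASP)): a toy goal-directed reasoner over ground
# rules with negation as failure, which records the chain of reasoning
# steps used to prove a query. All rules and predicate names are hypothetical.

# Each head atom maps to a list of alternative bodies; "not p" succeeds
# only if p cannot be proven (negation as failure).
RULES = {
    "flies(tweety)": [["bird(tweety)", "not abnormal(tweety)"]],
    "bird(tweety)": [[]],  # a fact: empty body
}

def prove(goal, trace):
    """Goal-directed (top-down) proof search, appending each step to trace."""
    if goal.startswith("not "):
        # Negation as failure: the negated goal holds iff the positive fails.
        ok = not prove(goal[4:], [])
        trace.append(f"assume {goal} (no proof of {goal[4:]} found)" if ok
                     else f"failed {goal}")
        return ok
    for body in RULES.get(goal, []):
        sub_trace = []
        if all(prove(lit, sub_trace) for lit in body):
            trace.extend(sub_trace)
            trace.append(f"conclude {goal}" + (f" from {body}" if body else " (fact)"))
            return True
    return False

if __name__ == "__main__":
    steps = []
    if prove("flies(tweety)", steps):
        print("Proved flies(tweety); justification:")
        for step in steps:
            print("  -", step)
```

Running the script proves flies(tweety) and prints the supporting steps, mirroring in a toy way the justification trees that make goal-directed reasoners explainable and auditable.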
Related papers
- Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor [83.99510317617694]
We argue that a broader conception of what rigorous AI research and practice should entail is needed. We aim to provide useful language and a framework for much-needed dialogue about the AI community's work.
arXiv Detail & Related papers (2025-06-17T15:44:41Z) - Towards A Litmus Test for Common Sense [5.280511830552275]
This paper is the second in a planned series aimed at envisioning a path to safe and beneficial artificial intelligence. We propose a more formal litmus test for common sense, adopting an axiomatic approach that combines minimal prior knowledge constraints with diagonal or Gödel-style arguments.
arXiv Detail & Related papers (2025-01-17T02:02:12Z) - Artificial Expert Intelligence through PAC-reasoning [21.91294369791479]
Artificial Expert Intelligence (AEI) seeks to transcend the limitations of both Artificial General Intelligence (AGI) and narrow AI. AEI seeks to integrate domain-specific expertise with critical, precise reasoning capabilities akin to those of top human experts.
arXiv Detail & Related papers (2024-12-03T13:25:18Z) - Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain of thought finetuning (cot-finetuning) aims to endow small language models (SLM) with reasoning ability to improve their performance towards specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in rationale.
We propose a robust post-thinking mechanism to generate answers before rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z) - A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z) - The Case Against Explainability [8.991619150027264]
We show end-user Explainability's inadequacy to fulfil reason-giving's role in law.
We find that end-user Explainability excels in the fourth function, a quality which raises serious risks.
This study calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability.
arXiv Detail & Related papers (2023-05-20T10:56:19Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning.
Our proposal is based on argumentation theory; we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: a partial one and a complete one.
arXiv Detail & Related papers (2020-09-13T02:08:10Z) - Reasonable Machines: A Research Manifesto [0.0]
A sound ecosystem of trust requires ways for autonomous systems to justify their actions.
The proposal builds on social reasoning models from moral and legal philosophy.
Enabling normative communication creates trust and opens new dimensions of AI application.
arXiv Detail & Related papers (2020-08-14T08:51:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.