Artificial Expert Intelligence through PAC-reasoning
- URL: http://arxiv.org/abs/2412.02441v1
- Date: Tue, 03 Dec 2024 13:25:18 GMT
- Title: Artificial Expert Intelligence through PAC-reasoning
- Authors: Shai Shalev-Shwartz, Amnon Shashua, Gal Beniamini, Yoav Levine, Or Sharir, Noam Wies, Ido Ben-Shaul, Tomer Nussbaum, Shir Granot Peled
- Abstract summary: Artificial Expert Intelligence (AEI) seeks to transcend the limitations of both Artificial General Intelligence (AGI) and narrow AI.
AEI seeks to integrate domain-specific expertise with critical, precise reasoning capabilities akin to those of top human experts.
- Score: 21.91294369791479
- License:
- Abstract: Artificial Expert Intelligence (AEI) seeks to transcend the limitations of both Artificial General Intelligence (AGI) and narrow AI by integrating domain-specific expertise with critical, precise reasoning capabilities akin to those of top human experts. Existing AI systems often excel at predefined tasks but struggle with adaptability and precision in novel problem-solving. To overcome this, AEI introduces a framework for "Probably Approximately Correct (PAC) Reasoning". This paradigm provides robust theoretical guarantees for reliably decomposing complex problems, with a practical mechanism for controlling reasoning precision. In reference to the division of human thought into System 1 for intuitive thinking and System 2 for reflective reasoning (Tversky and Kahneman, 1974), we refer to this new type of reasoning as System 3 for precise reasoning, inspired by the rigor of the scientific method. AEI thus establishes a foundation for error-bounded, inference-time learning.
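The abstract's idea of error-bounded decomposition can be illustrated with a minimal sketch. This is not the paper's actual mechanism; the simple union-bound composition and all function names below are assumptions chosen only to illustrate how per-step failure probabilities can bound end-to-end reasoning error in a PAC-style guarantee.

```python
# Illustrative sketch of PAC-style error budgeting for a decomposed
# reasoning chain. Assumption (not from the paper): if each step i fails
# with probability at most delta_i, the union bound gives an overall
# failure probability of at most sum(delta_i).

def pac_error_budget(step_deltas):
    """Bound the probability that a chain of reasoning steps contains
    at least one error, given per-step failure probabilities."""
    return min(sum(step_deltas), 1.0)

def split_budget(delta_total, num_steps):
    """Divide an overall failure budget evenly across the steps of a
    decomposition."""
    return [delta_total / num_steps] * num_steps

# Example: a 10-step decomposition with a total failure budget of 0.05
# allows each step a failure probability of 0.005, so the chain is
# correct end-to-end with probability at least 95%.
deltas = split_budget(0.05, 10)
overall = pac_error_budget(deltas)  # approximately 0.05
```

The union bound is loose but safe: it holds without any independence assumptions between steps, which is why it is a natural first tool for composing per-step guarantees into a whole-problem guarantee.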
Related papers
- Towards A Litmus Test for Common Sense [5.280511830552275]
This paper is the second in a planned series aimed at envisioning a path to safe and beneficial artificial intelligence.
We propose a more formal litmus test for common sense, adopting an axiomatic approach that combines minimal prior knowledge constraints with diagonal or Gödel-style arguments.
arXiv Detail & Related papers (2025-01-17T02:02:12Z) - Common Sense Is All You Need [5.280511830552275]
Artificial intelligence (AI) has made significant strides in recent years, yet it continues to struggle with a fundamental aspect of cognition present in all animals: common sense.
Current AI systems often lack the ability to adapt to new situations without extensive prior knowledge.
This manuscript argues that integrating common sense into AI systems is essential for achieving true autonomy and unlocking the full societal and commercial value of AI.
arXiv Detail & Related papers (2025-01-11T21:23:41Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make the models more transparent.
We identify four significant challenges that XAI faces within Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z) - Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption [0.2209921757303168]
We propose a novel program of reasoning for artificial intelligence (AI).
We show that AIs manifest an adaptive balancing of precision and efficiency, consistent with principles of resource-rational human cognition.
Our findings reveal a nuanced picture of AI cognition, where trade-offs between resources and objectives lead to the emulation of biological systems.
arXiv Detail & Related papers (2024-03-14T13:53:05Z) - XXAI: Towards eXplicitly eXplainable Artificial Intelligence [0.0]
There are concerns about the reliability and safety of artificial intelligence based on sub-symbolic neural networks.
Symbolic AI, by contrast, has the nature of a white box and is able to ensure the reliability and safety of its decisions.
We propose eXplicitly eXplainable AI (XXAI) - a fully transparent white-box AI based on deterministic logical cellular automata.
arXiv Detail & Related papers (2024-01-05T23:50:10Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - From LSAT: The Progress and Challenges of Complex Reasoning [56.07448735248901]
We study the three challenging and domain-general tasks of the Law School Admission Test (LSAT), including analytical reasoning, logical reasoning and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
arXiv Detail & Related papers (2021-08-02T05:43:03Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.