Computational-level Analysis of Constraint Compliance for General
Intelligence
- URL: http://arxiv.org/abs/2303.04352v3
- Date: Thu, 15 Jun 2023 15:03:11 GMT
- Title: Computational-level Analysis of Constraint Compliance for General
Intelligence
- Authors: Robert E. Wray, Steven J. Jones, John E. Laird
- Abstract summary: Rules, "manners," laws, and moral imperatives are examples of classes of constraints that govern human behavior.
Despite such messiness, humans incorporate constraints in their decisions robustly and rapidly.
General, artificially-intelligent agents must also be able to navigate the messiness of systems of real-world constraints.
- Score: 4.383011485317949
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human behavior is conditioned by codes and norms that constrain action.
Rules, "manners," laws, and moral imperatives are examples of classes of
constraints that govern human behavior. These systems of constraints are
"messy:" individual constraints are often poorly defined, what constraints are
relevant in a particular situation may be unknown or ambiguous, constraints
interact and conflict with one another, and determining how to act within the
bounds of the relevant constraints may be a significant challenge, especially
when rapid decisions are needed. Despite such messiness, humans incorporate
constraints in their decisions robustly and rapidly. General,
artificially-intelligent agents must also be able to navigate the messiness of
systems of real-world constraints in order to behave predictably and
reliably. In this paper, we characterize sources of complexity in constraint
processing for general agents and describe a computational-level analysis for
such constraint compliance. We identify key algorithmic requirements based on
the computational-level analysis and outline an initial, exploratory
implementation of a general approach to constraint compliance.
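The paper outlines only an initial, exploratory implementation, so the following is a minimal, hypothetical sketch rather than the authors' system (the Constraint class, the comply function, and the example norms are invented for illustration): constraint compliance can be pictured as filtering and ranking candidate actions against whichever constraints are judged relevant in the current situation, with weights standing in for how conflicts between constraints get resolved.

```python
# Hypothetical sketch of constraint compliance as action filtering/ranking.
# Names (Constraint, comply) are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constraint:
    name: str
    relevant: Callable[[dict], bool]        # is this constraint active in the situation?
    violated: Callable[[dict, str], bool]   # would this action violate it?
    weight: float = 1.0                     # importance when constraints conflict

def comply(situation: dict, actions: List[str], constraints: List[Constraint]) -> str:
    """Pick the candidate action with the least weighted constraint violation."""
    active = [c for c in constraints if c.relevant(situation)]
    def cost(action: str) -> float:
        return sum(c.weight for c in active if c.violated(situation, action))
    return min(actions, key=cost)

# Toy example: a "keep right" driving norm conflicting with an obstacle-avoidance rule.
keep_right = Constraint("keep_right",
                        relevant=lambda s: s["on_road"],
                        violated=lambda s, a: a == "veer_left",
                        weight=1.0)
avoid_obstacle = Constraint("avoid_obstacle",
                            relevant=lambda s: s["obstacle_ahead"],
                            violated=lambda s, a: a == "go_straight",
                            weight=5.0)
print(comply({"on_road": True, "obstacle_ahead": True},
             ["go_straight", "veer_left", "stop"],
             [keep_right, avoid_obstacle]))   # -> "stop"
```

In the toy example the obstacle-avoidance rule outweighs the "keep right" norm, so the agent settles on the one action that violates neither, a crude stand-in for how conflicting, partially relevant constraints can still yield a rapid decision.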
Related papers
- Normative Requirements Operationalization with Large Language Models [3.456725053685842]
Normative non-functional requirements specify constraints that a system must observe in order to avoid violations of social, legal, ethical, empathetic, and cultural norms.
Recent research has tackled this challenge using a domain-specific language to specify normative requirements.
We propose a complementary approach that uses Large Language Models to extract semantic relationships between abstract representations of system capabilities.
arXiv Detail & Related papers (2024-04-18T17:01:34Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Maximum Causal Entropy Inverse Constrained Reinforcement Learning [3.409089945290584]
We propose a novel method that utilizes the principle of maximum causal entropy to learn constraints and an optimal policy.
We evaluate the effectiveness of the learned policy by assessing the reward received and the number of constraint violations.
Our method has been shown to outperform state-of-the-art approaches across a variety of tasks and environments.
arXiv Detail & Related papers (2023-05-04T14:18:19Z)
- Recursive Constraints to Prevent Instability in Constrained Reinforcement Learning [16.019477271828745]
We consider the challenge of finding a deterministic policy for a Markov decision process.
This class of problem is known to be hard, but the combined requirements of determinism and uniform optimality can create learning instability.
We present a suitable constrained reinforcement learning algorithm that prevents learning instability.
arXiv Detail & Related papers (2022-01-20T02:33:24Z)
- Maximum Likelihood Constraint Inference from Stochastic Demonstrations [5.254702845143088]
This paper extends maximum likelihood constraint inference to stochastic applications by using maximum causal entropy likelihoods.
We propose an efficient algorithm that computes constraint likelihood and risk tolerance in a unified Bellman backup.
arXiv Detail & Related papers (2021-02-24T20:46:55Z)
- CoreDiag: Eliminating Redundancy in Constraint Sets [68.8204255655161]
We present a new algorithm which can be exploited for the determination of minimal cores (minimal non-redundant constraint sets).
The algorithm is especially useful for distributed knowledge engineering scenarios where the degree of redundancy can become high.
In order to show the applicability of our approach, we present an empirical study conducted with commercial configuration knowledge bases.
arXiv Detail & Related papers (2021-02-24T09:16:10Z)
- An Efficient Diagnosis Algorithm for Inconsistent Constraint Sets [68.8204255655161]
We introduce a divide-and-conquer based diagnosis algorithm (FastDiag) which identifies minimal sets of faulty constraints in an over-constrained problem.
We compare FastDiag with the conflict-directed calculation of hitting sets and present an in-depth performance analysis.
arXiv Detail & Related papers (2021-02-17T19:55:42Z)
- Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints [0.0]
Determination of inspection and maintenance policies constitutes a complex optimization problem.
In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDPs) and multi-agent Deep Reinforcement Learning (DRL).
The proposed framework is found to outperform well-established policy baselines and facilitate adept prescription of inspection and intervention actions.
arXiv Detail & Related papers (2020-07-02T20:44:07Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
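As a toy, hypothetical illustration of the two strategies contrasted in "On Regularization and Inference with Label Constraints" above (the constraint, function names, and numbers below are assumptions made up for this sketch, not that paper's method): regularization penalizes constraint-violating predictions during training, while constrained inference corrects violations at prediction time by decoding only among labels consistent with the constraint.

```python
# Toy contrast (illustrative only): a label constraint "class 2 is never allowed
# when feature x[0] < 0", enforced either as a training penalty (regularization)
# or by masking invalid labels at decoding time (constrained inference).
import numpy as np

def violates(x: np.ndarray, label: int) -> bool:
    return label == 2 and x[0] < 0

def regularized_loss(x, y_true, probs, lam=1.0):
    """Cross-entropy plus a penalty on probability mass given to violating labels."""
    ce = -np.log(probs[y_true] + 1e-12)
    penalty = sum(p for k, p in enumerate(probs) if violates(x, k))
    return ce + lam * penalty

def constrained_decode(x, probs):
    """Pick the highest-probability label among those consistent with the constraint."""
    allowed = [k for k in range(len(probs)) if not violates(x, k)]
    return max(allowed, key=lambda k: probs[k])

x = np.array([-0.5, 1.2])
probs = np.array([0.2, 0.3, 0.5])                  # model prefers the forbidden label 2
print(constrained_decode(x, probs))                 # -> 1: violation corrected at inference
print(regularized_loss(x, y_true=1, probs=probs))   # loss inflated by the 0.5 mass on label 2
```

Here the model's top-scoring label violates the constraint; constrained decoding corrects the violation at inference time, whereas the regularization term pressures training toward models that avoid such violations in the first place.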
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.