Active Observer Visual Problem-Solving Methods are Dynamically
Hypothesized, Deployed and Tested
- URL: http://arxiv.org/abs/2108.08145v1
- Date: Wed, 18 Aug 2021 13:33:07 GMT
- Title: Active Observer Visual Problem-Solving Methods are Dynamically
Hypothesized, Deployed and Tested
- Authors: Markus D. Solbach and John K. Tsotsos
- Abstract summary: We show that humans exhibit a variety of problem-solving strategies whose breadth and complexity are surprising and not easily handled by current methodologies.
Results inform our Cognitive Program representation of STAR extending its relevance to real-world tasks.
- Score: 17.58979205709865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The STAR architecture was designed to test the value of the full Selective
Tuning model of visual attention for complex real-world visuospatial tasks and
behaviors. However, knowledge of how humans solve such tasks in 3D as active
observers is lean. We thus devised a novel experimental setup and examined such
behavior. We discovered that humans exhibit a variety of problem-solving
strategies whose breadth and complexity are surprising and not easily handled
by current methodologies. It is apparent that solution methods are dynamically
composed by hypothesizing sequences of actions, testing them, and if they fail,
trying different ones. The importance of active observation is striking as is
the lack of any learning effect. These results inform our Cognitive Program
representation of STAR extending its relevance to real-world tasks.
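The abstract describes solution methods that are dynamically composed by hypothesizing action sequences, deploying them, and trying alternatives when they fail. The following is a minimal sketch of that hypothesize-deploy-test loop; the action vocabulary and the `propose_sequence`, `execute`, and `goal_satisfied` callables are illustrative assumptions, not part of the STAR architecture or the paper's experimental setup.

```python
import random

# Hypothetical action vocabulary for an active observer:
# each action changes viewpoint or fixation before a judgement is made.
ACTIONS = ["move_left", "move_right", "move_closer", "move_back",
           "fixate_target", "fixate_reference"]

def propose_sequence(failed, max_len=4):
    """Hypothesize a new action sequence, avoiding ones that already failed."""
    while True:
        length = random.randint(1, max_len)
        seq = tuple(random.choice(ACTIONS) for _ in range(length))
        if seq not in failed:
            return seq

def solve(execute, goal_satisfied, max_attempts=20):
    """Hypothesize-deploy-test loop: try a sequence, keep it only if it works."""
    failed = set()
    for _ in range(max_attempts):
        seq = propose_sequence(failed)
        observation = execute(seq)        # deploy the hypothesized sequence
        if goal_satisfied(observation):   # test its outcome
            return seq                    # success: this method solved the task
        failed.add(seq)                   # failure: remember it and try another
    return None
```

In this reading, `execute` stands in for the participant actually moving and observing, and `goal_satisfied` for their judgement of whether the visuospatial task is solved; the loop only illustrates the dynamic composition of methods reported in the abstract.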
Related papers
- BEAC: Imitating Complex Exploration and Task-oriented Behaviors for Invisible Object Nonprehensile Manipulation [9.453992660558256]
This paper proposes a novel imitation learning framework called Belief Exploration-Action Cloning (BEAC).
Our proposed method achieved the best task performance and higher mode and action prediction accuracies, while reducing cognitive load in a user study.
arXiv Detail & Related papers (2025-03-21T02:26:14Z)
- Robust Visual Imitation Learning with Inverse Dynamics Representations [32.806294517277976]
We develop an inverse dynamics state representation learning objective to align the expert environment and the learning environment.
With the abstract state representation, we design an effective reward function, which thoroughly measures the similarity between behavior data and expert data.
Our approach can achieve a near-expert performance in most environments, and significantly outperforms the state-of-the-art visual IL methods and robust IL methods.
arXiv Detail & Related papers (2023-10-22T11:47:35Z)
- SeMAIL: Eliminating Distractors in Visual Imitation via Separated Models [22.472167814814448]
We propose a new model-based imitation learning algorithm named Separated Model-based Adversarial Imitation Learning (SeMAIL).
Our method achieves near-expert performance on various visual control tasks with complex observations and the more challenging tasks with different backgrounds from expert observations.
arXiv Detail & Related papers (2023-06-19T04:33:44Z)
- Discovering Individual Rewards in Collective Behavior through Inverse Multi-Agent Reinforcement Learning [3.4437947384641032]
We introduce an off-policy inverse multi-agent reinforcement learning algorithm (IMARL).
By leveraging demonstrations, our algorithm automatically uncovers the reward function and learns an effective policy for the agents.
The proposed IMARL algorithm is a significant step towards understanding collective dynamics from the perspective of its constituents.
arXiv Detail & Related papers (2023-05-17T20:07:30Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Imitation from Observation With Bootstrapped Contrastive Learning [12.048166025000976]
Imitation from observation (IfO) is a learning paradigm that consists of training autonomous agents in a Markov Decision Process by observing expert demonstrations, without access to the expert's actions.
We present BootIfOL, an IfO algorithm that aims to learn a reward function that takes an agent trajectory and compares it to an expert's (a minimal sketch of this trajectory-comparison idea follows the related-papers list below).
We evaluate our approach on a variety of control tasks showing that we can train effective policies using a limited number of demonstrative trajectories.
arXiv Detail & Related papers (2023-02-13T17:32:17Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations [126.78199124026398]
In many real-world imitation learning tasks, the demonstrator and the learner have to act in different but full observation spaces.
In this work, we model the above learning problem as Heterogeneous Observations Learning (HOIL).
We propose the Importance Weighting with REjection (IWRE) algorithm based on the techniques of importance-weighting, learning with rejection, and active querying to solve the key challenge of occupancy measure matching.
arXiv Detail & Related papers (2021-06-17T05:44:04Z)
- Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning [92.05556163518999]
Multi-agent reinforcement learning (MARL) exacerbates the difficulty of learning in large action spaces by imposing various constraints on communication and observability.
For value-based methods, it poses challenges in accurately representing the optimal value function.
For policy gradient methods, it makes training the critic difficult and exacerbates the problem of the lagging critic.
We show that from a learning theory perspective, both problems can be addressed by accurately representing the associated action-value function.
arXiv Detail & Related papers (2021-05-31T23:08:05Z)
- Planning to Explore via Self-Supervised World Models [120.31359262226758]
Plan2Explore is a self-supervised reinforcement learning agent.
We present a new approach to self-supervised exploration and fast adaptation to new tasks.
Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
arXiv Detail & Related papers (2020-05-12T17:59:45Z)
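Several of the imitation-from-observation entries above (e.g., BootIfOL and the inverse-dynamics-representation approach) define a reward by comparing the agent's trajectory to the expert's in a learned embedding space. Below is a minimal sketch of that idea under simple assumptions; the encoder and function names are illustrative placeholders, not the APIs or exact objectives of those papers.

```python
import numpy as np

def encode_trajectory(states, encoder):
    """Embed each observation in a trajectory and average over time."""
    return np.mean([encoder(s) for s in states], axis=0)

def trajectory_reward(agent_states, expert_states, encoder):
    """Reward the agent for trajectories whose embedding summary is close
    to the expert's (negative Euclidean distance)."""
    z_agent = encode_trajectory(agent_states, encoder)
    z_expert = encode_trajectory(expert_states, encoder)
    return -float(np.linalg.norm(z_agent - z_expert))

# Toy usage with an identity encoder on 3-D "observations".
rng = np.random.default_rng(0)
expert = [rng.normal(size=3) for _ in range(10)]
agent = [s + 0.1 * rng.normal(size=3) for s in expert]
print(trajectory_reward(agent, expert, encoder=lambda s: s))
```

In the cited works the encoder is learned (e.g., contrastively or via an inverse-dynamics objective) rather than fixed; this sketch only shows where such a reward sits relative to agent and expert data.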
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.