HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem
Solving
- URL: http://arxiv.org/abs/2102.11344v1
- Date: Mon, 22 Feb 2021 20:37:01 GMT
- Title: HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem
Solving
- Authors: Sirui Xie, Xiaojian Ma, Peiyu Yu, Yixin Zhu, Ying Nian Wu, Song-Chun
Zhu
- Abstract summary: Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue there shall be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
- Score: 104.79156980475686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans learn compositional and causal abstraction, i.e., knowledge, in
response to the structure of naturalistic tasks. When presented with a
problem-solving task involving some objects, toddlers would first interact with
these objects to reckon what they are and what can be done with them.
Leveraging these concepts, they could understand the internal structure of this
task, without seeing all of the problem instances. Remarkably, they further
build cognitively executable strategies to rapidly solve novel problems.
To empower a learning agent with similar capability, we argue there shall be
three levels of generalization in how an agent represents its knowledge:
perceptual, conceptual, and algorithmic. In this paper, we devise the very
first systematic benchmark that offers joint evaluation covering all three
levels. This benchmark is centered around a novel task domain, HALMA, for
visual concept development and rapid problem-solving. Uniquely, HALMA has a
minimum yet complete concept space, upon which we introduce a novel paradigm to
rigorously diagnose and dissect learning agents' capability in understanding
and generalizing complex and structural concepts. We conduct extensive
experiments on reinforcement learning agents with various inductive biases and
carefully report their proficiencies and weaknesses.
Related papers
- Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z) - Brain in a Vat: On Missing Pieces Towards Artificial General
Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z) - Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - EgoTaskQA: Understanding Human Tasks in Egocentric Videos [89.9573084127155]
The EgoTaskQA benchmark targets crucial dimensions of task understanding through question-answering on real-world egocentric videos.
We meticulously design questions that target the understanding of (1) action dependencies and effects, (2) intents and goals, and (3) agents' beliefs about others.
We evaluate state-of-the-art video reasoning models on our benchmark and show significant gaps between these models and humans in understanding complex goal-oriented egocentric videos.
arXiv Detail & Related papers (2022-10-08T05:49:05Z) - How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on
Continual Learning and Functional Composition [26.524289609910653]
A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters.
Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely in isolation from each other.
arXiv Detail & Related papers (2022-07-15T19:53:20Z) - Systematic human learning and generalization from a brief tutorial with
explanatory feedback [3.7826494079172557]
We investigate human adults' ability to learn an abstract reasoning task based on Sudoku.
We find that participants who master the task do so within a small number of trials and generalize well to puzzles outside of the training range.
We also find that most of those who master the task can describe a valid solution strategy, and such participants perform better on transfer puzzles than those whose strategy descriptions are vague or incomplete.
arXiv Detail & Related papers (2021-07-10T00:14:41Z) - Computational principles of intelligence: learning and reasoning with
neural networks [0.0]
This work proposes a novel framework of intelligence based on three principles.
First, the generative and mirroring nature of learned representations of inputs.
Second, a grounded, intrinsically motivated and iterative process for learning, problem solving and imagination.
Third, an ad hoc tuning of the reasoning mechanism over causal compositional representations using inhibition rules.
arXiv Detail & Related papers (2020-12-17T10:03:26Z) - Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and
Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.