A Two-Systems Perspective for Computational Thinking
- URL: http://arxiv.org/abs/2012.03201v1
- Date: Sun, 6 Dec 2020 07:33:45 GMT
- Title: A Two-Systems Perspective for Computational Thinking
- Authors: Arvind W Kiwelekar, Swanand Navandar, Dharmendra K. Yadav
- Abstract summary: This paper suggests adopting Kahneman's two-systems model as a framework for understanding the computational thought process.
A potential benefit of adopting Kahneman's two-systems perspective is that it helps us identify and correct the biases that cause errors in our reasoning.
- Score: 2.4149105714758545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational Thinking (CT) has emerged as one of the vital thinking skills
in recent times, especially for Science, Technology, Engineering and Mathematics
(STEM) graduates. Educators are in search of underlying cognitive models
against which CT can be analyzed and evaluated. This paper suggests adopting
Kahneman's two-systems model as a framework to understand the computational
thought process. Kahneman's two-systems model postulates that human thinking
happens at two levels, i.e. fast and slow thinking. This paper illustrates
through examples that CT activities can be represented and analyzed using
Kahneman's two-systems model. The potential benefit of adopting Kahneman's
two-systems perspective is that it helps us identify and correct the biases that
cause errors in our reasoning. Further, it provides a set of heuristics to speed
up reasoning activities.
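As a purely illustrative sketch (not taken from the paper), the idea of representing CT activities within the two-systems model can be expressed as a simple classification. The activity names and their System-1/System-2 assignments below are hypothetical assumptions, not the paper's examples:

```python
# Hypothetical sketch: tagging computational-thinking (CT) activities with the
# dominant Kahneman system. The activities and assignments are illustrative
# assumptions, not taken from the paper.

FAST = "System 1 (fast)"   # automatic, intuitive thinking
SLOW = "System 2 (slow)"   # deliberate, effortful thinking

# Assumed mapping from CT activity to dominant thinking mode.
CT_ACTIVITIES = {
    "recognizing a familiar code idiom": FAST,
    "estimating a loop's output at a glance": FAST,
    "designing a recursive algorithm": SLOW,
    "tracing a program step by step while debugging": SLOW,
}

def thinking_mode(activity: str) -> str:
    """Return the assumed dominant thinking mode for a CT activity.

    Unlisted activities default to System 2, i.e. deliberate reasoning.
    """
    return CT_ACTIVITIES.get(activity, SLOW)

if __name__ == "__main__":
    for activity, mode in CT_ACTIVITIES.items():
        print(f"{activity!r} -> {mode}")
```

Such a representation makes the paper's framing concrete: biases would arise when a task that needs System-2 analysis is handled by a System-1 shortcut, and heuristics would deliberately move routine subtasks into System 1.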
Related papers
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, and intelligence.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- Probabilistic Results on the Architecture of Mathematical Reasoning Aligned by Cognitive Alternation [2.034092665105039]
We envision a machine capable of solving mathematical problems.
Dividing the quantitative reasoning system into two parts, thought processes and cognitive processes, we provide probabilistic descriptions of the architecture.
arXiv Detail & Related papers (2023-08-17T00:35:11Z)
- Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning [15.094554860151103]
We argue that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
Our work sheds light on the pivotal role of the Representer theorem in advancing our comprehension of neural computation.
arXiv Detail & Related papers (2023-08-02T20:21:18Z)
- Clarifying System 1 & 2 through the Common Model of Cognition [0.0]
We use the Common Model of Cognition to ground System-1 and System-2.
We aim to clarify their underlying mechanisms, persisting misconceptions, and implications for metacognition.
arXiv Detail & Related papers (2023-05-18T02:25:03Z)
- Investigating the Role of Centering Theory in the Context of Neural Coreference Resolution Systems [71.57556446474486]
We investigate the connection between centering theory and modern coreference resolution systems.
We show that high-quality neural coreference resolvers may not benefit much from explicitly modeling centering ideas.
We formulate a version of centering theory that also models recency and show that it captures coreference information better than vanilla centering theory.
arXiv Detail & Related papers (2022-10-26T12:55:26Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We analyze theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research on methods for learning physical concepts in time series data.
We also analyze the most important state-of-the-art methods using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Deep Learning Techniques for Inverse Problems in Imaging [102.30524824234264]
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems.
We present a taxonomy that can be used to categorize different problems and reconstruction methods.
arXiv Detail & Related papers (2020-05-12T18:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.