Gamifying Math Education using Object Detection
- URL: http://arxiv.org/abs/2304.06270v1
- Date: Thu, 13 Apr 2023 05:06:33 GMT
- Title: Gamifying Math Education using Object Detection
- Authors: Yueqiu Sun, Rohitkrishna Nambiar and Vivek Vidyasagaran
- Abstract summary: We present a curriculum-inspired teaching system for kids aged 5-8 to learn geometry using shape tile manipulatives.
This introduces a challenge of oriented object detection for densely packed objects with arbitrary orientations.
We enable our system to understand user interactions and provide real-time audiovisual feedback.
- Score: 0.696125353550498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulatives used in the right way help improve understanding of
mathematical concepts, leading to better learning outcomes. In this paper, we
present a phygital (physical + digital) curriculum-inspired teaching system for kids aged 5-8 to
learn geometry using shape tile manipulatives. Combining smaller shapes to form
larger ones is an important skill kids learn early on which requires shape
tiles to be placed close to each other in the play area. This introduces a
challenge of oriented object detection for densely packed objects with
arbitrary orientations. Leveraging simulated data for neural network training
and lightweight mobile architectures, we enable our system to understand user
interactions and provide real-time audiovisual feedback. Experimental results
show that our network runs in real time with high precision/recall on consumer
devices, thereby providing a consistent and enjoyable learning experience.
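The paper's detection code is not included here, but oriented object detection commonly parameterizes each box as a centre, size, and rotation angle (cx, cy, w, h, θ) rather than an axis-aligned rectangle. A minimal sketch of converting that parameterization to corner points, assuming this generic representation rather than the authors' implementation:

```python
import math

def obb_corners(cx, cy, w, h, angle_rad):
    """Return the four corner points of an oriented bounding box.

    The box is centred at (cx, cy) with width w and height h, and is
    rotated counter-clockwise by angle_rad about its centre.
    """
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate the axis-aligned offset, then translate to the centre.
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```

Representing the angle explicitly is what lets densely packed tiles be separated: two adjacent tiles at different rotations get tight, non-overlapping boxes where axis-aligned boxes would overlap heavily.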
Related papers
- Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity [0.0]
Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation.
We investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation.
arXiv Detail & Related papers (2024-07-13T19:23:11Z)
- SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers [18.9832388952668]
We introduce SweepNet, a novel approach to shape abstraction through sweep surfaces.
We propose an effective parameterization for sweep surfaces, utilizing superellipses for profile representation and B-spline curves for the axis.
By introducing a differentiable neural sweeper and an encoder-decoder architecture, we demonstrate the ability to predict sweep surface representations without supervision.
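SweepNet's exact parameterization is given in the paper; as a rough illustration of the superellipse profile it builds on, a point on the curve |x/a|^n + |y/b|^n = 1 can be sampled with the standard signed-power form (a generic sketch, not the authors' code):

```python
import math

def superellipse_point(a, b, n, t):
    """Point on the superellipse |x/a|^n + |y/b|^n = 1 at parameter t.

    a, b are the semi-axes; n > 0 controls the squareness of the
    profile (n = 2 gives an ordinary ellipse, large n approaches a
    rectangle).
    """
    def sgn_pow(v, p):
        # |v|^p with the sign of v preserved.
        return math.copysign(abs(v) ** p, v)
    return (a * sgn_pow(math.cos(t), 2.0 / n),
            b * sgn_pow(math.sin(t), 2.0 / n))
```

Sweeping such a profile along a B-spline axis then yields a compact, differentiable family of shapes, which is what makes the unsupervised fitting tractable.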
arXiv Detail & Related papers (2024-07-08T18:18:17Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- Playful Interactions for Representation Learning [82.59215739257104]
We propose to use playful interactions in a self-supervised manner to learn visual representations for downstream tasks.
We collect 2 hours of playful data in 19 diverse environments and use self-predictive learning to extract visual representations.
Our representations generalize better than standard behavior cloning and can achieve similar performance with only half the number of required demonstrations.
arXiv Detail & Related papers (2021-07-19T17:54:48Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common representation-learning setting in which the task is not purely opaque: partial knowledge of the underlying data-generating process is available.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Using Shape to Categorize: Low-Shot Learning with an Explicit Shape Bias [22.863686803150625]
We investigate how reasoning about 3D shape can be used to improve low-shot learning methods' generalization performance.
We propose a new way to improve existing low-shot learning approaches by learning a discriminative embedding space using 3D object shape.
We also develop Toys4K, a new 3D object dataset with the largest number of object categories among datasets that can support low-shot learning.
arXiv Detail & Related papers (2021-01-18T19:29:41Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
- Learning Rope Manipulation Policies Using Dense Object Descriptors Trained on Synthetic Depth Data [32.936908766549344]
We present an approach that learns point-pair correspondences between initial and goal rope configurations.
In 50 trials of a knot-tying task with the ABB YuMi Robot, the system achieves a 66% knot-tying success rate from previously unseen configurations.
arXiv Detail & Related papers (2020-03-03T23:43:05Z)
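The rope-manipulation system above learns dense descriptors between initial and goal configurations. A common way to turn two descriptor maps into point-pair correspondences is nearest-neighbour matching in descriptor space (a generic sketch, not the authors' implementation; the function name and shapes are hypothetical):

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Nearest-neighbour matching between two sets of dense descriptors.

    desc_a: (N, D) descriptors sampled from the initial image.
    desc_b: (M, D) descriptors sampled from the goal image.
    Returns, for each row of desc_a, the index of the closest row of
    desc_b in Euclidean distance.
    """
    # Pairwise squared distances via ||a - b||^2 = a.a - 2 a.b + b.b.
    d2 = (np.sum(desc_a ** 2, axis=1, keepdims=True)
          - 2.0 * desc_a @ desc_b.T
          + np.sum(desc_b ** 2, axis=1))
    return np.argmin(d2, axis=1)
```

Matched point pairs can then be fed to a planner or policy that moves each point of the rope toward its goal location.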
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.