Hashigo: A Next Generation Sketch Interactive System for Japanese Kanji
- URL: http://arxiv.org/abs/2504.13940v1
- Date: Tue, 15 Apr 2025 18:37:28 GMT
- Title: Hashigo: A Next Generation Sketch Interactive System for Japanese Kanji
- Authors: Paul Taele, Tracy Hammond
- Abstract summary: Hashigo is a sketch interactive system which achieves human instructor-level critique and feedback on both the visual structure and written technique. This type of automated critique and feedback allows students to target and correct specific deficiencies in their sketches.
- Score: 6.45586946263398
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Language students can increase their effectiveness in learning written Japanese by mastering the visual structure and written technique of Japanese kanji. Yet, existing kanji handwriting recognition systems do not assess the written technique sufficiently enough to discourage students from developing bad learning habits. In this paper, we describe our work on Hashigo, a kanji sketch interactive system which achieves human instructor-level critique and feedback on both the visual structure and written technique of students' sketched kanji. This type of automated critique and feedback allows students to target and correct specific deficiencies in their sketches that, if left untreated, are detrimental to effective long-term kanji learning.
Related papers
- Kanji Workbook: A Writing-Based Intelligent Tutoring System for Learning Proper Japanese Kanji Writing Technique with Instructor-Emulated Assessment [7.676329008076931]
Kanji script writing is often introduced to novice Japanese foreign language students as a step toward Japanese writing mastery.
Instructors often introduce various pedagogical methods -- such as visual structure and written techniques -- to assist students in kanji study.
Current educational applications are also limited by their lack of richer instructor-emulated feedback.
arXiv Detail & Related papers (2025-04-04T19:59:27Z)
- Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training [68.41837295318152]
Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with visual texts.
Existing backbone models have limitations such as misspelling, failing to generate texts, and lack of support for Chinese text.
We propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese.
arXiv Detail & Related papers (2024-10-06T10:25:39Z) - Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition [96.62264528407863]
We propose a self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency.
Inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling.
Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin.
arXiv Detail & Related papers (2024-06-15T04:50:19Z) - MetaScript: Few-Shot Handwritten Chinese Content Generation via
Generative Adversarial Networks [15.037121719502606]
We propose MetaScript, a novel content generation system designed to address the diminishing presence of personal handwriting styles in the digital representation of Chinese characters.
Our approach harnesses the power of few-shot learning to generate Chinese characters that retain the individual's unique handwriting style and maintain the efficiency of digital typing.
arXiv Detail & Related papers (2023-12-25T17:31:19Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [66.79173000135717]
We apply this work to teaching two Indian languages, Kannada and Marathi, which do not have well-developed resources for second language learning.
We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary).
We enlist the help of language educators from schools in North America to perform a manual evaluation; they find that the materials have potential to be used for their lesson preparation and learner evaluation.
arXiv Detail & Related papers (2023-10-27T18:17:29Z)
- Recognition of Handwritten Japanese Characters Using Ensemble of Convolutional Neural Networks [0.17646262965516946]
The study used an ensemble of three convolutional neural networks (CNNs) for recognizing handwritten Kanji characters.
The results indicate the feasibility of using the proposed CNN-ensemble architecture for recognizing handwritten characters.
arXiv Detail & Related papers (2023-06-06T18:30:51Z)
- User Adaptive Language Learning Chatbots with a Curriculum [55.63893493019025]
We adapt lexically constrained decoding to a dialog system, which urges the dialog system to include curriculum-aligned words and phrases in its generated utterances.
The evaluation result demonstrates that the dialog system with curriculum infusion improves students' understanding of target words and increases their interest in practicing English.
arXiv Detail & Related papers (2023-04-11T20:41:41Z)
- UIT-HWDB: Using Transferring Method to Construct A Novel Benchmark for Evaluating Unconstrained Handwriting Image Recognition in Vietnamese [2.8360662552057323]
In Vietnamese, besides the modern Latin characters, there are accents and letter marks, together with characters that confuse state-of-the-art handwriting recognition methods.
Because Vietnamese is a low-resource language, there are few datasets for researching handwriting recognition in it.
Recent works evaluated offline handwriting recognition methods in Vietnamese using images from an online handwriting dataset constructed by connecting pen stroke coordinates without further processing.
This paper proposes the Transferring method to construct a handwriting image dataset that associates crucial natural attributes required for offline handwriting images.
arXiv Detail & Related papers (2022-11-10T08:23:54Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [91.49622922938681]
We present a framework that automatically discovers and visualizes descriptions of different aspects of grammar.
Specifically, we extract descriptions from a natural text corpus that answer questions about morphosyntax and semantics.
We apply this method for teaching the Indian languages, Kannada and Marathi, which, unlike English, do not have well-developed pedagogical resources.
arXiv Detail & Related papers (2022-06-10T14:52:22Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- Morphological Analysis of Japanese Hiragana Sentences using the BI-LSTM CRF Model [0.0]
This study proposes a method to develop neural models of the morphological analyzer for Japanese Hiragana sentences.
Morphological analysis is a technique that divides text data into words and assigns information such as parts of speech.
arXiv Detail & Related papers (2022-01-10T14:36:06Z)
- Automated Transcription for Pre-Modern Japanese Kuzushiji Documents by Random Lines Erasure and Curriculum Learning [6.700873164609009]
Most of the previous methods divided the recognition process into character segmentation and recognition.
In this paper, we extend our previous human-inspired recognition system from multiple lines to full pages of Kuzushiji documents.
To address the lack of training data, we propose a random text line erasure approach that randomly erases text lines and distorts documents.
arXiv Detail & Related papers (2020-05-06T09:17:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.