Towards an AI-Augmented Textbook
- URL: http://arxiv.org/abs/2509.13348v4
- Date: Tue, 30 Sep 2025 06:33:04 GMT
- Title: Towards an AI-Augmented Textbook
- Authors: LearnLM Team, Google: Alicia Martín, Amir Globerson, Amy Wang, Anirudh Shekhawat, Anna Iurchenko, Anisha Choudhury, Avinatan Hassidim, Ayça Çakmakli, Ayelet Shasha Evron, Charlie Yang, Courtney Heldreth, Diana Akrong, Gal Elidan, Hairong Mu, Ian Li, Ido Cohen, Katherine Chou, Komal Singh, Lev Borovoi, Lidan Hackmon, Lior Belinsky, Michael Fink, Niv Efron, Preeti Singh, Rena Levitt, Shashank Agarwal, Shay Sharon, Tracey Lee-Joe, Xiaohong Hao, Yael Gold-Zamir, Yael Haramaty, Yishay Mor, Yoav Bar Sinai, Yossi Matias
- Abstract summary: We present an approach for transforming and augmenting textbooks using generative AI. We refer to the system built with this approach as Learn Your Way. We report pedagogical evaluations of the different transformations and augmentations, and present the results of a randomized control trial.
- Score: 35.262145458142804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Textbooks are a cornerstone of education, but they have a fundamental limitation: they are a one-size-fits-all medium. Any new material or alternative representation requires arduous human effort, so textbooks cannot be adapted in a scalable manner. We present an approach for transforming and augmenting textbooks using generative AI, adding layers of multiple representations and personalization while maintaining content integrity and quality. We refer to the system built with this approach as Learn Your Way. We report pedagogical evaluations of the different transformations and augmentations, and present the results of a randomized control trial, highlighting the advantages of learning with Learn Your Way over regular textbook usage.
Related papers
- Prompting Forgetting: Unlearning in GANs via Textual Guidance [4.3562145620596215]
We propose Text-to-Unlearn, a novel framework that selectively unlearns concepts from pre-trained GANs using only text prompts. Our approach guides the unlearning process without requiring additional datasets or supervised fine-tuning. To our knowledge, Text-to-Unlearn is the first cross-modal unlearning framework for GANs.
arXiv Detail & Related papers (2025-04-01T22:18:40Z)
- Adaptive Multi-Modality Prompt Learning [21.86784369327551]
We propose an adaptive multi-modality prompt learning method to address these issues.
The image prompt learning achieves in-sample and out-of-sample generalization by first masking meaningless patches and then padding them with learnable parameters and information from the text.
Experimental results on real datasets demonstrate that our method outperforms SOTA methods across different downstream tasks.
arXiv Detail & Related papers (2023-11-30T12:10:22Z)
- ITEm: Unsupervised Image-Text Embedding Learning for eCommerce [9.307841602452678]
Product embedding serves as a cornerstone for a wide range of applications in eCommerce.
We present an image-text embedding model (ITEm) that is designed to better attend to image and text modalities.
We evaluate the pre-trained ITEm on two tasks: the search for extremely similar products and the prediction of product categories.
arXiv Detail & Related papers (2023-10-22T15:39:44Z)
- Enhancing Textbooks with Visuals from the Web for Improved Learning [50.01434477801967]
In this paper, we investigate the effectiveness of vision-language models to automatically enhance textbooks with images from the web.
We collect a dataset of e-textbooks in the math, science, social science and business domains.
We then set up a text-image matching task that involves retrieving and appropriately assigning web images to textbooks.
arXiv Detail & Related papers (2023-04-18T12:16:39Z)
- Brief Introduction to Contrastive Learning Pretext Tasks for Visual Representation [0.0]
We introduce contrastive learning, a subset of unsupervised learning methods.
The purpose of contrastive learning is to embed augmented views of the same sample near each other while pushing away those from different samples.
We offer some strategies from contrastive learning that have recently been published and are focused on pretext tasks for visual representation.
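The pull-together/push-apart objective just described can be sketched as an NT-Xent (InfoNCE-style) loss over a batch of paired augmentations; the function, the use of cosine similarity, and the temperature value below are illustrative assumptions, not details from this paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same samples."""
    z = np.concatenate([z1, z2], axis=0)              # (2B, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine similarity
    sim = z @ z.T / temperature                       # (2B, 2B) similarity logits
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # the positive for row i is the other view of the same sample: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())
```

Minimizing this loss pulls each pair of views together relative to every other embedding in the batch.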
arXiv Detail & Related papers (2022-10-06T18:54:10Z)
- Adaptive Text Recognition through Visual Matching [86.40870804449737]
We introduce a new model that exploits the repetitive nature of characters in languages.
By doing this, we turn text recognition into a shape matching problem.
We show that it can handle challenges that traditional architectures are not able to solve without expensive retraining.
arXiv Detail & Related papers (2020-09-14T17:48:53Z)
- Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
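As a toy illustration of this reduction, each meta-training example below pairs a fixed-length summary of one task's dataset (the "feature") with that task's target-model parameter (the "label"), and the meta-learner is plain least squares over those pairs. The 1-D linear-regression tasks and the hand-picked feature map are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dataset_features(x, y):
    # fixed-length summary of one task's dataset (hypothetical feature map)
    return np.array([np.mean(x * y), np.mean(x * x)])

rng = np.random.default_rng(0)
feats, labels = [], []
for _ in range(200):                          # 200 meta-training tasks
    w = rng.uniform(-2, 2)                    # task's target model: y = w * x
    x = rng.normal(size=100)
    y = w * x + 0.1 * rng.normal(size=100)
    feats.append(dataset_features(x, y))      # "feature" = dataset summary
    labels.append(w)                          # "label" = target-model parameter
F, L = np.array(feats), np.array(labels)

# the meta-learner is ordinary supervised learning: least squares on (feature, label)
theta, *_ = np.linalg.lstsq(F, L, rcond=None)

# meta-test: predict an unseen task's parameter from its dataset alone
x_new = rng.normal(size=100)
y_new = 1.5 * x_new + 0.1 * rng.normal(size=100)
w_hat = float(dataset_features(x_new, y_new) @ theta)
```

Because the meta-learner never sees the tasks' true parameters at test time, any supervised-learning tool (here, least squares) can play the role of a meta-learning algorithm under this view.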
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
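The unified text-to-text format described above can be sketched as a helper that casts every task as a mapping from a prefixed input string to an output string; the prefixes mirror T5's conventions, but the helper itself is a hypothetical sketch, not code from the paper.

```python
# Hypothetical sketch: every task becomes (input string with task prefix) -> (output string).
def to_text_to_text(task: str, text: str, target: str) -> tuple[str, str]:
    prefixes = {
        "translate": "translate English to German: ",  # translation task
        "summarize": "summarize: ",                     # summarization task
        "sst2": "sst2 sentence: ",                      # sentiment classification
    }
    if task not in prefixes:
        raise ValueError(f"unknown task: {task}")
    # both sides of every example are plain text, so one model handles all tasks
    return prefixes[task] + text, target

pair = to_text_to_text("summarize", "Long article ...", "Short summary.")
```

Because inputs and targets are plain strings for every task, a single sequence-to-sequence model with one training objective covers translation, summarization, and classification alike.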
This list is automatically generated from the titles and abstracts of the papers on this site.