A Tutorial on Teaching Data Analytics with Generative AI
- URL: http://arxiv.org/abs/2411.07244v1
- Date: Fri, 25 Oct 2024 05:27:48 GMT
- Title: A Tutorial on Teaching Data Analytics with Generative AI
- Authors: Robert L. Bray
- Abstract summary: This tutorial addresses the challenge of incorporating large language models (LLMs) in a data analytics class.
It details several new in-class and out-of-class teaching techniques enabled by AI.
- Abstract: This tutorial addresses the challenge of incorporating large language models (LLMs), such as ChatGPT, in a data analytics class. It details several new in-class and out-of-class teaching techniques enabled by AI. For example, instructors can parallelize instruction by having students interact with different custom-made GPTs to learn different parts of an analysis and then teach each other what they learned from their AIs. For another example, instructors can turn problem sets into AI tutoring sessions, whereby a custom-made GPT guides a student through the problems, and the student uploads the chat log as their homework submission. For a third example, instructors can assign different labs to each section of a class and have each section create AI assistants to help the other sections work through their labs. This tutorial advocates the programming-in-English paradigm, in which students express the desired data transformations in prose and then use AI to generate the corresponding code. Students can wrangle data more effectively by programming in English than by manipulating it in Excel. However, some students will program in English better than others, so instructors will still derive a robust grade distribution (at least with current LLMs).
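The programming-in-English workflow can be illustrated with a minimal sketch. The prose prompt, dataset, and variable names below are hypothetical, invented for illustration; they are not from the tutorial. The student states the transformation in plain English, and the LLM returns code such as:

```python
# Prose prompt a student might give an LLM (hypothetical):
#   "For each region, keep only orders over $100, then report the
#    average order value per region, sorted from highest to lowest."
#
# Code an LLM might generate in response (plain-Python sketch):
from collections import defaultdict

orders = [  # toy stand-in for the student's spreadsheet
    {"region": "East", "value": 250.0},
    {"region": "East", "value": 80.0},
    {"region": "West", "value": 120.0},
    {"region": "West", "value": 300.0},
]

totals, counts = defaultdict(float), defaultdict(int)
for row in orders:
    if row["value"] > 100:  # "keep only orders over $100"
        totals[row["region"]] += row["value"]
        counts[row["region"]] += 1

# per-region average, sorted from highest to lowest
averages = {r: totals[r] / counts[r] for r in totals}
report = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
print(report)  # [('East', 250.0), ('West', 210.0)]
```

The grading signal the abstract mentions comes from how precisely students state the transformation: a vague prompt ("summarize the orders") yields vaguer code than the fully specified one above.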
Related papers
- Integrating AI Tutors in a Programming Course
RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors.
This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis.
(arXiv, 2024-07-14)
- YODA: Teacher-Student Progressive Learning for Language Models
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data generated by YODA yields significant gains over standard supervised fine-tuning (SFT).
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - How to Build an AI Tutor that Can Adapt to Any Course and Provide Accurate Answers Using Large Language Model and Retrieval-Augmented Generation [0.0]
The OpenAI Assistants API allows AI Tutor to easily embed, store, retrieve, and manage files and chat history.
The AI Tutor prototype demonstrates its ability to generate relevant, accurate answers with source citations.
arXiv Detail & Related papers (2023-11-29T15:02:46Z) - Large Language Model-Driven Classroom Flipping: Empowering
Student-Centric Peer Questioning with Flipped Interaction [3.1473798197405953]
This paper investigates a pedagogical approach of classroom flipping based on flipped interaction in large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow that integrates prompt engineering with clicker and JiTT quizzes via a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
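Flipped interaction can be sketched in a few lines. The function name and prompt wording below are illustrative assumptions, not taken from the paper: the point is only that the prompt instructs the model to generate questions rather than answers.

```python
def flipped_prompt(topic: str, n_questions: int = 3) -> str:
    """Build a prompt that asks an LLM to pose questions instead of answering."""
    return (
        f"You are running a classroom poll on '{topic}'. "
        f"Instead of answering, ask the student {n_questions} short "
        "quiz questions, one at a time, and wait for each response."
    )

print(flipped_prompt("hypothesis testing"))
```

Swapping the topic string per lecture is what lets the same routine drive both the poll-prompt-quiz and quiz-prompt-discuss variants.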
arXiv Detail & Related papers (2023-11-14T15:48:19Z) - Learning from Teaching Assistants to Program with Subgoals: Exploring
the Potential for AI Teaching Assistants [18.14390906820148]
We investigate the practicality of using generative AI as TAs in programming education by examining novice learners' interaction with TAs in a subgoal learning environment.
Our study shows that learners using AI TAs can solve tasks faster while achieving comparable scores.
Based on our chat-log analysis, we suggest guidelines for better designing and utilizing generative AI as TAs in programming education.
arXiv Detail & Related papers (2023-09-19T08:30:58Z) - AutoML-GPT: Automatic Machine Learning with GPT [74.30699827690596]
We propose developing task-oriented prompts and automatically utilizing large language models (LLMs) to automate the training pipeline.
We present AutoML-GPT, which employs GPT as a bridge to diverse AI models and dynamically trains models with optimized hyperparameters.
This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas.
arXiv Detail & Related papers (2023-05-04T02:09:43Z) - HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging
Face [85.25054021362232]
Large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning.
LLMs could act as a controller to manage existing AI models to solve complicated AI tasks.
We present HuggingGPT, an LLM-powered agent that connects various AI models in machine learning communities.
arXiv Detail & Related papers (2023-03-30T17:48:28Z) - Smart tutor to provide feedback in programming courses [0.0]
We present an AI-based intelligent tutor that answers students' programming questions.
The tool was tested by university students at URJC over a whole course.
arXiv Detail & Related papers (2023-01-24T11:00:06Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on model judgments and increased cognitive workload.
(arXiv, 2020-01-24)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.