How We Manage an Army of Teaching Assistants: Experience Report on
Scaling a CS1 Course
- URL: http://arxiv.org/abs/2311.14241v1
- Date: Fri, 24 Nov 2023 01:12:05 GMT
- Authors: Ildar Akhmetov, Sadaf Ahmed, Kezziah Ayuno
- Abstract summary: A considerable increase in enrollment numbers poses major challenges in course management.
The course uses a three-tier team structure, with each team led by an experienced Lead TA.
Five functional teams each focus on a specific area of responsibility: communication, content, "lost student" support, plagiarism, and scheduling.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A considerable increase in enrollment numbers poses major challenges in
course management, such as fragmented information sharing, inefficient
meetings, and poor understanding of course activities among a large team of
teaching assistants. To address these challenges, we restructured the course,
drawing inspiration from successful management and educational practices. We
developed an organized, three-tier structure for teams, each led by an
experienced Lead TA. We also formed five functional teams, each focusing on a
specific area of responsibility: communication, content, "lost student"
support, plagiarism, and scheduling. In addition, we updated our recruitment
method for undergraduate TAs, following a model similar to the one used in the
software industry, while also deciding to mentor Lead TAs in place of
traditional training. Our experiences, lessons learned, and future plans for
enhancement are detailed in this experience report. We emphasize the value of
applying management techniques to large-scale course delivery, invite
cooperation in refining these strategies, and encourage other institutions to
consider and adapt this approach, tailoring it to their specific needs.
Related papers
- Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning [12.651588927599441]
Instruction tuning aims to align large language models with open-domain instructions and human-preferred responses.
We introduce Task-Aware Curriculum Planning for Instruction Refinement (TAPIR) to select instructions that are difficult for a student LLM to follow.
To balance the student's capabilities, task distributions in training sets are adjusted with responses automatically refined according to their corresponding tasks.
arXiv Detail & Related papers (2024-05-22T08:38:26Z)
- Enhancing Student Engagement in Large-Scale Capstone Courses: An Experience Report [2.7629502923028944]
Capstone courses offer students a valuable opportunity to gain hands-on experience in software development.
Coordinating a capstone course, especially for a large student cohort, can be a daunting task for academic staff.
We outline the iterative development and refinement of our capstone course as it grew substantially in size over a span of six consecutive sessions.
arXiv Detail & Related papers (2024-04-03T23:59:35Z)
- Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z)
- Introspective Action Advising for Interpretable Transfer Learning [7.673465837624365]
Transfer learning can be applied in deep reinforcement learning to accelerate the training of a policy in a target task.
We propose an alternative approach to transfer learning between tasks based on action advising, in which a teacher trained in a source task actively guides a student's exploration in a target task.
arXiv Detail & Related papers (2023-06-21T14:53:33Z)
- Learning to Transfer Role Assignment Across Team Sizes [48.43860606706273]
We propose a framework to learn role assignment and transfer across team sizes.
We demonstrate that re-using the role-based credit assignment structure can foster the learning process of larger reinforcement learning teams.
arXiv Detail & Related papers (2022-04-17T11:22:01Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
- Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance on stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
- Learning to Multi-Task Learn for Better Neural Machine Translation [53.06405021125476]
Multi-task learning is an elegant approach to inject linguistic-related biases into neural machine translation models.
We propose a novel framework for learning the training schedule, i.e., learning to multi-task learn, for the biased-MTL setting of interest.
Experiments show the resulting automatically learned training schedulers are competitive with the best, and lead to up to +1.1 BLEU score improvements.
arXiv Detail & Related papers (2020-01-10T03:12:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.