From Interests to Insights: An LLM Approach to Course Recommendations Using Natural Language Queries
- URL: http://arxiv.org/abs/2412.19312v2
- Date: Mon, 30 Dec 2024 15:30:23 GMT
- Title: From Interests to Insights: An LLM Approach to Course Recommendations Using Natural Language Queries
- Authors: Hugh Van Deventer, Mark Mills, August Evrard
- Abstract summary: This paper describes a novel Large Language Model (LLM) course recommendation system.
It applies a Retrieval Augmented Generation (RAG) method to the corpus of course descriptions.
The system first generates an 'ideal' course description based on the user's query.
This description is converted into a search vector using embeddings, which is then used to find actual courses with similar content.
- Abstract: Most universities in the United States encourage their students to explore academic areas before declaring a major and to acquire academic breadth by satisfying a variety of requirements. Each term, students must choose a handful of courses to take from among many thousands of offerings spanning dozens of subject areas. The curricular environment is also dynamic, and poor communication and search functions on campus can limit a student's ability to discover new courses of interest. To support both students and their advisers in such a setting, we explore a novel Large Language Model (LLM) course recommendation system that applies a Retrieval Augmented Generation (RAG) method to the corpus of course descriptions. The system first generates an 'ideal' course description based on the user's query. This description is converted into a search vector using embeddings, which is then used to find actual courses with similar content by comparing embedding similarities. We describe the method and assess the quality and fairness of some example prompts. Steps to deploy a pilot system on campus are discussed.
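The abstract describes a three-step pipeline: generate an 'ideal' course description from the student's query with an LLM, embed that description, and rank real course descriptions by embedding similarity. A minimal sketch of that pipeline is given below, assuming the OpenAI Python client and a precomputed matrix of course-description embeddings; the model names, prompt wording, and helper functions are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the recommendation pipeline described in the abstract.
# Assumptions (not from the paper): the OpenAI Python client, the model names,
# the prompt wording, and a precomputed (n_courses x dim) matrix of
# course-description embeddings. All names here are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ideal_description(query: str) -> str:
    """Step 1: ask an LLM to write an 'ideal' course description for the query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Write a realistic university course description that "
                        "best matches the student's stated interests."},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content


def embed(text: str) -> np.ndarray:
    """Step 2: convert text into a dense search vector."""
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(result.data[0].embedding)


def recommend(query: str, course_ids: list[str],
              course_vectors: np.ndarray, k: int = 5) -> list[str]:
    """Step 3: rank real courses by cosine similarity to the ideal description."""
    q = embed(ideal_description(query))
    sims = (course_vectors @ q) / (
        np.linalg.norm(course_vectors, axis=1) * np.linalg.norm(q))
    return [course_ids[i] for i in np.argsort(-sims)[:k]]
```

In a deployed version, the course-description embeddings would presumably be computed once per catalog update and cached, so each student query costs only one generation call and one embedding call.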
Related papers
- Generating Situated Reflection Triggers about Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning [3.2721068185888127]
We present a proof-of-concept application to offer students dynamic and contextualized feedback.
Specifically, we augment an Online Programming Exercise bot for a college-level Cloud Computing course with ChatGPT.
We demonstrate that LLMs can be used to generate highly situated reflection triggers that incorporate details of the collaborative discussion happening in context.
arXiv Detail & Related papers (2024-04-28T17:56:14Z) - Ruffle&Riley: Insights from Designing and Evaluating a Large Language Model-Based Conversational Tutoring System [21.139850269835858]
Conversational tutoring systems (CTSs) offer learning experiences through interactions based on natural language.
We discuss and evaluate a novel type of CTS that leverages recent advances in large language models (LLMs) in two ways.
The system enables AI-assisted content authoring by inducing an easily editable tutoring script automatically from a lesson text.
arXiv Detail & Related papers (2024-04-26T14:57:55Z) - Analyzing LLM Usage in an Advanced Computing Class in India [4.580708389528142]
This study examines the use of large language models (LLMs) by undergraduate and graduate students for programming assignments in advanced computing classes.
We conducted a comprehensive analysis involving 411 students from a Distributed Systems class at an Indian university.
arXiv Detail & Related papers (2024-04-06T12:06:56Z) - Exploring How Multiple Levels of GPT-Generated Programming Hints Support or Disappoint Novices [0.0]
We investigated whether different levels of hints can support students' problem-solving and learning.
We conducted a think-aloud study with 12 novices using the LLM Hint Factory.
We discovered that high-level natural language hints alone can be unhelpful or even misleading.
arXiv Detail & Related papers (2024-04-02T18:05:26Z) - Helping university students to choose elective courses by using a hybrid multi-criteria recommendation system with genetic optimization [0.0]
This paper presents a hybrid RS that combines Collaborative Filtering (CF) and Content-based Filtering (CBF).
A Genetic Algorithm (GA) has been developed to automatically discover the optimal RS configuration.
Experimental results include a study of the most relevant criteria for course recommendation.
arXiv Detail & Related papers (2024-02-13T11:02:12Z) - Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis [91.5632751731927]
Large Language Models such as ChatGPT have showcased remarkable abilities in solving general tasks.
We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders.
We analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results.
arXiv Detail & Related papers (2024-01-10T08:28:56Z) - Cache & Distil: Optimising API Calls to Large Language Models [82.32065572907125]
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries.
To curtail the frequency of these calls, one can employ a smaller language model -- a student.
This student gradually gains proficiency in independently handling an increasing number of user requests.
arXiv Detail & Related papers (2023-10-20T15:01:55Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x improvement in sample efficiency on average compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z) - Attentional Graph Convolutional Networks for Knowledge Concept Recommendation in MOOCs in a Heterogeneous View [72.98388321383989]
Massive open online courses (MOOCs) provide a large-scale and open-access learning opportunity for students.
To attract students' interest, MOOC providers apply recommendation systems to suggest courses to students.
We propose an end-to-end graph neural network-based approach called Attentional Heterogeneous Graph Convolutional Deep Knowledge Recommender (ACKRec) for knowledge concept recommendation in MOOCs.
arXiv Detail & Related papers (2020-06-23T18:28:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.