Application of GPT Language Models for Innovation in Activities in University Teaching
- URL: http://arxiv.org/abs/2403.14694v1
- Date: Fri, 15 Mar 2024 14:31:52 GMT
- Title: Application of GPT Language Models for Innovation in Activities in University Teaching
- Authors: Manuel de Buenaga, Francisco Javier Bueno
- Abstract summary: GPT (Generative Pre-trained Transformer) language models are an artificial intelligence and natural language processing technology.
There is a growing interest in applying GPT language models to university teaching in various dimensions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The GPT (Generative Pre-trained Transformer) language models are an artificial intelligence and natural language processing technology that enables automatic text generation. There is growing interest in applying GPT language models to university teaching in various dimensions. From the perspective of innovation in student and teacher activities, they can provide support in understanding and generating content, problem-solving, personalization, and test correction, among others. From the dimension of internationalization, the misuse of these models is a global problem that requires universities in different geographical areas to adopt a series of common measures. In several countries, assessment tools have been reviewed to ensure that work is done by students and not by AI. To this end, we have conducted a detailed experiment in a representative Computer Science subject, Software Engineering, focused on evaluating the use of ChatGPT as an assistant in theory activities, exercises, and laboratory practices, and assessing its potential as a support tool for both students and teachers.
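The experiment centers on using ChatGPT as an assistant for theory activities, exercises, and laboratory practices. As a rough illustration of that workflow (not the authors' actual setup; the model name, prompt, and use of the openai Python client are assumptions), a hint-giving query for a Software Engineering exercise might look like:
```python
# Minimal sketch: querying a GPT model as a teaching assistant via the OpenAI
# Python client. Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

exercise = (
    "Draw up the functional requirements for a library loan management system "
    "and identify the main actors of its use-case diagram."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; the paper used ChatGPT
    messages=[
        {"role": "system",
         "content": "You are a Software Engineering teaching assistant. "
                    "Give hints and guiding questions, not complete solutions."},
        {"role": "user", "content": exercise},
    ],
)
print(response.choices[0].message.content)
```
Constraining the system prompt to hints rather than full solutions is one way a course might use such a tool while limiting the misuse the abstract warns about.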
Related papers
- Beyond the Hype: A Cautionary Tale of ChatGPT in the Programming Classroom [0.0]
The paper provides insights for academics who teach programming on creating more challenging exercises and on engaging responsibly with ChatGPT to promote classroom integrity.
We analyzed various practical programming examples from past IS exercises and compared them with memos created by tutors and lecturers in a university setting.
arXiv Detail & Related papers (2024-06-16T23:52:37Z)
- Enhancing Essay Scoring with Adversarial Weights Perturbation and Metric-specific AttentionPooling [18.182517741584707]
This study explores the application of BERT-related techniques to enhance the assessment of ELLs' writing proficiency.
To address the specific needs of ELLs, we propose the use of DeBERTa, a state-of-the-art neural language model.
arXiv Detail & Related papers (2024-01-06T06:05:12Z)
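For readers unfamiliar with the setup this entry describes, a minimal sketch of scoring an essay with DeBERTa through Hugging Face Transformers follows. The checkpoint name and the single-score regression head are illustrative assumptions, not the paper's pipeline, which adds adversarial weights perturbation and metric-specific attention pooling:
```python
# Minimal sketch: DeBERTa with a one-output regression head as an essay scorer.
# The head is untrained here; real use requires fine-tuning on scored essays.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base",
    num_labels=1,                  # one regression output = proficiency score
    problem_type="regression",
)

essay = "Technology have changed the way student learn in many ways..."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted proficiency score: {score:.2f}")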
- On the application of Large Language Models for language teaching and assessment technology [18.735612275207853]
We look at the potential for incorporating large language models in AI-driven language teaching and assessment systems.
We find that larger language models offer improvements over previous models in text generation.
For automated grading and grammatical error correction, tasks whose progress is measured on well-known benchmarks, early investigations indicate that large language models on their own do not improve on state-of-the-art results.
arXiv Detail & Related papers (2023-07-17T11:12:56Z)
- BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer [77.28871523946418]
BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University.
It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio.
arXiv Detail & Related papers (2023-07-01T15:10:01Z)
- Evaluating Language Models for Mathematics through Interactions [116.67206980096513]
We introduce CheckMate, a prototype platform for humans to interact with and evaluate large language models (LLMs).
We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics.
We derive a taxonomy of human behaviours and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness.
arXiv Detail & Related papers (2023-06-02T17:12:25Z)
- Generative Pre-trained Transformer: A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions [11.959434388955787]
The Generative Pre-trained Transformer (GPT) represents a notable breakthrough in the domain of natural language processing.
GPT is based on the transformer architecture, a deep neural network designed for natural language processing tasks.
arXiv Detail & Related papers (2023-05-11T19:20:38Z)
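Since the entry above notes that GPT builds on the transformer architecture, here is a minimal NumPy sketch of its core operation, masked scaled dot-product self-attention. The dimensions and random weights are illustrative; a real GPT adds multiple heads, learned embeddings, and feed-forward layers:
```python
# Minimal sketch of single-head, causally masked self-attention.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled similarity of token pairs
    # GPT is autoregressive: mask future positions so tokens attend only to the past.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ v                        # weighted mix of value vectors

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 16, 8, 4
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
print(out.shape)  # (4, 8)
```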
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as one of the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Our extensive experimental results show that ChatGPT performs worse than previous models across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- ChatGPT and a New Academic Reality: Artificial Intelligence-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing [6.109522330180625]
ChatGPT is a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests.
Potential ethical issues that could arise with the emergence of large language models like GPT-3 are discussed.
arXiv Detail & Related papers (2023-03-21T14:35:07Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- Solving Quantitative Reasoning Problems with Language Models [53.53969870599973]
We introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content.
The model achieves state-of-the-art performance on technical benchmarks without the use of external tools.
We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences.
arXiv Detail & Related papers (2022-06-29T18:54:49Z)
- Exploring Dimensionality Reduction Techniques in Multilingual Transformers [64.78260098263489]
This paper gives a comprehensive account of the impact of dimensionality reduction techniques on the performance of state-of-the-art multilingual Siamese Transformers.
It shows that it is possible to achieve an average reduction in the number of dimensions of $91.58\% \pm 2.59\%$ and $54.65\% \pm 32.20\%$, respectively.
arXiv Detail & Related papers (2022-04-18T17:20:55Z)
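As a toy illustration of the kind of study the last entry describes (the sentence-transformers checkpoint, the use of scikit-learn PCA, and the target dimensionality are assumptions; the paper's exact techniques and the percentages above are not reproduced here):
```python
# Minimal sketch: reducing the dimensionality of multilingual sentence embeddings
# and checking how relative similarities survive the reduction.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
sentences = [
    "Software engineering is taught with lab practices.",
    "La ingeniería del software se enseña con prácticas de laboratorio.",
    "The weather is nice today.",
]
emb = model.encode(sentences)        # e.g. shape (3, 384)

pca = PCA(n_components=2)            # drastic reduction, for illustration only
reduced = pca.fit_transform(emb)

print(cosine_similarity(emb)[0])     # similarities in the full space
print(cosine_similarity(reduced)[0]) # similarities after reduction
```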
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.