A Framework for Responsible Development of Automated Student Feedback
with Generative AI
- URL: http://arxiv.org/abs/2308.15334v1
- Date: Tue, 29 Aug 2023 14:29:57 GMT
- Title: A Framework for Responsible Development of Automated Student Feedback
with Generative AI
- Authors: Euan D Lindsay, Aditya Johri, Johannes Bjerva
- Abstract summary: Recent advances in generative AI provide the opportunity to deliver repeatable, scalable and instant automatically generated feedback to students.
This article will outline the frontiers of automated feedback, identify the ethical issues involved in the provision of automated feedback and present a framework to assist academics in developing such systems responsibly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing rich feedback to students is essential for supporting student
learning. Recent advances in generative AI, particularly in large language
models (LLMs), make it possible to deliver repeatable, scalable and
instant automatically generated feedback to students, making abundant a
previously scarce and expensive learning resource. Such an approach is feasible
from a technical perspective due to these recent advances in Artificial
Intelligence (AI) and Natural Language Processing (NLP); while the potential
upside is a strong motivator, doing so introduces a range of potential ethical
issues that must be considered as we apply these technologies. The
attractiveness of AI systems is that they can effectively automate the most
mundane tasks; but this risks introducing a "tyranny of the majority", where
the needs of minorities in the long tail are overlooked because they are
difficult to automate.
Developing machine learning models that can generate valuable and authentic
feedback requires the input of human domain experts. The choices we make in
capturing this expertise -- whose, which, when, and how -- will have
significant consequences for the nature of the resulting feedback. How we
maintain our models will affect how that feedback remains relevant given
temporal changes in context, theory, and prior learning profiles of student
cohorts. These questions are important from an ethical perspective; but they
are also important from an operational perspective. Unless they can be
answered, our AI-generated feedback systems will lack the trust necessary for
them to be useful in the contemporary learning environment.
This article will outline the frontiers of automated feedback, identify the
ethical issues involved in the provision of automated feedback and present a
framework to assist academics in developing such systems responsibly.
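The operational concern above, that automating the common cases can leave the "long tail" of atypical students overlooked, can be made concrete in a small sketch. The snippet below is illustrative only; the function names, rubric, and confidence threshold are assumptions, not from the paper. It routes submissions that a simple automated rubric cannot cover confidently to a human expert rather than forcing an automated answer.

```python
# Illustrative sketch (not the paper's method): an automated feedback pipeline
# with an explicit escalation path to a human domain expert.
from dataclasses import dataclass


@dataclass
class FeedbackResult:
    text: str
    automated: bool  # False means a human expert must review this submission


def generate_feedback(submission: str, rubric: dict[str, str],
                      confidence_threshold: float = 0.5) -> FeedbackResult:
    """Match rubric keywords; defer to a human when coverage is too low."""
    hits = [advice for kw, advice in rubric.items() if kw in submission.lower()]
    coverage = len(hits) / max(len(rubric), 1)
    if coverage < confidence_threshold:
        # Long-tail case: do not force an automated answer.
        return FeedbackResult("Queued for human expert review.", automated=False)
    return FeedbackResult(" ".join(hits), automated=True)


rubric = {
    "thesis": "Clear thesis statement identified.",
    "evidence": "Good use of supporting evidence.",
}
# A typical submission is handled automatically; an atypical one is escalated.
typical = generate_feedback("My thesis is supported by evidence from...", rubric)
atypical = generate_feedback("An unconventional visual essay.", rubric)
```

In practice the automated branch would call an LLM rather than match keywords; the point of the sketch is the explicit escalation path, which keeps human domain expertise in the loop for exactly the cases that are hardest to automate.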
Related papers
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas, can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution [2.94944680995069]
ChatGPT's high performance on standardized academic tests has thrust the topic of artificial intelligence (AI) into the mainstream conversation about the future of education.
This research aims to investigate the potential impact of AI on education through review and analysis of the existing literature across three major axes: applications, advantages, and challenges.
arXiv Detail & Related papers (2023-05-12T08:22:54Z)
- AGI: Artificial General Intelligence for Education [41.45039606933712]
This position paper reviews artificial general intelligence (AGI)'s key concepts, capabilities, scope, and potential within future education.
It highlights that AGI can significantly improve intelligent tutoring systems, educational assessment, and evaluation procedures.
The paper emphasizes that AGI's capabilities extend to understanding human emotions and social interactions.
arXiv Detail & Related papers (2023-04-24T22:31:59Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Modelos dinâmicos aplicados à aprendizagem de valores em inteligência artificial (Dynamic models applied to value learning in artificial intelligence) [0.0]
Several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
Perhaps this difficulty comes from the way we are addressing the problem of expressing values using cognitive methods.
arXiv Detail & Related papers (2020-07-30T00:56:11Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.