Towards an Operational Responsible AI Framework for Learning Analytics in Higher Education
- URL: http://arxiv.org/abs/2410.05827v1
- Date: Tue, 8 Oct 2024 08:55:24 GMT
- Title: Towards an Operational Responsible AI Framework for Learning Analytics in Higher Education
- Authors: Alba Morales Tirado, Paul Mulholland, Miriam Fernandez
- Abstract summary: We map 11 established Responsible AI frameworks, including those by leading tech companies, to the context of LA in Higher Education.
This led to the identification of seven key principles such as transparency, fairness, and accountability.
We present a novel framework that offers practical guidance to HE institutions and is designed to evolve with community input.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universities are increasingly adopting data-driven strategies to enhance student success, with AI applications like Learning Analytics (LA) and Predictive Learning Analytics (PLA) playing a key role in identifying at-risk students, personalising learning, supporting teachers, and guiding educational decision-making. However, concerns are rising about potential harms these systems may pose, such as algorithmic biases leading to unequal support for minority students. While many have explored the need for Responsible AI in LA, existing works often lack practical guidance for how institutions can operationalise these principles. In this paper, we propose a novel Responsible AI framework tailored specifically to LA in Higher Education (HE). We started by mapping 11 established Responsible AI frameworks, including those by leading tech companies, to the context of LA in HE. This led to the identification of seven key principles such as transparency, fairness, and accountability. We then conducted a systematic review of the literature to understand how these principles have been applied in practice. Drawing from these findings, we present a novel framework that offers practical guidance to HE institutions and is designed to evolve with community input, ensuring its relevance as LA systems continue to develop.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities
Generative AI has drawn significant attention from stakeholders in higher education.
It simultaneously poses challenges to academic integrity and raises ethical issues.
Leading universities have already published guidelines on Generative AI.
This study focuses on strategies for responsible AI governance as demonstrated in these guidelines.
arXiv Detail & Related papers (2024-09-03T16:06:45Z) - Trustworthy AI in practice: an analysis of practitioners' needs and challenges
A plethora of frameworks and guidelines have appeared to support practitioners in implementing Trustworthy AI applications.
We study how AI practitioners view TAI principles, how they address them, and what support they would like to have.
We highlight recommendations to help AI practitioners develop Trustworthy AI applications.
arXiv Detail & Related papers (2024-05-15T13:02:46Z) - A University Framework for the Responsible use of Generative AI in Research
Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research.
We propose a framework to help institutions promote and facilitate the responsible use of generative AI.
arXiv Detail & Related papers (2024-04-30T04:00:15Z) - POLARIS: A framework to guide the development of Trustworthy AI systems
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Responsible AI Governance: A Systematic Literature Review
This paper aims to examine the existing literature on AI Governance.
The focus of this study is to analyse the literature to answer key questions: WHO is accountable for AI systems' governance, WHAT elements are being governed, WHEN governance occurs within the AI development life cycle, and HOW it is executed through various mechanisms like frameworks, tools, standards, policies, or models.
The findings of this study provide a foundation for future research and the development of comprehensive governance models that align with RAI principles.
arXiv Detail & Related papers (2023-12-18T05:22:36Z) - Towards Goal-oriented Intelligent Tutoring Systems in Online Education
We propose a new task, named Goal-oriented Intelligent Tutoring Systems (GITS).
GITS aims to enable a student's mastery of a designated concept by strategically planning a customized sequence of exercises and assessments.
We propose a novel graph-based reinforcement learning framework, named Planning-Assessment-Interaction (PAI).
arXiv Detail & Related papers (2023-12-03T12:37:16Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Distributed and Democratized Learning: Philosophy and Research Challenges
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - On the Morality of Artificial Intelligence
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.