ChatGPT is all you need to decolonize sub-Saharan Vocational Education
- URL: http://arxiv.org/abs/2304.13728v1
- Date: Tue, 11 Apr 2023 23:50:37 GMT
- Title: ChatGPT is all you need to decolonize sub-Saharan Vocational Education
- Authors: Isidora Tourni, Georgios Grigorakis, Isidoros Marougkas, Konstantinos
Dafnis, Vassiliki Tassopoulou
- Abstract summary: This position paper makes the case for an educational policy framework that would succeed in this transformation.
We highlight substantial applications of Large Language Models, tailor-made to their respective cultural background.
We provide specific historical examples of diverse states successfully implementing such policies in the elementary steps of their socioeconomic transformation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advances of Generative AI models with interactive capabilities over the
past few years offer unique opportunities for socioeconomic mobility. Their
potential for scalability, accessibility, affordability, personalization, and
convenience presents a first-class opportunity for poverty-stricken countries to
adapt and modernize their educational order. As a result, this position paper
makes the case for an educational policy framework that would succeed in this
transformation by prioritizing vocational and technical training over academic
education in sub-Saharan African countries. We highlight substantial
applications of Large Language Models, tailor-made to their respective cultural
background(s) and needs, that would reinforce their systemic decolonization.
Lastly, we provide specific historical examples of diverse states successfully
implementing such policies in the elementary steps of their socioeconomic
transformation, in order to corroborate our proposal to sub-Saharan African
countries to follow their lead.
Related papers
- AI-powered Digital Framework for Personalized Economical Quality Learning at Scale
This paper proposes an AI-powered digital learning framework grounded in Deep Learning (DL) theory.
We outline eight key principles derived from learning science and AI that are essential for implementing DL-based Digital Learning Environments.
arXiv Detail & Related papers (2024-11-20T17:44:29Z)
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
arXiv Detail & Related papers (2024-10-15T00:41:18Z)
- From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents
MAIC (Massive AI-empowered Course) is a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom.
We conduct preliminary experiments at Tsinghua University, one of China's leading universities.
arXiv Detail & Related papers (2024-09-05T13:22:51Z)
- Generative AI and Digital Neocolonialism in Global Education: Towards an Equitable Framework
This paper critically discusses how generative artificial intelligence (GenAI) might impose Western ideologies on non-Western societies.
It suggests strategies for local and global stakeholders to mitigate these effects.
arXiv Detail & Related papers (2024-06-05T05:43:55Z)
- Course-Skill Atlas: A national longitudinal dataset of skills taught in U.S. higher education curricula
Course-Skill Atlas is a longitudinal dataset of skills inferred from over three million course syllabi taught at nearly three thousand U.S. higher education institutions.
Our dataset offers a large-scale representation of college education's role in preparing students for the labor market.
arXiv Detail & Related papers (2024-04-19T20:14:15Z)
- Scalable Language Model with Generalized Continual Learning
Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z)
- Large Language Models for Education: A Survey and Outlook
We systematically review the technological advancements in each perspective, organize related datasets and benchmarks, and identify the risks and challenges associated with deploying LLMs in education.
Our survey aims to provide a comprehensive technological picture for educators, researchers, and policymakers to harness the power of LLMs to revolutionize educational practices and foster a more effective personalized learning environment.
arXiv Detail & Related papers (2024-03-26T21:04:29Z)
- The challenges of massification in higher education in Africa
The number of students in large groups (over 3,000 in some courses) raises issues of training quality and equity.
Access to this type of training requires special training conditions and infrastructures that are not always available in developing countries.
This work can be transposed to other African countries with similar needs and will open the way to a solution analogous to intelligent classrooms for face-to-face courses.
arXiv Detail & Related papers (2024-02-20T10:42:19Z)
- Survey of Social Bias in Vision-Language Models
This survey aims to provide researchers with high-level insight into the similarities and differences among social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community by fostering the development of fairer, less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z)
- On the Opportunities and Risks of Foundation Models
We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration.
arXiv Detail & Related papers (2021-08-16T17:50:08Z)
- Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense
We focus on the Social IQA dataset, a task requiring social and emotional commonsense reasoning.
We propose several architecture variations and extensions, as well as leveraging external commonsense corpora.
Our proposed system achieves results competitive with the top-ranking models on the leaderboard.
arXiv Detail & Related papers (2021-05-12T19:18:02Z)
- Allocating Opportunities in a Dynamic Model of Intergenerational Mobility
We develop a model for allocating opportunities in a society that exhibits bottlenecks in mobility.
We show how optimal allocations in our model arise as solutions to continuous optimization problems over multiple generations.
We characterize how the structure of the model can lead to either temporary or persistent affirmative action.
arXiv Detail & Related papers (2021-01-21T05:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.