Advancing a Model of Students' Intentional Persistence in Machine
Learning and Artificial Intelligence
- URL: http://arxiv.org/abs/2311.10744v1
- Date: Mon, 30 Oct 2023 19:57:40 GMT
- Title: Advancing a Model of Students' Intentional Persistence in Machine
Learning and Artificial Intelligence
- Authors: Sharon Ferguson, Katherine Mao, James Magarian, Alison Olechowski
- Abstract summary: The persistence of diverse populations has been studied in engineering.
Short-term intentional persistence is associated with academic enrollment factors such as major and level of study.
Long-term intentional persistence is correlated with measures of professional role confidence.
- Score: 0.9217021281095907
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine Learning (ML) and Artificial Intelligence (AI) are powering the
applications we use, the decisions we make, and the decisions made about us. We
have seen numerous examples of non-equitable outcomes, from facial recognition
algorithms to recidivism algorithms, when they are designed without diversity
in mind. Thus, we must take action to promote diversity among those in this
field. A critical step in this work is understanding why some students who
choose to study ML/AI later leave the field. While the persistence of diverse
populations has been studied in engineering, there is a lack of research
investigating factors that influence persistence in ML/AI. In this work, we
present the advancement of a model of intentional persistence in ML/AI by
surveying students in ML/AI courses. We examine persistence across demographic
groups, such as gender, international student status, student loan status, and
visible minority status. We investigate independent variables that distinguish
ML/AI from other STEM fields, such as the varying emphasis on non-technical
skills, the ambiguous ethical implications of the work, and the highly
competitive and lucrative nature of the field. Our findings suggest that
short-term intentional persistence is associated with academic enrollment
factors such as major and level of study. Long-term intentional persistence is
correlated with measures of professional role confidence. Unique to our study,
we show that wanting your work to have a positive social benefit is a negative
predictor of long-term intentional persistence, and women generally care more
about this. We provide recommendations to educators to meaningfully discuss
ML/AI ethics in classes and encourage the development of interpersonal skills
to help increase diversity in the field.
Related papers
- Causal Feature Selection for Responsible Machine Learning [14.082894268627124]
The need for responsible machine learning has emerged, focusing on aligning ML models to ethical and social values.
This survey addresses four main issues: interpretability, fairness, adversarial generalization, and domain robustness.
arXiv Detail & Related papers (2024-02-05T03:20:28Z)
- "Just a little bit on the outside for the whole time": Social belonging confidence and the persistence of Machine Learning and Artificial Intelligence students [0.9217021281095907]
The growing field of machine learning (ML) and artificial intelligence (AI) presents a unique and unexplored case within persistence research.
We conduct an exploratory study to gain an initial understanding of persistence in this field.
We discuss differences in how students describe being motivated by social belonging and the importance of close mentorship.
arXiv Detail & Related papers (2023-10-30T19:59:38Z)
- Towards Fair and Explainable AI using a Human-Centered AI Approach [5.888646114353372]
We present 5 research projects that aim to enhance explainability and fairness in classification systems and word embeddings.
The first project explores the utility/downsides of introducing local model explanations as interfaces for machine teachers.
The second project presents D-BIAS, a causality-based human-in-the-loop visual tool for identifying and mitigating social biases in datasets.
The third project presents WordBias, a visual interactive tool that helps audit pre-trained static word embeddings for biases against groups.
The fourth project presents DramatVis Personae, a visual analytics tool that helps identify social
arXiv Detail & Related papers (2023-06-12T21:08:55Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
arXiv Detail & Related papers (2023-05-29T15:30:13Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Fairness and Bias in Robot Learning [7.517692820105885]
We present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges.
We propose a taxonomy for sources of bias and the resulting types of discrimination due to them.
We present early advances in the field by covering different fairness definitions, ethical and legal considerations, and methods for fair robot learning.
arXiv Detail & Related papers (2022-07-07T17:20:15Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Understanding the Usability Challenges of Machine Learning in High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses their weaknesses to meet their desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.