Teaching Fairness, Accountability, Confidentiality, and Transparency in
Artificial Intelligence through the Lens of Reproducibility
- URL: http://arxiv.org/abs/2111.00826v2
- Date: Tue, 2 Nov 2021 13:06:21 GMT
- Title: Teaching Fairness, Accountability, Confidentiality, and Transparency in
Artificial Intelligence through the Lens of Reproducibility
- Authors: Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de
Rijke
- Abstract summary: We explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam.
The focal point of the course is a group project in which students reproduce existing FACT-AI algorithms from top AI conferences and write a report about their experiences.
- Score: 38.87910190291545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we explain the setup for a technical, graduate-level course on
Fairness, Accountability, Confidentiality and Transparency in Artificial
Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI
concepts through the lens of reproducibility. The focal point of the course is
a group project based on reproducing existing FACT-AI algorithms from top AI
conferences, and writing a report about their experiences. In the first
iteration of the course, we created an open source repository with the code
implementations from the group projects. In the second iteration, we encouraged
students to submit their group projects to the Machine Learning Reproducibility
Challenge, which resulted in 9 reports from our course being accepted to the
challenge. We reflect on our experience teaching the course over two academic
years, where one year coincided with a global pandemic, and propose guidelines
for teaching FACT-AI through reproducibility in graduate-level AI programs. We
hope this can be a useful resource for instructors to set up similar courses at
their universities in the future.
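To give a flavour of the kind of FACT-AI concept that such reproduction projects typically revolve around, the following sketch computes two common group-fairness metrics (demographic parity difference and equal opportunity difference) for a binary classifier. This is a minimal, hypothetical illustration and is not code from the paper or its accompanying course repository.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive-prediction rates between groups 0 and 1.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    # Absolute difference in true-positive rates between groups 0 and 1.
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy data: predictions for eight individuals from two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))         # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.33
```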
Related papers
- O1 Replication Journey: A Strategic Progress Report -- Part 1 [52.062216849476776]
This paper introduces a pioneering approach to artificial intelligence research, embodied in our O1 Replication Journey.
Our methodology addresses critical challenges in modern AI research, including the insularity of prolonged team-based projects.
We propose the journey learning paradigm, which encourages models to learn not just shortcuts, but the complete exploration process.
arXiv Detail & Related papers (2024-10-08T15:13:01Z)
- Visions of a Discipline: Analyzing Introductory AI Courses on YouTube [11.209406323898019]
We analyze the 20 most watched introductory AI courses on YouTube.
We find that these courses do not meaningfully engage with the ethical or societal challenges of AI.
We recommend that introductory AI courses highlight the ethical challenges of AI to present a more balanced perspective.
arXiv Detail & Related papers (2024-05-31T01:48:42Z)
- Artificial Intelligence in Everyday Life 2.0: Educating University Students from Different Majors [8.282180585560928]
Misunderstandings regarding the capabilities, limitations, and associated advantages and disadvantages of AI systems are widespread.
In this experience report, we present an overview of an introductory course that we offered to students coming from different majors.
We discuss the assignments and quizzes of the course, which provided students with a firsthand experience of AI processes.
arXiv Detail & Related papers (2024-04-12T08:10:42Z)
- Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms [12.35885897302579]
We investigate the experiences of seven teachers following their implementation of modules from the MIT RAICA curriculum.
Our analysis suggests that the AI modules expanded our teachers' knowledge in the field.
Our teachers advocated for better external support when navigating technological resources.
arXiv Detail & Related papers (2023-12-08T05:36:16Z)
- ActiveAI: Introducing AI Literacy for Middle School Learners with Goal-based Scenario Learning [0.0]
The ActiveAI project addresses key challenges in AI education for grades 7-9 students.
The app incorporates a variety of learner inputs like sliders, steppers, and collectors to enhance understanding.
The project is currently in the implementation stage, leveraging the intelligent tutor design principles for app development.
arXiv Detail & Related papers (2023-08-21T11:43:43Z)
- Stimulating student engagement with an AI board game tournament [0.0]
We present a project-based and competition-based bachelor course that gives second-year students an introduction to search methods applied to board games.
In groups of two, students have to use network programming and AI methods to build an AI agent to compete in a board game tournament; Othello was this year's game.
arXiv Detail & Related papers (2023-04-22T11:22:00Z)
- An Experience Report of Executive-Level Artificial Intelligence Education in the United Arab Emirates [53.04281982845422]
We present an experience report of teaching an AI course to business executives in the United Arab Emirates (UAE).
Rather than focusing only on theoretical and technical aspects, we developed a course that teaches AI with a view to enabling students to understand how to incorporate it into existing business processes.
arXiv Detail & Related papers (2022-02-02T20:59:53Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback to 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on learning from human feedback.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL) by introducing techniques from the rapidly growing field of explainable AI (XAI) into an active learning setting.
Our study shows the benefits of AI explanations as an interface for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.