The Essentials of AI for Life and Society: An AI Literacy Course for the University Community
- URL: http://arxiv.org/abs/2501.07392v1
- Date: Mon, 13 Jan 2025 15:08:32 GMT
- Title: The Essentials of AI for Life and Society: An AI Literacy Course for the University Community
- Authors: Joydeep Biswas, Don Fussell, Peter Stone, Kristin Patterson, Kristen Procko, Lea Sabatini, Zifan Xu
- Abstract summary: We describe the development of a one-credit course to promote AI literacy at The University of Texas at Austin.
We designed a 14-week seminar-style course that incorporated an interdisciplinary group of speakers who lectured on topics ranging from the fundamentals of AI to societal concerns including disinformation and employment.
- Abstract: We describe the development of a one-credit course to promote AI literacy at The University of Texas at Austin. In response to a call for the rapid deployment of a class to serve a broad audience in Fall of 2023, we designed a 14-week seminar-style course that incorporated an interdisciplinary group of speakers who lectured on topics ranging from the fundamentals of AI to societal concerns including disinformation and employment. University students, faculty, and staff, and even community members outside of the University, were invited to enroll in this online offering: The Essentials of AI for Life and Society. We collected feedback from course participants through weekly reflections and a final survey. Satisfyingly, we found that attendees reported gains in their AI literacy. We sought critical feedback through quantitative and qualitative analysis, which uncovered challenges in designing a course for this general audience. We used the course feedback to design a three-credit version of the course that is being offered in Fall of 2024. The lessons we learned and our plans for this new iteration may serve as a guide to instructors designing AI courses for a broad audience.
Related papers
- Semantic Web and Creative AI -- A Technical Report from ISWS 2023 (2025-01-30)
  The International Semantic Web Research School (ISWS) is a week-long intensive program designed to immerse participants in the field.
  This document reports a collaborative effort performed by ten teams of students, each guided by a senior researcher as their mentor.
  The 2023 edition of ISWS focuses on the intersection of Semantic Web technologies and Creative AI.
- Future of Information Retrieval Research in the Age of Generative AI (2024-12-03)
  In the fast-evolving field of information retrieval (IR), the integration of generative AI technologies such as large language models (LLMs) is transforming how users search for and interact with information.
  Recognizing this paradigm shift, a visioning workshop was held in July 2024 to discuss the future of IR in the age of generative AI.
  This report summarizes those discussions as potentially important research topics and provides a list of recommendations for academics, industry practitioners, institutions, evaluation campaigns, and funding agencies.
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants (2024-08-07)
  We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
  GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions.
  Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
- Visions of a Discipline: Analyzing Introductory AI Courses on YouTube (2024-05-31)
  We analyze the 20 most watched introductory AI courses on YouTube.
  Introductory AI courses do not meaningfully engage with ethical or societal challenges of AI.
  We recommend that introductory AI courses highlight the ethical challenges of AI to present a more balanced perspective.
- Perspectives on the State and Future of Deep Learning -- 2023 (2023-12-07)
  The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time.
  The plan is to host this survey periodically until the AI singularity paperclip-frenzy-driven doomsday, keeping an updated list of topical questions and interviewing new community members for each edition.
- Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence (2022-10-31)
  The report examines eight domains of typical urban settings on which AI is likely to have impact over the coming years.
  It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
  The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
- Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report (2022-10-27)
  "Gathering Strength, Gathering Storms" is the second report in the "One Hundred Year Study on Artificial Intelligence" project.
  It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research.
  The report concludes that AI has made a major leap from the lab to people's lives in recent years.
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" (2021-11-01)
  This paper aims to ground what we dub a "participatory turn" in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
  Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
- Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence through the Lens of Reproducibility (2021-11-01)
  We explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam.
  The focal point of the course is a group project in which students reproduce existing FACT-AI algorithms from top AI conferences and write a report about their experiences.
- Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool) (2021-03-20)
  The Alan Turing Institute and the Information Commissioner's Office have been working together to tackle the difficult issues surrounding explainable AI.
  The ultimate product of this joint endeavour, Explaining decisions made with AI, published in May 2020, is the most comprehensive practical guidance on AI explanation produced anywhere to date.
  The goal of the workbook is to summarize some of the main themes from Explaining decisions made with AI and then to provide the materials for a workshop exercise.