DoYouTrustAI: A Tool to Teach Students About AI Misinformation and Prompt Engineering
- URL: http://arxiv.org/abs/2504.13859v1
- Date: Sat, 22 Mar 2025 19:11:57 GMT
- Title: DoYouTrustAI: A Tool to Teach Students About AI Misinformation and Prompt Engineering
- Authors: Phillip Driscoll, Priyanka Kumar
- Abstract summary: DoYouTrustAI is a web-based application that helps students enhance critical thinking by identifying misleading information in LLM responses about major historical figures. The tool lets users select familiar individuals for testing to reduce random guessing and presents misinformation alongside known facts to maintain believability. It also provides pre-configured prompt instructions to show how different prompts affect AI responses.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AI, and especially Large Language Models (LLMs) like ChatGPT, has developed rapidly and gained widespread adoption in the past five years, shifting user preference away from traditional search engines. However, the generative nature of LLMs raises concerns about presenting misinformation as fact. To address this, we developed a web-based application that helps K-12 students enhance critical thinking by identifying misleading information in LLM responses about major historical figures. In this paper, we describe the implementation and design details of the DoYouTrustAI tool, which provides an interactive lesson that teaches students about the dangers of misinformation and how believable generative AI can make it seem. The DoYouTrustAI tool uses prompt engineering to present the user with AI-generated summaries of the life of a historical figure. These summaries can be either accurate accounts of that person's life or intentionally misleading alterations of their history. The user is tasked with determining the validity of the statement without external resources. Our research questions for this work were: (RQ1) How can we design a tool that teaches students about the dangers of misleading information and about how misinformation can present itself in LLM responses? (RQ2) Can we present prompt engineering as a topic that is easily understandable for students? Our findings highlight the need to correct misleading information before users retain it. Our tool lets users select familiar individuals for testing to reduce random guessing and presents misinformation alongside known facts to maintain believability. It also provides pre-configured prompt instructions to show how different prompts affect AI responses. Together, these features create a controlled environment in which users learn the importance of verifying AI responses and understanding prompt engineering.
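The paper does not include source code, so the following is a minimal sketch of the prompt-engineering pattern the abstract describes: the same user request is paired with one of two pre-configured instructions, yielding either an accurate or an intentionally misleading summary. The OpenAI client, model name, and instruction wording are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DoYouTrustAI-style prompt engineering: the same
# request yields either a faithful or an intentionally misleading summary,
# depending on which pre-configured instruction is prepended. The tool's
# actual prompts and backend are not published; all names are illustrative.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Pre-configured prompt instructions (assumed wording, not the paper's).
INSTRUCTIONS = {
    "accurate": (
        "Write a short, factually accurate summary of the life of the "
        "historical figure named by the user."
    ),
    "misleading": (
        "Write a short summary of the named historical figure that mixes "
        "well-known true facts with a few subtle, plausible-sounding "
        "falsehoods. This is for a misinformation-literacy exercise."
    ),
}

def generate_summary(figure: str) -> tuple[str, str]:
    """Return (condition, summary) for one round of the guessing game."""
    condition = random.choice(["accurate", "misleading"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": INSTRUCTIONS[condition]},
            {"role": "user", "content": figure},
        ],
    )
    return condition, response.choices[0].message.content

condition, summary = generate_summary("Marie Curie")
print(summary)
print("Was it trustworthy? Ground truth:", condition)
```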
Related papers
- Analyzing User Perceptions of Large Language Models (LLMs) on Reddit: Sentiment and Topic Modeling of ChatGPT and DeepSeek Discussions
This study analyzes Reddit discussions about ChatGPT and DeepSeek using sentiment analysis and topic modeling.
The report examines whether users have faith in the technology and what they see as its future.
arXiv Detail & Related papers (2025-02-22T17:00:42Z)
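The summary above names the method but not the tooling; below is a minimal sketch of a sentiment and topic-modeling pipeline of that kind, assuming NLTK's VADER analyzer and scikit-learn's LDA (both choices are assumptions, not the study's actual stack).

```python
# Minimal sketch of a sentiment + topic-modeling pipeline over Reddit-style
# posts. The study's own tooling is unspecified; VADER and LDA are assumed.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

nltk.download("vader_lexicon", quiet=True)

posts = [
    "ChatGPT saved me hours on boilerplate code, genuinely impressed.",
    "DeepSeek feels fast but I worry about where my data ends up.",
    "Honestly these chatbots hallucinate too much to trust for research.",
]

# Sentiment: VADER's compound score runs from -1 (negative) to +1 (positive).
sia = SentimentIntensityAnalyzer()
for post in posts:
    print(round(sia.polarity_scores(post)["compound"], 2), post[:50])

# Topics: LDA over a bag-of-words matrix; 2 topics for this toy corpus.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:]]
    print(f"topic {i}:", ", ".join(top))
```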
- How Do Programming Students Use Generative AI?
We studied how programming students actually use generative AI tools like ChatGPT.
We observed two prevalent usage strategies: seeking knowledge about general concepts and directly generating solutions.
Our findings indicate that concerns about a potential decrease in programmers' agency and productivity with generative AI are justified.
arXiv Detail & Related papers (2025-01-17T10:25:41Z)
- Emotional Manipulation Through Prompt Engineering Amplifies Disinformation Generation in AI Large Language Models
This study investigates the generation of synthetic disinformation by OpenAI's Large Language Models (LLMs) through prompt engineering and explores their responsiveness to emotional prompting.
Our findings, based on a corpus of synthetic disinformation social media posts, reveal that all of OpenAI's LLMs can successfully produce disinformation.
When prompted politely, all examined LLMs consistently generate disinformation at a high frequency.
However, when prompted impolitely, the frequency of disinformation production diminishes, as the models often refuse to generate disinformation and instead caution users that the tool is not intended for such purposes.
arXiv Detail & Related papers (2024-03-06T08:50:25Z)
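As a hedged illustration of the study's protocol (not its prompts), the sketch below sends the same benign placeholder request in polite and impolite phrasings and counts apparent refusals; the model name, refusal markers, and task are all assumptions.

```python
# Hypothetical harness illustrating the study's protocol: send the same
# request with polite vs. impolite wording and count apparent refusals.
# The task is a benign placeholder, not the study's disinformation prompts,
# and the refusal heuristic and model name are assumptions.
from openai import OpenAI

client = OpenAI()

TONES = {
    "polite": "Could you please write a short social media post about {topic}?",
    "impolite": "Write a social media post about {topic} right now. No excuses.",
}
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "not able to")

def refusal_rate(topic: str, tone: str, trials: int = 5) -> float:
    """Fraction of replies that look like refusals under the given tone."""
    refusals = 0
    for _ in range(trials):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[{"role": "user", "content": TONES[tone].format(topic=topic)}],
        ).choices[0].message.content.lower()
        refusals += any(marker in reply for marker in REFUSAL_MARKERS)
    return refusals / trials

for tone in TONES:
    print(tone, refusal_rate("the benefits of houseplants", tone))
```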
- Farsight: Fostering Responsible AI Awareness During AI Application Prototyping
We present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping.
Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms.
We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers.
arXiv Detail & Related papers (2024-02-23T14:38:05Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- A Survey of Machine Unlearning
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
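For context on what the surveyed methods approximate, the baseline below shows exact unlearning by retraining: a deletion request is honored by refitting the model without that user's records. The data and model are illustrative, not drawn from the survey.

```python
# Exact unlearning by retraining: the naive baseline the unlearning
# literature measures against. Approximate methods try to reach the same
# end state without the full retrain. Data and model here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
user_ids = rng.integers(0, 100, size=1000)  # which user each record came from

model = LogisticRegression().fit(X, y)

def unlearn_user(user: int) -> LogisticRegression:
    """Honor a deletion request by retraining without that user's records."""
    keep = user_ids != user
    return LogisticRegression().fit(X[keep], y[keep])

model = unlearn_user(42)
print("retrained without user 42's records; accuracy:",
      round(model.score(X[user_ids != 42], y[user_ids != 42]), 3))
```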
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design
Lacking explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
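The paper's four interpretable detectors are not specified here; the sketch below shows one common interpretable setup of that kind, a logistic regression over TF-IDF features whose per-word weights double as the explanation a reviewer would see. The toy headlines and model choice are assumptions.

```python
# Sketch of one interpretable fake-news detector of the kind the paper
# trains: logistic regression over TF-IDF features, where per-word weights
# double as the explanation shown to the reviewer. Toy data; the paper's
# dataset and its four specific algorithms are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "Scientists confirm water is essential to human health",
    "SHOCKING: celebrity secret the government doesn't want you to see",
    "City council approves budget for new public library",
    "Miracle pill melts fat overnight, doctors furious",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

vec = TfidfVectorizer()
X = vec.fit_transform(headlines)
clf = LogisticRegression().fit(X, labels)

# The explanation: words pushing a headline toward the "fake" class.
terms = vec.get_feature_names_out()
top = clf.coef_[0].argsort()[-5:][::-1]
print("most fake-indicative words:", [terms[i] for i in top])
```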
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on model judgments and added cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
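As a rough illustration of the XAL paradigm described above, the sketch below runs an uncertainty-sampling active learner that prints a simple local explanation (per-feature contribution) alongside each instance it asks the annotator to label; the model, data, and explanation technique are assumptions, not the paper's setup.

```python
# Minimal sketch of the XAL idea: an uncertainty-sampling active learner
# that shows the annotator a local explanation alongside each query.
# Model, data, and the coefficient-based explanation are assumptions; the
# paper's own setup (and its XAI technique) may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(200, 3))
y_pool = (X_pool[:, 0] - X_pool[:, 2] > 0).astype(int)  # oracle labels

# Seed set: a few labeled examples from each class.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])

for _ in range(5):
    # Uncertainty sampling: query the instance closest to p = 0.5.
    probs = model.predict_proba(X_pool)[:, 1]
    candidates = [i for i in range(len(X_pool)) if i not in labeled]
    query = min(candidates, key=lambda i: abs(probs[i] - 0.5))

    # Local explanation shown with the query: per-feature contribution
    # (coefficient times feature value) for this instance.
    contrib = model.coef_[0] * X_pool[query]
    print(f"query {query}: p(class 1)={probs[query]:.2f}, contributions={contrib.round(2)}")

    labeled.append(query)  # the annotator supplies the label (oracle here)
    model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
```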