Words of Wisdom: Representational Harms in Learning From AI Communication
- URL: http://arxiv.org/abs/2111.08581v1
- Date: Tue, 16 Nov 2021 15:59:49 GMT
- Title: Words of Wisdom: Representational Harms in Learning From AI Communication
- Authors: Amanda Buddemeyer, Erin Walker, Malihe Alikhani
- Abstract summary: We contend that all language, including all AI communication, encodes information about the identity of the human or humans who contributed to crafting the language.
With AI communication, however, the user may index identity information that does not match the source.
This can lead to representational harms if language associated with one cultural group is presented as "standard" or "neutral".
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many educational technologies use artificial intelligence (AI) that presents
generated or produced language to the learner. We contend that all language,
including all AI communication, encodes information about the identity of the
human or humans who contributed to crafting the language. With AI
communication, however, the user may index identity information that does not
match the source. This can lead to representational harms if language
associated with one cultural group is presented as "standard" or "neutral", if
the language advantages one group over another, or if the language reinforces
negative stereotypes. In this work, we discuss a case study using a Visual
Question Generation (VQG) task involving gathering crowdsourced data from
targeted demographic groups. Generated questions will be presented to human
evaluators to understand how they index the identity behind the language,
whether and how they perceive any representational harms, and how they would
ideally address any such harms caused by AI communication. We reflect on the
educational applications of this work as well as the implications for equality,
diversity, and inclusion (EDI).
Related papers
- Harnessing the Power of Artificial Intelligence to Vitalize Endangered Indigenous Languages: Technologies and Experiences [31.62071644137294]
We discuss the decreasing diversity of languages in the world and how working with Indigenous languages poses unique ethical challenges for AI and NLP.
We report encouraging results in the development of high-quality machine learning translators for Indigenous languages.
We present prototypes we have built in projects done in 2023 and 2024 with Indigenous communities in Brazil, aimed at facilitating writing.
arXiv Detail & Related papers (2024-07-17T14:46:37Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- Standard Language Ideology in AI-Generated Language [1.2815904071470705]
We explore standard language ideology in language generated by large language models (LLMs).
We introduce the concept of standard AI-generated language ideology, the process by which AI-generated language regards Standard American English (SAE) as a linguistic default and reinforces a linguistic bias that SAE is the most "appropriate" language.
arXiv Detail & Related papers (2024-06-13T01:08:40Z)
- Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z)
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube [14.012087492118015]
Prominent ethical issues in high school AI education include data privacy, information leakage, abusive language, and fairness.
This paper describes technological components that were built to address ethical and trustworthy concerns in a multi-modal collaborative platform.
For data privacy, we want to ensure that the informed consent of children, parents, and teachers is at the center of any data that is managed.
arXiv Detail & Related papers (2024-01-30T16:33:21Z)
- Language-Driven Representation Learning for Robotics [115.93273609767145]
Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks.
We introduce a framework for language-driven representation learning from human videos and captions.
We find that Voltron's language-driven learning outperforms the prior state-of-the-art, especially on targeted problems requiring higher-level control.
arXiv Detail & Related papers (2023-02-24T17:29:31Z)
- Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models [70.82705830137708]
We introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL).
We utilize semi-supervised language labels, leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabelled demonstration data.
DIAL enables imitation learning policies to acquire new capabilities and generalize to 60 novel instructions unseen in the original dataset.
arXiv Detail & Related papers (2022-11-21T18:56:00Z)
- Human Heuristics for AI-Generated Language Are Flawed [8.465228064780744]
We study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI.
We experimentally demonstrate that these wordings make human judgment of AI-generated language predictable and manipulable.
We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI.
arXiv Detail & Related papers (2022-06-15T03:18:56Z)
- Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with 1% word error rate with no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)