Generative AI trial for nonviolent communication mediation
- URL: http://arxiv.org/abs/2308.03326v1
- Date: Mon, 7 Aug 2023 06:19:29 GMT
- Title: Generative AI trial for nonviolent communication mediation
- Authors: Takeshi Kato
- Abstract summary: ChatGPT was used in place of a traditional certified trainer to test whether it can mediate (modify) input sentences.
Results indicate that generative AI has potential for this application, although not yet at a practical level.
It is hoped that the widespread use of NVC mediation using generative AI will lead to the early realization of a mixbiotic society.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aiming for a mixbiotic society that combines freedom and solidarity among
people with diverse values, I focused on nonviolent communication (NVC), which
enables compassionate giving in various situations of social division and
conflict, and tested a generative AI for this purpose. Specifically, ChatGPT was used in
place of the traditional certified trainer to test the possibility of mediating
(modifying) input sentences in four processes: observation, feelings, needs,
and requests. The results indicate that there is potential for the application
of generative AI, although not yet at a practical level. Suggested improvement
guidelines included adding model responses, relearning revised responses,
specifying appropriate terminology for each process, and re-asking for required
information. Generative AI will initially be useful to assist
certified trainers, to prepare for and review events and workshops, and in the
future to support consensus building and cooperative behavior in digital
democracy, platform cooperatives, and cyber-human social co-operating systems.
It is hoped that the widespread use of NVC mediation using generative AI will
lead to the early realization of a mixbiotic society.
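The abstract describes a simple pipeline: a conflict-laden input sentence is given to ChatGPT, which is asked to mediate (modify) it through the four NVC processes of observation, feelings, needs, and requests. Below is a minimal sketch of that setup, assuming the OpenAI Python client; the model name, prompt wording, and `mediate` helper are illustrative assumptions, since the paper does not publish its prompts or code.

```python
# Minimal sketch of LLM-based NVC mediation, assuming the OpenAI Python
# client (openai>=1.0). The model name and prompt wording are illustrative
# assumptions, not the paper's published setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NVC_SYSTEM_PROMPT = (
    "You are a nonviolent communication (NVC) mediator. Rewrite the user's "
    "sentence through four labeled processes: "
    "1) Observation: state only what happened, without evaluation; "
    "2) Feelings: name the speaker's emotions; "
    "3) Needs: name the universal needs behind those feelings; "
    "4) Request: make one concrete, doable, positively phrased request."
)

def mediate(sentence: str) -> str:
    """Mediate (modify) one input sentence through the four NVC processes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": NVC_SYSTEM_PROMPT},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(mediate("You never listen to me, and I'm sick of it!"))
```

The paper's improvement guidelines map naturally onto such a sketch: "adding model responses" would become few-shot examples in the system prompt, "specifying appropriate terminology for each process" would extend the process descriptions with NVC feeling and need vocabularies, and "re-asking for required information" would add a follow-up turn whenever one of the four processes comes back empty.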
Related papers
- An AI System Evaluation Framework for Advancing AI Safety: Terminology, Taxonomy, Lifecycle Mapping (arXiv, 2024-04-08)
  This paper proposes a framework for AI system evaluation comprising three components.
  The framework catalyses a deeper discourse on AI system evaluation beyond model-centric approaches.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits (arXiv, 2024-03-21)
  General-purpose AI appears to have lowered the barriers for the public to use AI and harness its power.
  The authors introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems (arXiv, 2023-11-13)
  The authors propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
  They illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review (arXiv, 2023-10-30)
  Work leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
  A human-centered perspective attempts to alleviate this concern by designing AI solutions that integrate seamlessly with existing processes.
- Training Socially Aligned Language Models on Simulated Social Interactions (arXiv, 2023-05-26)
  Social alignment in AI systems aims to ensure that these models behave according to established societal values.
  Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
  This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
- Fairness in AI and Its Long-Term Implications on Society (arXiv, 2023-04-16)
  The authors take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
  They discuss how biased models can lead to more negative real-world outcomes for certain groups.
  If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) (arXiv, 2022-01-26)
  Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
  It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
- Adversarial Attacks in Cooperative AI (arXiv, 2021-11-29)
  Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
  Recent work in adversarial machine learning shows that models can easily be deceived into making incorrect decisions.
  Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
- Envisioning Communities: A Participatory Approach Towards AI for Social Good (arXiv, 2021-05-04)
  The authors argue that AI for social good ought to be assessed by the communities that the AI system will impact.
  They show how the capabilities approach aligns with a participatory approach to the design and implementation of AI for social good research.
- Descriptive AI Ethics: Collecting and Understanding the Public Opinion (arXiv, 2021-01-15)
  This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other.
  The authors discuss its implications for bridging the gap between optimistic and pessimistic views of AI systems' deployment.
- Aligning AI With Shared Human Values (arXiv, 2020-08-05)
  The authors introduce the ETHICS dataset, a new benchmark spanning concepts in justice, well-being, duties, virtues, and commonsense morality.
  They find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
  This work shows that progress can be made on machine ethics today and provides a stepping stone toward AI that is aligned with human values.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.