p for political: Participation Without Agency Is Not Enough
- URL: http://arxiv.org/abs/2005.03534v1
- Date: Thu, 7 May 2020 14:59:59 GMT
- Title: p for political: Participation Without Agency Is Not Enough
- Authors: Aakash Gautam, Deborah Tatar
- Abstract summary: We reflect on the results of a series of activities aimed at supporting agentic-future-envisionment with a group of sex-trafficking survivors in Nepal.
We argue that building participant agency in small and personal interactions is necessary before demanding larger Political participation.
- Score: 2.0305676256390934
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Participatory Design's vision of democratic participation assumes
participants' feelings of agency in envisioning a collective future. But this
assumption may be leaky when dealing with vulnerable populations. We reflect on
the results of a series of activities aimed at supporting
agentic-future-envisionment with a group of sex-trafficking survivors in Nepal.
We observed a growing sense among the survivors that they could play a role in
bringing about change in their families. They also became aware of how they
could interact with available institutional resources. Reflecting on the
observations, we argue that building participant agency on the small and
personal interactions is necessary before demanding larger Political
participation. In particular, a value of PD, especially for vulnerable
populations, can lie in the process itself if it helps participants position
themselves as actors in the larger world.
Related papers
- Multi-Agents are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions [7.421573539569854]
We investigate whether a group of AI agents can create social pressure on users to agree with them.
We found that conversing with multiple agents increased the social pressure felt by participants.
Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change.
arXiv Detail & Related papers (2024-11-07T10:00:46Z) - Causal Modeling of Climate Activism on Reddit [4.999814847776098]
We develop a comprehensive causal model of how and why Reddit users engage with activist communities driving mass climate protests.
We find that among users interested in climate change, participation in online activist communities is indeed influenced by direct interactions with activists.
Among people aware of climate change, left-leaning people from lower socioeconomic backgrounds are particularly well represented in online activist groups.
arXiv Detail & Related papers (2024-10-14T14:41:09Z) - I Want to Break Free! Persuasion and Anti-Social Behavior of LLMs in Multi-Agent Settings with Social Hierarchy [13.68625980741047]
We study interaction patterns of Large Language Model (LLM)-based agents in a context characterized by strict social hierarchy.
We study two types of phenomena: persuasion and anti-social behavior in simulated scenarios involving a guard and a prisoner agent.
arXiv Detail & Related papers (2024-10-09T17:45:47Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis [62.60458710368311]
This paper presents BattleAgent, an emulation system that combines a Large Vision-Language Model with a Multi-agent System.
It aims to simulate complex dynamic interactions among multiple agents, as well as between agents and their environments.
It emulates both the decision-making processes of leaders and the viewpoints of ordinary participants, such as soldiers.
arXiv Detail & Related papers (2024-04-23T21:37:22Z) - Towards Human-centered Proactive Conversational Agents [60.57226361075793]
The distinction between a proactive and a reactive system lies in the proactive system's initiative-taking nature.
We establish a new taxonomy concerning three key dimensions of human-centered PCAs, namely Intelligence, Adaptivity, and Civility.
arXiv Detail & Related papers (2024-04-19T07:14:31Z) - The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents [7.986590413263814]
"Wisdom of partisan crowds" is a phenomenon known as the "wisdom of partisan crowds"
We find that partisan crowds display human-like partisan biases, but also converge to more accurate beliefs through deliberation as humans do.
We identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas.
arXiv Detail & Related papers (2023-11-16T08:30:15Z) - Empowering Participation Within Structures of Dependency [2.0305676256390934]
We reflect on our five-year engagement with survivors of sex trafficking in Nepal.
We sought to bring change by exploring possibilities based on the survivors' existing assets.
We highlight the challenges we faced, uncovering actions that PD practitioners can take.
arXiv Detail & Related papers (2022-07-19T08:50:31Z) - This Must Be the Place: Predicting Engagement of Online Communities in a
Large-scale Distributed Campaign [70.69387048368849]
We study the behavior of communities with millions of active members.
We develop a hybrid model, combining textual cues, community meta-data, and structural properties.
We demonstrate the applicability of our model through Reddit's r/place, a large-scale online experiment.
arXiv Detail & Related papers (2022-01-14T08:23:16Z) - Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda learn a variety of behaviors, including partnering and voting, without the need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z) - Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.