Clones in the Machine: A Feminist Critique of Agency in Digital Cloning
- URL: http://arxiv.org/abs/2504.18807v1
- Date: Sat, 26 Apr 2025 05:24:35 GMT
- Title: Clones in the Machine: A Feminist Critique of Agency in Digital Cloning
- Authors: Siân Brooke
- Abstract summary: The paper argues that digital cloning oversimplifies human complexity and risks perpetuating systemic biases. It proposes decentralized data repositories and dynamic consent models, promoting ethical, context-aware AI practices.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper critiques digital cloning in academic research, highlighting how it exemplifies AI solutionism. Digital clones, which replicate user data to simulate behavior, are often seen as scalable tools for behavioral insights. However, this framing obscures ethical concerns around consent, agency, and representation. Drawing on feminist theories of agency, the paper argues that digital cloning oversimplifies human complexity and risks perpetuating systemic biases. To address these issues, it proposes decentralized data repositories and dynamic consent models, promoting ethical, context-aware AI practices that challenge the reductionist logic of AI solutionism.
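The "dynamic consent" the abstract proposes can be made concrete with a minimal sketch: consent is recorded per context, can be revoked at any time, and is checked at query time before a clone is allowed to simulate anything. All names here (`DynamicConsent`, `simulate_clone`, the context labels) are illustrative assumptions, not an interface from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicConsent:
    """Per-context consent that a data subject can grant or revoke over time."""
    grants: dict = field(default_factory=dict)  # context label -> currently granted?

    def grant(self, context: str) -> None:
        self.grants[context] = True

    def revoke(self, context: str) -> None:
        self.grants[context] = False

    def allows(self, context: str) -> bool:
        # Default-deny: a context that was never granted is not consented.
        return self.grants.get(context, False)


def simulate_clone(consent: DynamicConsent, context: str) -> str:
    """Run a clone query only if consent currently covers this context."""
    if not consent.allows(context):
        return f"refused: no active consent for '{context}'"
    return f"simulated response for '{context}'"
```

The design choice worth noting is that consent is evaluated at use time rather than at collection time, which is what distinguishes a dynamic consent model from a one-off data-release form.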
Related papers
- Identity Theft in AI Conference Peer Review [50.18240135317708]
We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations.
arXiv Detail & Related papers (2025-08-06T02:36:52Z) - The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking [0.0]
In the face of AI technology, individuals will increasingly rely on AI agents to navigate life's growing complexities. This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions, or having autonomy compromised.
arXiv Detail & Related papers (2025-04-24T19:34:43Z) - Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones [0.0]
Generative AI has enabled the creation of pre-mortem digital twins: AI-driven replicas that mimic the behavior, personality, and knowledge of living individuals. These digital doppelgangers serve various functions, including enhancing productivity, enabling creative collaboration, and preserving personal legacies. However, their development raises critical ethical, legal, and societal concerns.
arXiv Detail & Related papers (2025-02-28T17:18:38Z) - The Digital Ecosystem of Beliefs: does evolution favour AI over humans? [35.14620900061148]
Digital Ecosystem of Beliefs (Digico) is the first evolutionary framework for controlled experimentation with multi-population interactions in simulated social networks. The framework models a population of agents which change their messaging strategies due to evolutionary updates. Experiments show that when AIs have faster messaging, evolution, and more influence in the recommendation algorithm, they get 80% to 95% of the views.
arXiv Detail & Related papers (2024-12-19T03:48:23Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Silico-centric Theory of Mind [0.2209921757303168]
Theory of Mind (ToM) refers to the ability to attribute mental states, such as beliefs, desires, intentions, and knowledge, to oneself and others.
We investigate ToM in environments with multiple, distinct, independent AI agents.
arXiv Detail & Related papers (2024-03-14T11:22:51Z) - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation [40.763340315488406]
Large language model behavior is shaped by the language of those with whom they interact.
This capacity and their increasing prevalence online portend that they will intentionally or unintentionally "program" one another.
We discuss opportunities for AI cross-moderation and address ethical issues and design challenges associated with creating and maintaining free-formed AI collectives.
arXiv Detail & Related papers (2024-02-19T22:59:43Z) - Towards a Feminist Metaethics of AI [0.0]
I argue that these insufficiencies could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z) - AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to represent counterfactual distributions.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Exosoul: ethical profiling in the digital world [3.6245424131171813]
The project Exosoul aims at developing a personalized software exoskeleton which mediates actions in the digital world according to the moral preferences of the user.
The approach is hybrid, first based on the identification of profiles in a top-down manner, and then on the refinement of profiles by a personalized data-driven approach.
We consider the correlations between ethics positions (idealism and relativism), personality traits (honesty/humility, conscientiousness, Machiavellianism, and narcissism), and worldview (normativism).
arXiv Detail & Related papers (2022-03-30T10:54:00Z) - A Word on Machine Ethics: A Response to Jiang et al. (2021) [36.955224006838584]
We focus on a single case study of the recently proposed Delphi model and offer a critique of the project's proposed method of automating morality judgments.
We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology.
arXiv Detail & Related papers (2021-11-07T19:31:51Z) - Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.