Algorithmic Collective Action with Two Collectives
- URL: http://arxiv.org/abs/2505.00195v1
- Date: Wed, 30 Apr 2025 21:39:06 GMT
- Title: Algorithmic Collective Action with Two Collectives
- Authors: Aditya Karan, Nicholas Vincent, Karrie Karahalios, Hari Sundaram
- Abstract summary: We introduce a first-of-its-kind framework for studying collective action with two or more collectives. We examine how differing objectives, strategies, sizes, and homogeneity can impact a collective's efficacy.
- Score: 18.045224609703897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As data-dependent algorithmic systems have become influential in more domains of life, the need for individuals to promote their own interests and hold algorithms accountable has grown. To have meaningful influence, individuals must band together to engage in collective action. Groups that engage in such algorithmic collective action are likely to vary in size, membership characteristics, and, crucially, objectives. In this work, we introduce a first-of-its-kind framework for studying collective action with two or more collectives that behave strategically to manipulate data-driven systems. With more than one collective acting on a system, unexpected interactions may occur. We use this framework to conduct experiments with language model-based classifiers and recommender systems in which two collectives each attempt to achieve their own objectives. We examine how differing objectives, strategies, sizes, and homogeneity can impact a collective's efficacy. We find that the unintentional interactions between collectives can be quite significant: a collective acting in isolation may be able to achieve its objective (e.g., improving classification outcomes for its members or promoting a particular item), but when a second collective acts simultaneously, the efficacy of the first group drops by as much as $75\%$. In the recommender system context, neither fully heterogeneous nor fully homogeneous collectives stand out as most efficacious, and heterogeneity's impact is secondary to collective size. Our results signal the need for more transparency in both the underlying algorithmic models and the different behaviors individuals or collectives may adopt on these systems. This approach also allows collectives to hold algorithmic system developers accountable and provides a framework for people to actively use their own data to promote their own interests.
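The crowding-out interaction described in the abstract can be illustrated with a toy popularity-based recommender. This is an illustrative sketch only, not the paper's experimental setup; the item names, collective sizes, and top-1 recommendation rule are all hypothetical:

```python
from collections import Counter

def recommend_top_k(interactions, k=1):
    """A stand-in popularity recommender: return the k most-interacted items."""
    return [item for item, _ in Counter(interactions).most_common(k)]

# Hypothetical organic traffic: "popular" leads with 40 interactions.
organic = ["popular"] * 40 + ["niche"] * 10

# Collective A (size 50) promotes "a_target"; collective B (size 60) promotes "b_target".
collective_a = ["a_target"] * 50
collective_b = ["b_target"] * 60

# Acting alone, A captures the single recommendation slot ...
print(recommend_top_k(organic + collective_a))                  # ['a_target']
# ... but when B acts at the same time, A is crowded out entirely.
print(recommend_top_k(organic + collective_a + collective_b))   # ['b_target']
```

Because the recommendation slot is scarce, collective B's larger campaign wipes out A's efficacy without either collective targeting the other, which is the kind of unintentional interaction the paper quantifies.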
Related papers
- Statistical Collusion by Collectives on Learning Platforms [49.1574468325115]
Collectives may seek to influence platforms to align with their own interests. It is essential to understand the computations that collectives must perform to impact platforms in this way. We develop a framework that provides a theoretical and algorithmic treatment of these issues.
arXiv Detail & Related papers (2025-02-07T12:36:23Z)
- Capability-Aware Shared Hypernetworks for Flexible Heterogeneous Multi-Robot Coordination
We propose Capability-Aware Shared Hypernetworks (CASH) to enable a single architecture to dynamically adapt to each robot and the current context. CASH encodes shared decision-making strategies that can be adapted to each robot based on local observations and the robots' individual and collective capabilities.
arXiv Detail & Related papers (2025-01-10T15:39:39Z)
- Equitable Federated Learning with Activation Clustering [5.116582735311639]
Federated learning is a prominent distributed learning paradigm that incorporates collaboration among diverse clients.
We propose an equitable clustering-based framework where the clients are categorized/clustered based on how similar they are to each other.
arXiv Detail & Related papers (2024-10-24T23:36:39Z)
- Data Similarity-Based One-Shot Clustering for Multi-Task Hierarchical Federated Learning [8.37314799155978]
We propose a one-shot clustering algorithm that can effectively identify and group users based on their data similarity.
Our proposed algorithm not only enhances the clustering process, but also overcomes challenges related to privacy concerns, communication overhead, and the need for prior knowledge about learning models or loss function behaviors.
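A minimal sketch of similarity-based one-shot grouping in plain Python. The client profiles and the cosine-with-threshold rule are hypothetical stand-ins; the paper's actual algorithm differs in its similarity measure and its privacy machinery:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def one_shot_cluster(profiles, threshold=0.9):
    """Single pass over clients: join the first cluster whose representative
    is similar enough, otherwise open a new cluster."""
    clusters = []  # list of (representative_profile, member_ids)
    for cid, profile in profiles.items():
        for representative, members in clusters:
            if cosine(profile, representative) >= threshold:
                members.append(cid)
                break
        else:
            clusters.append((profile, [cid]))
    return [members for _, members in clusters]

# Hypothetical per-client label distributions used as similarity profiles.
profiles = {
    "c1": [0.90, 0.10, 0.00],
    "c2": [0.85, 0.15, 0.00],
    "c3": [0.00, 0.10, 0.90],
    "c4": [0.05, 0.05, 0.90],
}
print(one_shot_cluster(profiles))  # [['c1', 'c2'], ['c3', 'c4']]
```

A single pass suffices because each client is compared only against existing cluster representatives, which is what makes the scheme "one-shot" in communication.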
arXiv Detail & Related papers (2024-10-03T17:51:21Z)
- Federated Two Stage Decoupling With Adaptive Personalization Layers [5.69361786082969]
Federated learning has gained significant attention due to its ability to enable distributed learning while maintaining privacy constraints.
However, it inherently suffers from significant learning degradation and slow convergence.
It is natural to employ the concept of clustering homogeneous clients into the same group, allowing only the model weights within each group to be aggregated.
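Aggregating model weights only within each group can be sketched as a clustered variant of federated averaging. The weight vectors and cluster assignments below are hypothetical toy values:

```python
def fedavg(weight_vectors):
    """Element-wise average of model weights, represented as plain lists."""
    n = len(weight_vectors)
    return [sum(ws) / n for ws in zip(*weight_vectors)]

def clustered_aggregate(client_weights, clusters):
    """Aggregate model weights only within each cluster of clients."""
    return {cluster_id: fedavg([client_weights[c] for c in members])
            for cluster_id, members in enumerate(clusters)}

# Hypothetical two-parameter models from three clients.
client_weights = {"c1": [1.0, 2.0], "c2": [3.0, 4.0], "c3": [10.0, 20.0]}
clusters = [["c1", "c2"], ["c3"]]  # c3's model never mixes with c1/c2's
print(clustered_aggregate(client_weights, clusters))
# {0: [2.0, 3.0], 1: [10.0, 20.0]}
```

Keeping the dissimilar client c3 in its own cluster prevents its weights from dragging the c1/c2 average away from their shared optimum, which is the motivation for clustering homogeneous clients together.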
arXiv Detail & Related papers (2023-08-30T07:46:32Z)
- Beyond Submodularity: A Unified Framework of Randomized Set Selection with Group Fairness Constraints [19.29174615532181]
We introduce a unified framework for randomized subset selection that incorporates group fairness constraints.
Our problem involves a global utility function and a set of group utility functions for each group.
Our aim is to generate a distribution across feasible subsets, specifying the selection probability of each feasible set.
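The "distribution across feasible subsets" idea can be made concrete with a tiny sketch. The instance is hypothetical, and the distribution here is simply uniform; the paper's framework instead optimizes this distribution against the global and group utility functions:

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical instance: four items in two groups, exactly k = 2 selection slots.
items = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}  # item -> group
k = 2

# One feasible distribution over size-k subsets: the uniform distribution.
feasible = list(combinations(items, k))
p = Fraction(1, len(feasible))

# Expected number of slots each group receives under this distribution.
expected = {"A": Fraction(0), "B": Fraction(0)}
for subset in feasible:
    for item in subset:
        expected[items[item]] += p

print({g: float(v) for g, v in expected.items()})  # {'A': 1.0, 'B': 1.0}
```

By symmetry the uniform distribution gives each group an expected 1.0 of the 2 slots; a group-fairness constraint would be a lower bound on exactly these expectations.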
arXiv Detail & Related papers (2023-04-13T15:02:37Z)
- Joint Training of Deep Ensembles Fails Due to Learner Collusion [61.557412796012535]
Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model.
Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance.
We observe that directly minimizing the joint loss of the ensemble appears to rarely be applied in practice.
arXiv Detail & Related papers (2023-01-26T18:58:07Z)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- Observing a group to infer individual characteristics [1.0152838128195465]
We propose a new observer algorithm that infers, based only on observed movement information, how the local neighborhood aids or hinders agent movement.
Unlike a traditional supervised learning approach, this algorithm is based on physical insights and scaling arguments, and does not rely on training-data.
Data-agnostic approaches like this have relevance to a large class of real-world problems where clean, labeled data is difficult to obtain.
arXiv Detail & Related papers (2021-10-12T09:59:54Z)
- Group Collaborative Learning for Co-Salient Object Detection [152.67721740487937]
We present a novel group collaborative learning framework (GCoNet) capable of detecting co-salient objects in real time (16 ms).
Extensive experiments on three challenging benchmarks, i.e., CoCA, CoSOD3k, and Cosal2015, demonstrate that our simple GCoNet outperforms 10 cutting-edge models and achieves a new state of the art.
arXiv Detail & Related papers (2021-03-15T13:16:03Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking: "What is the expected utility of each agent when considering only a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.