Incentivized Collaboration in Active Learning
- URL: http://arxiv.org/abs/2311.00260v1
- Date: Wed, 1 Nov 2023 03:17:39 GMT
- Title: Incentivized Collaboration in Active Learning
- Authors: Lee Cohen, Han Shao
- Abstract summary: In collaborative active learning, where multiple agents try to learn labels from a common hypothesis, we introduce an innovative framework for incentivized collaboration.
We focus on designing (strict) individually rational (IR) collaboration protocols, ensuring that agents cannot reduce their expected label complexity by acting individually.
- Score: 17.972077492741928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In collaborative active learning, where multiple agents try to learn labels
from a common hypothesis, we introduce an innovative framework for incentivized
collaboration. Here, rational agents aim to obtain labels for their data sets
while keeping label complexity at a minimum. We focus on designing (strict)
individually rational (IR) collaboration protocols, ensuring that agents cannot
reduce their expected label complexity by acting individually. We first show
that given any optimal active learning algorithm, the collaboration protocol
that runs the algorithm as is over the entire data is already IR. However,
computing the optimal algorithm is NP-hard. We therefore provide collaboration
protocols that achieve (strict) IR and are comparable with the best known
tractable approximation algorithm in terms of label complexity.
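The IR property above can be illustrated with a toy example. Below is a minimal sketch, not the paper's protocol: it assumes a one-dimensional threshold hypothesis class and a greedy binary-search active learner (both choices are illustrative assumptions), and it compares only the *total* number of label queries, a coarser proxy than the per-agent expected label complexity the paper analyzes. The point it conveys is that running one learner over the pooled data never needs more queries than each agent searching alone.

```python
def binary_search_labels(points, threshold):
    """Toy active learner for 1-D thresholds: binary-search the sorted
    unlabeled points, spending one label query per step, until every
    point's label is determined. Returns the number of queries used
    (a stand-in for label complexity)."""
    pts = sorted(points)
    lo, hi = 0, len(pts)  # invariant: the threshold lies in pts[lo:hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1                  # one label query for pts[mid]
        if pts[mid] >= threshold:     # pts[mid] is labeled positive
            hi = mid
        else:                         # pts[mid] is labeled negative
            lo = mid + 1
    return queries

# Two agents with private (hypothetical) datasets and a common target.
threshold = 0.5
agent_a = [i / 100 for i in range(0, 100, 3)]
agent_b = [i / 100 for i in range(1, 100, 7)]

# Acting individually: each agent runs the learner on its own data.
individual = (binary_search_labels(agent_a, threshold)
              + binary_search_labels(agent_b, threshold))

# Collaborating: the protocol runs the same learner once on the pool;
# both agents then read off their labels from the learned threshold.
collaborative = binary_search_labels(agent_a + agent_b, threshold)

print(individual, collaborative)
assert collaborative <= individual
```

In this toy setting the pooled search costs roughly log(n_a + n_b) queries versus log(n_a) + log(n_b) for separate runs, so no agent group pays more in total by collaborating. The paper's actual contribution is stronger and subtler: making the guarantee hold per agent, in expectation, and with tractable (approximation) algorithms rather than the NP-hard optimal one.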
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation [1.675857332621569]
We investigate semi-supervised adversarial training where labeled data is scarce.
We develop a semi-supervised adversarial training algorithm that combines the proposed regularization term with knowledge distillation.
Our proposed algorithm achieves state-of-the-art performance with significant margins compared to existing algorithms.
arXiv Detail & Related papers (2023-08-08T05:48:38Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- Communication-Efficient Collaborative Best Arm Identification [6.861971769602314]
We investigate top-$m$ arm identification, a basic problem in bandit theory, in a multi-agent learning model in which agents collaborate to learn an objective function.
We are interested in designing collaborative learning algorithms that achieve maximum speedup.
arXiv Detail & Related papers (2022-08-18T19:02:29Z)
- RACA: Relation-Aware Credit Assignment for Ad-Hoc Cooperation in Multi-Agent Deep Reinforcement Learning [55.55009081609396]
We propose a novel method, called Relation-Aware Credit Assignment (RACA), which achieves zero-shot generalization in ad-hoc cooperation scenarios.
RACA takes advantage of a graph-based relation encoder to encode the topological structure between agents.
Our method outperforms baseline methods on the StarCraftII micromanagement benchmark and ad-hoc cooperation scenarios.
arXiv Detail & Related papers (2022-06-02T03:39:27Z)
- Towards Model Agnostic Federated Learning Using Knowledge Distillation [9.947968358822951]
In this work, we initiate a theoretical study of model agnostic communication protocols.
We focus on the setting where the two agents are attempting to perform kernel regression using different kernels.
Our study yields a surprising result -- the most natural algorithm of using alternating knowledge distillation (AKD) imposes overly strong regularization.
arXiv Detail & Related papers (2021-10-28T15:27:51Z)
- Learning to Coordinate in Multi-Agent Systems: A Coordinated Actor-Critic Algorithm and Finite-Time Guarantees [43.10380224532313]
We study the emergence of coordinated behavior by autonomous agents using an actor-critic (AC) algorithm.
We propose and analyze a class of coordinated actor-critic algorithms (CAC) in which individually parametrized policies have a shared part and a personalized part.
This work provides the first finite-sample guarantee for a decentralized AC algorithm with partially personalized policies.
arXiv Detail & Related papers (2021-10-11T20:26:16Z)
- Interactive Learning from Activity Description [11.068923430996575]
We present a novel interactive learning protocol that enables training request-fulfilling agents by verbally describing their activities.
Our protocol gives rise to a new family of interactive learning algorithms that offer complementary advantages over traditional algorithms like imitation learning (IL) and reinforcement learning (RL).
We develop an algorithm that practically implements this protocol and employ it to train agents in two challenging request-fulfilling problems using purely language-description feedback.
arXiv Detail & Related papers (2021-02-13T22:51:11Z)
- UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning [53.73686229912562]
We propose a novel MARL approach called Universal Value Exploration (UneVEn).
UneVEn learns a set of related tasks simultaneously with a linear decomposition of universal successor features.
Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
arXiv Detail & Related papers (2020-10-06T19:08:47Z)
- Mining Implicit Entity Preference from User-Item Interaction Data for Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover implicit entity preferences of users, we design an elaborate collaborative learning algorithm based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.