Collaborative Active Learning in Conditional Trust Environment
- URL: http://arxiv.org/abs/2403.18436v1
- Date: Wed, 27 Mar 2024 10:40:27 GMT
- Title: Collaborative Active Learning in Conditional Trust Environment
- Authors: Zan-Kai Chong, Hiroyuki Ohsaki, Bryan Ng
- Abstract summary: We investigate collaborative active learning, a paradigm in which multiple collaborators explore a new domain by leveraging their combined machine learning capabilities without disclosing their existing data and models.
This collaboration offers several advantages: (a) it addresses privacy and security concerns by eliminating the need for direct model and data disclosure; (b) it enables the use of different data sources and insights without direct data exchange; and (c) it promotes cost-effectiveness and resource efficiency through shared labeling costs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we investigate collaborative active learning, a paradigm in which multiple collaborators explore a new domain by leveraging their combined machine learning capabilities without disclosing their existing data and models. Instead, the collaborators share prediction results from the new domain and newly acquired labels. This collaboration offers several advantages: (a) it addresses privacy and security concerns by eliminating the need for direct model and data disclosure; (b) it enables the use of different data sources and insights without direct data exchange; and (c) it promotes cost-effectiveness and resource efficiency through shared labeling costs. To realize these benefits, we introduce a collaborative active learning framework designed to fulfill the aforementioned objectives. We validate the effectiveness of the proposed framework through simulations. The results demonstrate that collaboration leads to higher AUC scores compared to independent efforts, highlighting the framework's ability to overcome the limitations of individual models. These findings support the use of collaborative approaches in active learning, emphasizing their potential to enhance outcomes through collective expertise and shared resources. Our work provides a foundation for further research on collaborative active learning and its practical applications in various domains where data privacy, cost efficiency, and model performance are critical considerations.
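As a rough sketch of the protocol described above, the following toy simulation keeps each collaborator's data and model private and shares only predictions on the new domain plus newly acquired labels. The nearest-centroid learner, the disagreement-based query rule, and all names here are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

class Collaborator:
    """Holds a private labeled set and model; exposes only predictions."""

    def __init__(self, X, y):
        self._X = [list(map(float, x)) for x in X]  # private data, never shared
        self._y = list(y)
        self._fit()

    def _fit(self):
        X, y = np.asarray(self._X), np.asarray(self._y)
        # toy private "model": one centroid per class (a stand-in learner)
        self._classes = np.unique(y)
        self._centroids = np.stack([X[y == c].mean(axis=0) for c in self._classes])

    def predict_proba(self, X):
        # softmax over negative distances to the class centroids
        d = np.linalg.norm(X[:, None, :] - self._centroids[None, :, :], axis=2)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)

    def add_label(self, x, y):
        # only the queried point and its label cross the trust boundary
        self._X.append(list(map(float, x)))
        self._y.append(y)
        self._fit()

def select_query(collaborators, pool):
    """Choose the pool index where the shared predictions disagree most."""
    probs = np.stack([c.predict_proba(pool) for c in collaborators])  # (K, N, C)
    disagreement = probs.var(axis=0).sum(axis=1)                      # (N,)
    return int(disagreement.argmax())

rng = np.random.default_rng(0)
# new domain: two well-separated Gaussian classes in 2-D
X_all = np.vstack([rng.normal([-2.0, 0.0], 0.3, size=(30, 2)),
                   rng.normal([2.0, 0.0], 0.3, size=(30, 2))])
y_all = np.array([0] * 30 + [1] * 30)
perm = rng.permutation(60)
pool, pool_labels = X_all[perm[:40]].copy(), y_all[perm[:40]].copy()
test_X, test_y = X_all[perm[40:]], y_all[perm[40:]]

# each collaborator starts from a few private labels of its own
collabs = [Collaborator([[-2.0, 0.1], [2.0, -0.1]], [0, 1]),
           Collaborator([[-1.9, -0.2], [2.1, 0.2]], [0, 1])]

for _ in range(5):                       # five jointly funded labeling rounds
    q = select_query(collabs, pool)
    x_q, y_q = pool[q], int(pool_labels[q])  # oracle reveals the label once
    for c in collabs:
        c.add_label(x_q, y_q)            # labeling cost is shared
    pool = np.delete(pool, q, axis=0)
    pool_labels = np.delete(pool_labels, q)

# ensemble prediction on held-out points of the new domain
avg = np.mean([c.predict_proba(test_X) for c in collabs], axis=0)
acc = float((avg.argmax(axis=1) == test_y).mean())
```

On data this cleanly separated the ensemble classifies the held-out points essentially perfectly; the point of the sketch is the information flow, not the accuracy number.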
Related papers
- On the effects of similarity metrics in decentralized deep learning under distributional shift [2.6763602268733626]
Decentralized Learning (DL) enables privacy-preserving collaboration among organizations or users.
In this paper, we investigate the effectiveness of various similarity metrics in DL for identifying peers for model merging.
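As an illustration of similarity-based peer selection, a client can rank peers by how closely their predictions on a shared probe batch match its own; cosine similarity is only one of the metrics such work compares, and all names here are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened prediction arrays."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_merge_peer(own_preds, peer_preds):
    """Index of the peer whose probe-batch predictions align best with ours,
    i.e. the most promising candidate for model merging."""
    scores = [cosine_similarity(own_preds, p) for p in peer_preds]
    return int(np.argmax(scores))

own = np.array([[0.9, 0.1], [0.2, 0.8]])        # our softmax outputs
peers = [np.array([[0.8, 0.2], [0.1, 0.9]]),    # similarly distributed peer
         np.array([[0.1, 0.9], [0.9, 0.1]])]    # peer under distributional shift
best = pick_merge_peer(own, peers)              # selects the similar peer
```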
arXiv Detail & Related papers (2024-09-16T20:48:16Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
arXiv Detail & Related papers (2024-02-17T11:28:08Z)
- Sharing Knowledge in Multi-Task Deep Reinforcement Learning [57.38874587065694]
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.
We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks.
arXiv Detail & Related papers (2024-01-17T19:31:21Z)
- A Framework for Incentivized Collaborative Learning [15.44652093599549]
We propose ICL, a general framework for incentivized collaborative learning.
We show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandit.
arXiv Detail & Related papers (2023-05-26T16:00:59Z)
- Incentivizing Honesty among Competitors in Collaborative Learning and Optimization [5.4619385369457225]
Collaborative learning techniques have the potential to enable machine learning models that are superior to models trained on a single entity's data.
In many cases, potential participants in such collaborative schemes are competitors on a downstream task.
arXiv Detail & Related papers (2023-05-25T17:28:41Z)
- Exploring Interactions and Regulations in Collaborative Learning: An Interdisciplinary Multimodal Dataset [40.193998859310156]
This paper introduces a new multimodal dataset with cognitive and emotional triggers to explore how regulations affect interactions during the collaborative process.
A learning task with intentional interventions is designed and assigned to 15-year-old high school students.
Analysis of annotated emotions, body gestures, and their interactions indicates that our dataset with designed treatments could effectively examine moments of regulation in collaborative learning.
arXiv Detail & Related papers (2022-10-11T12:56:36Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Gradient Assisted Learning [34.24028216079336]
We propose a new method for various entities to assist each other in supervised learning tasks without sharing data, models, and objective functions.
In this framework, all participants collaboratively optimize the aggregate of local loss functions, and each participant autonomously builds its own model.
Experimental studies demonstrate that Gradient Assisted Learning can achieve performance close to centralized learning when all data, models, and objective functions are fully disclosed.
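The residual-fitting idea behind this kind of assistance can be sketched in a simplified single-output form; the linear least-squares learners, the two-party split, and the function names below are assumptions for illustration, not the method's exact formulation.

```python
import numpy as np

def assist_round(feature_blocks, y, pred, lr=0.5):
    """One round of simplified gradient-assisted learning: each participant
    fits its own model to the current residual (the negative gradient of the
    squared loss with respect to the predictions) using only its private
    feature block, and only the prediction update is shared."""
    for X in feature_blocks:
        residual = y - pred                           # what is left to explain
        w, *_ = np.linalg.lstsq(X, residual, rcond=None)
        pred = pred + lr * (X @ w)                    # shared aggregate update
    return pred

rng = np.random.default_rng(1)
Xa = rng.normal(size=(100, 1))        # participant A's private features
Xb = rng.normal(size=(100, 1))        # participant B's private features
y = 2.0 * Xa[:, 0] + 3.0 * Xb[:, 0]   # target depends on both parties

pred = np.zeros(100)
mse_before = float(np.mean((y - pred) ** 2))
for _ in range(20):                   # assistance rounds
    pred = assist_round([Xa, Xb], y, pred)
mse_after = float(np.mean((y - pred) ** 2))
```

Because neither party's features explain the target alone, the error only drops when both keep fitting the shared residual, which mirrors the "performance close to centralized learning" claim in spirit.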
arXiv Detail & Related papers (2021-06-02T19:12:03Z)
- Rebuilding Trust in Active Learning with Actionable Metrics [77.99796068970569]
Active Learning (AL) is an active domain of research, but it is seldom used in industry despite pressing needs.
This is in part due to a misalignment of objectives: research strives for the best results on selected datasets, while practitioners need methods that perform dependably on their own data.
We present various actionable metrics to help rebuild the trust of industrial practitioners in Active Learning.
arXiv Detail & Related papers (2020-12-18T09:34:59Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.