DVFL: A Vertical Federated Learning Method for Dynamic Data
- URL: http://arxiv.org/abs/2111.03341v1
- Date: Fri, 5 Nov 2021 09:26:09 GMT
- Title: DVFL: A Vertical Federated Learning Method for Dynamic Data
- Authors: Yuzhi Liang and Yixiang Chen
- Abstract summary: This paper studies vertical federated learning (VFL), which tackles the scenarios where collaborating organizations share the same set of users but disjoint features.
We propose a new vertical federated learning method, DVFL, which adapts to dynamic changes in the data distribution through knowledge distillation.
Our extensive experimental results show that DVFL not only achieves results close to existing VFL methods in static scenarios, but also adapts to changes in the data distribution in dynamic scenarios.
- Score: 2.406222636382325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning, which addresses the data-island problem by
connecting multiple computational devices into a decentralized system, has
become a promising paradigm for privacy-preserving machine learning. This paper
studies vertical federated learning (VFL), which tackles scenarios where
collaborating organizations share the same set of users but hold disjoint
features. Contemporary VFL methods mainly target static scenarios, in which the
active party and the passive party have all their data from the beginning and
the data never change. In real life, however, data often change dynamically. To
address this problem, we propose a new vertical federated learning method,
DVFL, which adapts to changes in the data distribution through knowledge
distillation. In DVFL, most computations are performed locally, which improves
both data security and model efficiency. Our extensive experimental results
show that DVFL not only achieves results close to existing VFL methods in
static scenarios, but also adapts to distribution changes in dynamic scenarios.
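The abstract names knowledge distillation as DVFL's adaptation mechanism but does not spell out the objective. As a rough illustration of the generic distillation idea it builds on, a temperature-softened KL loss between a teacher's and a student's outputs (a minimal sketch; the names, temperature, and logits are illustrative and not from the paper) could look like:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the student's predictions toward the
    teacher's, which is the core of knowledge distillation.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

# A student that matches the teacher exactly incurs zero loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))  # 0.0
```

In a dynamic-data setting, a model trained on the old distribution can act as the teacher, so that a student retrained on new data retains previously learned knowledge.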
Related papers
- Stalactite: Toolbox for Fast Prototyping of Vertical Federated Learning Systems [37.11550251825938]
We present Stalactite - an open-source framework for Vertical Federated Learning (VFL) systems.
VFL is a type of FL where data samples are divided by features across several data owners.
We demonstrate its use on real-world recommendation datasets.
arXiv Detail & Related papers (2024-09-23T21:29:03Z) - UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z) - MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by the remote devices that own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
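The FL side of this comparison, where "each data holder trains a model locally and releases it to a central server for aggregation", reduces in its simplest form to a sample-count-weighted parameter average. The following FedAvg-style sketch is a generic illustration of that aggregation step, not the paper's own method:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one flat parameter vector (list of floats) per client.
    client_sizes: local training-sample counts, used as aggregation weights.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            aggregated[i] += (n / total) * weights[i]
    return aggregated

# Two clients: 300 and 100 local samples; the larger client dominates.
print(fedavg([[1.0, 2.0], [5.0, 6.0]], [300, 100]))  # [2.0, 3.0]
```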
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
arXiv Detail & Related papers (2022-07-29T12:10:36Z) - FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated
Distillation [54.2658887073461]
Dealing with non-IID data is one of the most challenging problems for federated learning.
This paper studies the joint problem of non-IID and long-tailed data in federated learning and proposes a corresponding solution called Federated Ensemble Distillation with Imbalance (FEDIC).
FEDIC uses model ensemble to take advantage of the diversity of models trained on non-IID data.
arXiv Detail & Related papers (2022-04-30T06:17:36Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Concept drift detection and adaptation for federated and continual
learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
arXiv Detail & Related papers (2021-05-27T17:01:58Z) - Privacy-Preserving Self-Taught Federated Learning for Heterogeneous Data [6.545317180430584]
Federated learning (FL) was proposed to enable joint training of a deep learning model using the local data in each party without revealing the data to others.
In this work, we propose an FL method called self-taught federated learning to address the aforementioned issues.
In this method, only latent variables are transmitted to other parties for model training, while privacy is preserved by storing the data and parameters of activations, weights, and biases locally.
arXiv Detail & Related papers (2021-02-11T08:07:51Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
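The payoff scheme mentioned in the last entry rests on the classical Shapley value, which is computable exactly when the number of parties is small. The sketch below illustrates the classical definition only (the paper's federated variant is not reproduced here; the party names and the toy utility are illustrative):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value for each player in a small cooperative game.

    players: list of hashable party identifiers.
    utility: function mapping a frozenset of players to a real-valued
             coalition utility (e.g. model accuracy in FL data valuation).
    """
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight |S|! (n - |S| - 1)! / n! of this marginal contribution.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (utility(s | {p}) - utility(s))
        values[p] = total
    return values

# Toy additive game: each party's utility is just its data size, so the
# Shapley value of each party equals its own contribution.
sizes = {"A": 100, "B": 50, "C": 50}
print(shapley_values(list(sizes), lambda s: sum(sizes[p] for p in s)))
```

The exact computation enumerates all coalitions and thus scales exponentially in the number of parties, which is one reason FL-specific variants such as the federated Shapley value are needed.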
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.