Learning Across Decentralized Multi-Modal Remote Sensing Archives with
Federated Learning
- URL: http://arxiv.org/abs/2306.00792v1
- Date: Thu, 1 Jun 2023 15:22:53 GMT
- Title: Learning Across Decentralized Multi-Modal Remote Sensing Archives with
Federated Learning
- Authors: Barış Büyüktaş, Gencer Sumbul, Begüm Demir
- Abstract summary: We introduce a novel multi-modal FL framework that aims to learn from decentralized multi-modal RS image archives for RS image problems.
The proposed framework is made up of three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3) mutual information maximization (MIM)
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of federated learning (FL) methods, which aim to learn from
distributed databases (i.e., clients) without accessing data on clients, has
recently attracted great attention. Most of these methods assume that the
clients are associated with the same data modality. However, remote sensing
(RS) images in different clients can be associated with different data
modalities that can improve the classification performance when jointly used.
To address this problem, in this paper we introduce a novel multi-modal FL
framework that aims to learn from decentralized multi-modal RS image archives
for RS image classification problems. The proposed framework is made up of
three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3)
mutual information maximization (MIM). The MF module performs iterative model
averaging to learn without accessing data on clients in the case that clients
are associated with different data modalities. The FW module aligns the
representations learned among the different clients. The MIM module maximizes
the similarity of images from different modalities. Experimental results show
the effectiveness of the proposed framework compared to iterative model
averaging, which is a widely used algorithm in FL. The code of the proposed
framework is publicly available at https://git.tu-berlin.de/rsim/MM-FL.
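To make the three modules concrete, the following is a minimal sketch of how MF, FW, and MIM could fit together, written from the abstract's description alone. It is not the authors' implementation (the official code is at https://git.tu-berlin.de/rsim/MM-FL); the function names, the ZCA-style whitening, and the NT-Xent-style contrastive loss used as a stand-in for mutual information maximization are all assumptions.

```python
# Illustrative sketch only -- NOT the authors' MM-FL implementation.
# See https://git.tu-berlin.de/rsim/MM-FL for the official code.
import torch
import torch.nn.functional as F


def average_models(state_dicts):
    """MF: iterative model averaging -- after each communication round, the
    server averages the client models' parameters without seeing any data."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }


def whiten(features, eps=1e-5):
    """FW: feature whitening (assumed ZCA-style) -- zero-mean, decorrelated
    features so that representations learned on different clients align."""
    x = features - features.mean(dim=0, keepdim=True)
    cov = (x.T @ x) / (x.shape[0] - 1)
    eigvals, eigvecs = torch.linalg.eigh(cov + eps * torch.eye(cov.shape[0]))
    w = eigvecs @ torch.diag(eigvals.clamp_min(eps).rsqrt()) @ eigvecs.T
    return x @ w


def mim_loss(z_a, z_b, temperature=0.1):
    """MIM: maximize the similarity (a mutual-information proxy) between
    representations of the same scene in two modalities, here via an
    NT-Xent-style contrastive objective (an assumption, not the paper's loss)."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature       # pairwise cross-modal similarities
    targets = torch.arange(z_a.shape[0])     # matched pairs sit on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy round: two clients hold different modalities of the same scenes.
    torch.manual_seed(0)
    feats_sar = whiten(torch.randn(8, 16))   # e.g., a SAR client's features
    feats_rgb = whiten(torch.randn(8, 16))   # e.g., a multispectral client's
    print("MIM loss:", mim_loss(feats_sar, feats_rgb).item())

    client_models = [{"w": torch.randn(16, 4)} for _ in range(2)]
    print("averaged shape:", average_models(client_models)["w"].shape)
```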
Related papers
- A Multi-Modal Federated Learning Framework for Remote Sensing Image Classification [2.725507329935916]
This paper introduces a novel multi-modal FL framework for RS image classification problems.
The proposed framework comprises three modules: multi-modal fusion (MF), feature whitening (FW), and mutual information maximization (MIM).
arXiv Detail & Related papers (2025-03-13T11:20:15Z) - MIFNet: Learning Modality-Invariant Features for Generalizable Multimodal Image Matching [54.740256498985026]
Keypoint detection and description methods often struggle with multimodal data.
We propose a modality-invariant feature learning network (MIFNet) to compute modality-invariant features for keypoint descriptions in multimodal image matching.
arXiv Detail & Related papers (2025-01-20T06:56:30Z) - Multimodality Helps Few-Shot 3D Point Cloud Semantic Segmentation [61.91492500828508]
Few-shot 3D point cloud segmentation (FS-PCS) aims at generalizing models to segment novel categories with minimal support samples.
We introduce a cost-free multimodal FS-PCS setup, utilizing textual labels and the potentially available 2D image modality.
We propose a simple yet effective Test-time Adaptive Cross-modal Seg (TACC) technique to mitigate training bias.
arXiv Detail & Related papers (2024-10-29T19:28:41Z) - Leveraging Foundation Models for Multi-modal Federated Learning with Incomplete Modality [41.79433449873368]
We propose a novel multi-modal federated learning method, Federated Multi-modal contrastiVe training with Pre-trained completion (FedMVP).
FedMVP integrates large-scale pre-trained models to enhance the federated training.
We demonstrate that the model achieves superior performance on two real-world image-text classification datasets.
arXiv Detail & Related papers (2024-06-16T19:18:06Z) - Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level, and its personalized prediction sums the outputs of the models assigned to it across all levels (a sketch of this additive prediction appears after this list).
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
arXiv Detail & Related papers (2024-05-26T07:54:53Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH(Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over the baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Communication-Efficient Multimodal Federated Learning: Joint Modality
and Client Selection [14.261582708240407]
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients collect measurements across multiple modalities.
Key challenges to multimodal FL remain unaddressed, particularly in heterogeneous network settings.
We propose mmFedMC, a new FL methodology that can tackle the above-mentioned challenges in multimodal settings.
arXiv Detail & Related papers (2024-01-30T02:16:19Z) - Multimodal Federated Learning via Contrastive Representation Ensemble [17.08211358391482]
Federated learning (FL) serves as a privacy-conscious alternative to centralized machine learning.
Existing FL methods all rely on model aggregation at the single-modality level.
We propose Contrastive Representation Ensemble and Aggregation for Multimodal FL (CreamFL)
arXiv Detail & Related papers (2023-02-17T14:17:44Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Personalized Federated Learning with Multi-branch Architecture [0.0]
Federated learning (FL) enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other.
We propose a new PFL method (pFedMB) using a multi-branch architecture, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch (a sketch of such a multi-branch layer appears after this list).
We experimentally show that pFedMB performs better than the state-of-the-art PFL methods using the CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2022-11-15T06:30:57Z) - FedNS: Improving Federated Learning for collaborative image
classification on mobile clients [22.980223900446997]
Federated Learning (FL) is a paradigm that aims to support loosely connected clients in learning a global model.
We propose a new approach, termed Federated Node Selection (FedNS), for the server's global model aggregation in the FL setting.
We show with experiments from multiple datasets and networks that FedNS can consistently achieve improved performance over FedAvg.
arXiv Detail & Related papers (2021-01-20T06:45:46Z) - Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates the representations of samples from each client and shares them with all clients for consistency of the representation space, and an alignment module, which aligns each client's representations with those of a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in typical federated learning settings.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
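For the FeMAM entry above, a toy sketch of the additive prediction it describes: each client is assigned at most one model per level, and its personalized prediction is the sum of the outputs of its assigned models. The structure, names, and dimensions here are illustrative assumptions, not the paper's code.

```python
# Toy FeMAM-style additive prediction; structure and names are assumptions.
import torch
import torch.nn as nn

levels = [  # level 0: one shared model; deeper levels: more, finer models
    [nn.Linear(16, 4)],
    [nn.Linear(16, 4), nn.Linear(16, 4)],
]
assignment = {0: [0, 0], 1: [0, 1]}  # client id -> assigned model index per level


def personalized_predict(client_id, x):
    # Sum the outputs of the (at most one per level) models assigned to the client.
    return sum(levels[lvl][idx](x) for lvl, idx in enumerate(assignment[client_id]))


x = torch.randn(2, 16)
print(personalized_predict(0, x).shape)  # torch.Size([2, 4])
```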
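Likewise, for the pFedMB entry above, a toy sketch of a layer split into multiple branches whose outputs are combined with client-specific weights; all names and details are illustrative assumptions rather than the paper's implementation.

```python
# Toy pFedMB-style multi-branch layer; names and details are assumptions.
import torch
import torch.nn as nn


class MultiBranchLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_branches, n_clients):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(n_branches)
        )
        # One learnable combination-weight vector per client: the shared
        # branches are aggregated globally, the weights stay personalized.
        self.client_logits = nn.Parameter(torch.zeros(n_clients, n_branches))

    def forward(self, x, client_id):
        w = torch.softmax(self.client_logits[client_id], dim=0)
        return sum(wi * branch(x) for wi, branch in zip(w, self.branches))


layer = MultiBranchLinear(16, 8, n_branches=3, n_clients=5)
print(layer(torch.randn(2, 16), client_id=1).shape)  # torch.Size([2, 8])
```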
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.