Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images
- URL: http://arxiv.org/abs/2406.02385v1
- Date: Tue, 4 Jun 2024 15:00:49 GMT
- Title: Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images
- Authors: Xinyang Pu, Feng Xu
- Abstract summary: Deep learning models deployed onboard satellites enable real-time interpretation of remote sensing images.
This paper proposes a method based on parameter-efficient fine-tuning with a low-rank adaptation (LoRA) module.
By fine-tuning and updating only 12.4% of the model's total parameters, it achieves 97% to 100% of the performance of fully fine-tuned models.
- Score: 5.234109158596138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models deployed onboard satellites enable real-time interpretation of remote sensing images, reducing the need for data transmission to the ground and conserving communication resources. As satellite numbers and observation frequencies increase, the demand for onboard real-time image interpretation grows, highlighting the expanding importance and development of this technology. However, updating the extensive parameters of spaceborne object detection models deployed on satellites is challenging due to the limited uplink bandwidth of wireless satellite communications. To address this issue, this paper proposes a method based on parameter-efficient fine-tuning with a low-rank adaptation (LoRA) module. It involves training low-rank matrix parameters and integrating them with the original model's weight matrix through multiplication and summation, thereby fine-tuning the model to adapt to new data distributions with minimal weight updates. The proposed method combines parameter-efficient fine-tuning with full fine-tuning in the parameter update strategy of the oriented object detection architecture. This strategy achieves model performance close to that of full fine-tuning with minimal parameter updates. In addition, low-rank approximation is conducted to pick an optimal rank for the LoRA matrices. Extensive experiments verify the effectiveness of the proposed method. By fine-tuning and updating only 12.4% of the model's total parameters, it achieves 97% to 100% of the performance of fully fine-tuned models. Additionally, the reduced number of trainable parameters accelerates model training iterations and enhances the generalization and robustness of the oriented object detection model. The source code is available at: https://github.com/fudanxu/LoRA-Det.
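To make the update rule concrete, below is a minimal NumPy sketch of the mechanism the abstract describes: low-rank factors are trained, multiplied together, scaled, and summed with the frozen weight, followed by a rough illustration of SVD-based rank selection. The dimensions, rank, and scaling factor are illustrative assumptions, not values from the paper or the LoRA-Det code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the detector's transformer layers have their own shapes.
d_out, d_in, r, alpha = 256, 256, 16, 32

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor (down-projection)
B = np.zeros((d_out, r))                # trainable factor, zero-init so the update starts at 0

# LoRA update: multiply the factors, scale by alpha / r, and sum with the frozen weight.
W_adapted = W + (alpha / r) * (B @ A)

# Only A and B would need to be uplinked to the satellite, not the full weight.
trainable = A.size + B.size
print(f"trainable: {trainable} vs frozen: {W.size} ({100 * trainable / W.size:.1f}%)")

# Hedged sketch of the rank-selection idea: truncate the SVD of a reference update
# and check how much of its energy each rank retains.
dW = rng.normal(size=(d_out, d_in))     # stand-in for a full fine-tuning update
U, s, Vt = np.linalg.svd(dW, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print("rank capturing 90% of the energy:", int(np.searchsorted(energy, 0.90)) + 1)
```

In the uplink-constrained setting the abstract describes, only the low-rank factors (plus any fully fine-tuned components) would be transmitted, and the merged weight reconstructed onboard.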
Related papers
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low-Rank Adaptation (LoRA) is a popular Parameter-Efficient Fine-Tuning (PEFT) method that effectively adapts large pre-trained models to downstream tasks.
We propose a novel approach that employs a low-rank tensor parametrization for model updates.
Our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.
arXiv Detail & Related papers (2024-10-05T06:59:50Z)
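As a rough picture of what a low-rank tensor parametrization of updates can look like, the toy below factorizes the updates of several layers jointly with CP-style factors. The shapes and the factorization choice are invented for illustration and are not LoRTA's actual parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: L layers, each with a d_out x d_in weight update, joint rank r.
L, d_out, d_in, r = 4, 64, 64, 8

# One CP-style factor per tensor mode; the layer mode is shared across layers.
layer_f = rng.normal(size=(L, r)) * 0.1
out_f = rng.normal(size=(d_out, r)) * 0.1
in_f = np.zeros((d_in, r))              # zero-init so every dW starts at zero

# Reconstruct per-layer updates: dW[l] = sum_k layer_f[l,k] * outer(out_f[:,k], in_f[:,k])
dW = np.einsum('lk,ok,ik->loi', layer_f, out_f, in_f)
print(dW.shape)  # (4, 64, 64)

# Parameter comparison with per-layer LoRA at the same rank.
tensor_params = (L + d_out + d_in) * r
lora_params = L * (d_out + d_in) * r
print(f"tensor: {tensor_params} vs per-layer LoRA: {lora_params}")
```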
- Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape [52.98187034726091]
Low-Rank Adaptation (LoRA) is an efficient way to fine-tune models by optimizing only a low-rank matrix.
A solution that appears flat in the LoRA space may still lie along sharp directions in the full parameter space, potentially harming generalization performance.
We propose Flat-LoRA, an efficient approach that seeks a low-rank adaptation located in a flat region of the full parameter space.
arXiv Detail & Related papers (2024-09-22T11:24:10Z)
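The flat-region idea can be pictured on a toy problem: train LoRA factors while injecting random perturbations in the full weight space, a crude proxy for preferring flat minima. The quadratic objective, noise scale, and update rule below are all invented for illustration and are not Flat-LoRA's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lr, sigma = 16, 2, 0.05, 0.01

W = rng.normal(size=(d, d))             # frozen weight
target = rng.normal(size=(d, d))        # toy target for the merged weight
A = rng.normal(size=(r, d)) * 0.1
B = np.zeros((d, r))

for _ in range(200):
    eps = rng.normal(size=(d, d)) * sigma   # perturbation in the FULL space
    merged = W + B @ A + eps                # perturbed merged weight
    grad = 2.0 * (merged - target)          # d/dM of ||M - target||^2
    B -= lr * grad @ A.T                    # chain rule into the low-rank factors
    A -= lr * B.T @ grad

residual = np.linalg.norm(W + B @ A - target)
print(f"residual after noisy low-rank training: {residual:.3f}")
```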
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We find that parameters with the smallest absolute values do not contribute to the generation process, and propose a novel fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
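A crude way to picture "making use of ineffective parameters" is to train only the smallest-magnitude entries of a pre-trained weight through a sparse mask, as sketched below. The threshold, mask rule, and objective are assumptions for illustration; SaRA's actual progressive sparse low-rank method is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr = 32, 0.1

W = rng.normal(size=(d, d))                     # pre-trained weight
# Treat the 5% smallest-magnitude entries as "ineffective" and trainable.
mask = np.abs(W) <= np.quantile(np.abs(W), 0.05)
print("trainable entries:", int(mask.sum()), "of", W.size)

target = rng.normal(size=(d, d))                # toy adaptation objective
for _ in range(100):
    grad = 2.0 * (W - target)                   # d/dW of ||W - target||^2
    W -= lr * grad * mask                       # update only the masked entries
```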
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Low-Rank Representations Meets Deep Unfolding: A Generalized and Interpretable Network for Hyperspectral Anomaly Detection [41.50904949744355]
Current hyperspectral anomaly detection (HAD) benchmark datasets suffer from low resolution, simple background, and small size of the detection data.
These factors also limit the performance of the well-known low-rank representation (LRR) models in terms of robustness.
We build a new set of HAD benchmark datasets, termed AIR-HAD, to improve the robustness of HAD algorithms in complex scenarios.
arXiv Detail & Related papers (2024-02-23T14:15:58Z)
- Flora: Low-Rank Adapters Are Secretly Gradient Compressors [30.224822087562163]
Low-rank adaptation (LoRA) was proposed to reduce optimizer states by training fewer parameters.
However, LoRA restricts the overall weight update matrices to be low-rank, limiting model performance.
We propose Flora, which is able to achieve high-rank updates by resampling the projection matrices.
arXiv Detail & Related papers (2024-02-05T18:50:39Z)
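The compression view can be pictured as follows: keep optimizer state in a projected r-dimensional space, decompress to apply updates, and periodically resample the random projection so accumulated updates are not trapped in one rank-r subspace. This toy makes several assumptions (plain SGD, Gaussian projections, arbitrary resampling period) and is not Flora's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, lr, steps, resample_every = 32, 32, 4, 0.1, 20, 5

W = rng.normal(size=(d_out, d_in))
P = rng.normal(size=(d_in, r)) / np.sqrt(r)       # random down-projection

for t in range(steps):
    grad = rng.normal(size=(d_out, d_in))         # stand-in for a true gradient
    compressed = grad @ P                         # store only a d_out x r state
    W -= lr * (compressed @ P.T)                  # decompress and apply the update
    if (t + 1) % resample_every == 0:
        # Resampling lets updates accumulate beyond a single rank-r subspace.
        P = rng.normal(size=(d_in, r)) / np.sqrt(r)
```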
- SatDM: Synthesizing Realistic Satellite Image with Semantic Layout Conditioning using Diffusion Models [0.0]
Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant promise in synthesizing realistic images from semantic layouts.
In this paper, a conditional DDPM is implemented that takes a semantic map and generates high-quality, diverse satellite images that accurately correspond to the layout.
The effectiveness of our proposed model is validated using a meticulously labeled dataset introduced within the context of this study.
arXiv Detail & Related papers (2023-09-28T19:39:13Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- High-order Spatial Interactions Enhanced Lightweight Model for Optical Remote Sensing Image-based Small Ship Detection [18.60170221864557]
We propose a novel lightweight framework called HSI-ShipDetectionNet that is based on high-order spatial interactions.
HSI-ShipDetectionNet includes a prediction branch specifically for tiny ships and a lightweight hybrid attention block for reduced complexity.
Our model is evaluated using the public Kaggle marine ship detection dataset and compared with multiple state-of-the-art models.
arXiv Detail & Related papers (2023-04-07T18:40:49Z)
- Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
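The over-the-air idea can be pictured with a toy superposition: each device pre-rotates its transmission with a unit-modulus weight that cancels its channel phase, so all signals add coherently at the receiver and the sum (here, a channel-magnitude-weighted average of local models) is computed by the medium itself. The channel model and estimator below are simplifying assumptions, not the paper's UM-AirComp design.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 32                                       # devices, model dimension

h = rng.normal(size=K) + 1j * rng.normal(size=K)   # flat-fading channels
local = rng.normal(size=(K, d))                    # local model parameters

# Unit-modulus transmit weights: phase-only correction, |b_k| = 1 for every k.
b = np.exp(-1j * np.angle(h))

# All devices transmit at once; the receiver observes the superposition.
y = np.sum((h * b)[:, None] * local, axis=0)       # equals sum_k |h_k| * local[k]

# Normalize to get a channel-weighted estimate of the average model.
avg_est = np.real(y) / np.sum(np.abs(h))
print("error vs true mean:", np.linalg.norm(avg_est - local.mean(axis=0)))
```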
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.