Bio-inspired fine-tuning for selective transfer learning in image classification
- URL: http://arxiv.org/abs/2601.11235v1
- Date: Fri, 16 Jan 2026 12:28:49 GMT
- Title: Bio-inspired fine-tuning for selective transfer learning in image classification
- Authors: Ana Davila, Jacinto Colan, Yasuhisa Hasegawa
- Abstract summary: We introduce BioTune, a novel adaptive fine-tuning technique utilizing evolutionary optimization. BioTune enhances transfer learning by optimally choosing which layers to freeze and adjusting learning rates for unfrozen layers. BioTune consistently achieves top performance across four different CNN architectures.
- Score: 1.1371756033920992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has significantly advanced image analysis across diverse domains but often depends on large, annotated datasets for success. Transfer learning addresses this challenge by utilizing pre-trained models to tackle new tasks with limited labeled data. However, discrepancies between source and target domains can hinder effective transfer learning. We introduce BioTune, a novel adaptive fine-tuning technique utilizing evolutionary optimization. BioTune enhances transfer learning by optimally choosing which layers to freeze and adjusting learning rates for unfrozen layers. Through extensive evaluation on nine image classification datasets, spanning natural and specialized domains such as medical imaging, BioTune demonstrates superior accuracy and efficiency over state-of-the-art fine-tuning methods, including AutoRGN and LoRA, highlighting its adaptability to various data characteristics and distribution changes. Additionally, BioTune consistently achieves top performance across four different CNN architectures, underscoring its flexibility. Ablation studies provide valuable insights into the impact of BioTune's key components on overall performance. The source code is available at https://github.com/davilac/BioTune.
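To make the mechanism in the abstract concrete, the sketch below shows one way an evolutionary search over per-layer fine-tuning configurations can work. The genome encoding, mutation scheme, and surrogate fitness function are illustrative assumptions, not the authors' implementation; the linked repository contains the real one.
```python
# Minimal sketch of an evolutionary search over per-layer fine-tuning
# configurations, in the spirit of the BioTune abstract. The encoding,
# mutation scheme, and fitness surrogate are illustrative assumptions,
# not the authors' implementation.
import random

N_LAYERS = 12                          # layer groups in the backbone (assumed)
LR_CHOICES = [0.0, 1e-4, 3e-4, 1e-3]   # 0.0 means "freeze this layer"

def random_genome():
    # One candidate: a learning rate per layer group (0.0 = frozen).
    return [random.choice(LR_CHOICES) for _ in range(N_LAYERS)]

def fitness(genome):
    # Placeholder: in practice, briefly fine-tune the pre-trained model
    # with these per-layer learning rates and return validation accuracy.
    # Here, a fake surrogate mildly rewards unfreezing later layers.
    return sum((i / N_LAYERS) * (lr > 0) - 50.0 * lr
               for i, lr in enumerate(genome))

def mutate(genome, rate=0.2):
    # Resample each gene with probability `rate`.
    return [random.choice(LR_CHOICES) if random.random() < rate else lr
            for lr in genome]

def evolve(pop_size=16, elite=4, generations=20):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("per-layer learning rates (0.0 = frozen):", best)
```
In a real run, the fitness evaluation (a short fine-tuning pass per candidate) dominates the cost, which is why evolutionary fine-tuning methods typically score genomes under small training budgets.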
Related papers
- Harnessing Large Language Models for Biomedical Named Entity Recognition [4.376764535031509]
BioNER is a foundational task in medical informatics, crucial for downstream applications like drug discovery and clinical trial matching.
We introduce BioSelectTune, a highly efficient, data-centric framework for fine-tuning general-domain Large Language Models.
Our model, trained on only 50% of the curated positive data, surpasses the fully-trained baseline.
arXiv Detail & Related papers (2025-12-28T01:34:23Z)
- CellPainTR: Generalizable Representation Learning for Cross-Dataset Cell Painting Analysis [51.56484100374058]
We introduce CellPainTR, a Transformer-based architecture designed to learn foundational representations of cellular morphology.
Our work represents a significant step towards creating truly foundational models for image-based profiling, enabling more reliable and scalable cross-study biological analysis.
arXiv Detail & Related papers (2025-09-02T03:30:07Z)
- Transfer learning optimization based on evolutionary selective fine tuning [2.271776292902496]
Transfer learning offers a strategy for adapting pre-trained models to new tasks.
Traditional fine-tuning often involves updating all model parameters.
BioTune selectively fine-tunes layers to enhance transfer learning efficiency.
arXiv Detail & Related papers (2025-08-21T08:51:43Z)
- Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification [0.12499537119440242]
Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases.
We show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.
arXiv Detail & Related papers (2024-09-24T12:02:55Z)
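As a rough illustration of the augmentation this entry describes, real and synthetic images can simply be pooled at training time; the directory paths below are placeholders.
```python
# Hedged sketch: pooling real and generated histopathology images for
# training, as the entry above suggests. Dataset paths are placeholders.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])

real = datasets.ImageFolder("data/histopathology/real", transform=tfm)
synthetic = datasets.ImageFolder("data/histopathology/synthetic", transform=tfm)

# Train on the union; the synthetic set augments the real one.
train_loader = DataLoader(ConcatDataset([real, synthetic]),
                          batch_size=32, shuffle=True)
```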
- Active Learning Based Domain Adaptation for Tissue Segmentation of Histopathological Images [1.4724454726700604]
We propose an approach in which a pre-trained deep neural network uses a small set of labeled data from the target domain to select the most informative unlabeled samples to annotate next.
We demonstrate that our approach reaches similar F1-scores with significantly fewer labeled samples than traditional supervised learning approaches.
arXiv Detail & Related papers (2023-03-09T13:03:01Z)
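The summary does not name the acquisition criterion, so the sketch below uses predictive entropy, a common default; the paper's criterion may differ.
```python
# Hedged sketch of one active-learning selection step: score unlabeled
# target-domain samples by predictive entropy and pick the top-k for
# annotation. Entropy is just one common acquisition function.
import torch

@torch.no_grad()
def select_most_informative(model, unlabeled_loader, k=100, device="cpu"):
    model.eval()
    all_entropy = []
    for x, _ in unlabeled_loader:          # assumes (image, _) batches
        probs = torch.softmax(model(x.to(device)), dim=1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        all_entropy.append(ent.cpu())
    scores = torch.cat(all_entropy)
    # Highest-entropy samples are the most informative to label next;
    # indices map to dataset positions if the loader is not shuffled.
    return scores.topk(min(k, len(scores))).indices
```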
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
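One plausible reading of the pseudo-feature idea, sketched below with synthetic data: a column present only upstream is imputed for the downstream table by a regressor fit on the shared columns. This is an assumption about the mechanism, not the paper's exact recipe.
```python
# Hedged sketch of a pseudo-feature step: a column that exists upstream
# but not downstream is imputed by a regressor fit on the shared columns.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
shared_up = rng.normal(size=(500, 8))        # upstream: shared columns
missing_up = shared_up[:, :3].sum(axis=1)    # upstream-only column
shared_down = rng.normal(size=(200, 8))      # downstream: shared columns only

# Learn the upstream-only column from the shared ones...
imputer = RandomForestRegressor(n_estimators=100, random_state=0)
imputer.fit(shared_up, missing_up)

# ...then append the predicted pseudo-feature to the downstream table.
pseudo = imputer.predict(shared_down)
down_full = np.column_stack([shared_down, pseudo])
print(down_full.shape)  # (200, 9): downstream rows now match upstream width
```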
- SATS: Self-Attention Transfer for Continual Semantic Segmentation [50.51525791240729]
Continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning.
This study proposes to transfer a new type of knowledge: the relationships between elements within each image.
This relationship information can be effectively obtained from the self-attention maps of a Transformer-style segmentation model.
arXiv Detail & Related papers (2022-03-15T06:09:28Z)
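A generic way to realize this transfer is a distillation term that keeps the current model's attention maps close to those of the frozen previous-step model; the loss below is a sketch in that spirit, not necessarily SATS's exact formulation.
```python
# Hedged sketch: distill self-attention maps from the frozen previous-step
# model into the current one, so element-to-element relationships survive
# continual updates. A generic MSE term, not necessarily SATS's exact loss.
import torch
import torch.nn.functional as F

def attention_transfer_loss(new_attn_maps, old_attn_maps):
    """Each list holds per-layer attention tensors of shape
    (batch, heads, tokens, tokens) computed on the same input batch."""
    loss = 0.0
    for new_a, old_a in zip(new_attn_maps, old_attn_maps):
        loss = loss + F.mse_loss(new_a, old_a.detach())
    return loss / len(new_attn_maps)

# Typical use inside a continual-learning step (names are placeholders):
#   total = seg_loss + lam * attention_transfer_loss(new_attn, old_attn)
```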
- Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing [55.52858954615655]
We conduct a systematic study on fine-tuning stability in biomedical NLP.
We show that fine-tuning performance may be sensitive to pretraining settings, especially in low-resource domains.
We show that these techniques can substantially improve fine-tuning performance for low-resource biomedical NLP applications.
arXiv Detail & Related papers (2021-12-15T04:20:35Z)
- Domain Adaptation and Active Learning for Fine-Grained Recognition in the Field of Biodiversity [7.24935792316121]
Unsupervised domain adaptation can be used for fine-grained recognition in a biodiversity context.
Using domain adaptation and Transferable Normalization, the accuracy of the classifier could be increased by up to 12.35%.
Surprisingly, we found that more sophisticated strategies provide better results than the random selection baseline for only one of the two datasets.
arXiv Detail & Related papers (2021-10-22T13:34:13Z)
- Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases the recall and F1 score by 5.0% and 7.8% compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
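The domain-adversarial component rests on a gradient reversal layer, a standard DANN-style construction sketched below; TAN's task semantic adaptor is not shown.
```python
# Hedged sketch of the gradient reversal layer (GRL) used in
# domain-adversarial training: identity on the forward pass, negated
# (scaled) gradient on the backward pass. A standard construction,
# not TAN's full method.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the feature
        # extractor, so features are trained to fool the domain classifier.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage (placeholders): domain_logits = domain_head(grad_reverse(features))
```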
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model while updating the labels on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
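The alternation described above can be written as a short loop; `fine_tune` and `pseudo_label` below are stubs standing in for the real training and inference routines.
```python
# Hedged sketch of ATSO's alternation as summarized above: split the
# unlabeled pool in two, and in each round fine-tune on one half's current
# pseudo-labels while regenerating pseudo-labels on the other half.

def fine_tune(model, data):
    # Stub: run a few epochs of supervised training on `data`.
    return model

def pseudo_label(model, samples):
    # Stub: run inference on `samples` and attach predicted labels.
    return list(samples)

def atso_loop(model, labeled, unlabeled, rounds=6):
    halves = {"a": unlabeled[: len(unlabeled) // 2],
              "b": unlabeled[len(unlabeled) // 2:]}
    pseudo = {k: pseudo_label(model, v) for k, v in halves.items()}
    train_key, relabel_key = "a", "b"
    for _ in range(rounds):
        # Fine-tune on labeled data plus one half's current pseudo-labels...
        model = fine_tune(model, labeled + pseudo[train_key])
        # ...then refresh the other half's pseudo-labels with the updated
        # model, so no half is labeled by the model just trained on it.
        pseudo[relabel_key] = pseudo_label(model, halves[relabel_key])
        train_key, relabel_key = relabel_key, train_key
    return model
```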
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.