NeuroADDA: Active Discriminative Domain Adaptation in Connectomics
- URL: http://arxiv.org/abs/2503.06196v1
- Date: Sat, 08 Mar 2025 12:40:30 GMT
- Title: NeuroADDA: Active Discriminative Domain Adaptation in Connectomics
- Authors: Shashata Sawmya, Thomas L. Athey, Gwyneth Liu, Nir Shavit
- Abstract summary: We introduce NeuroADDA, a method that combines optimal domain selection with source-free active learning to adapt pretrained backbones to a new dataset. NeuroADDA consistently outperforms training from scratch across diverse datasets and fine-tuning sample sizes.
- Score: 3.241925400160274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training segmentation models from scratch has been the standard approach for new electron microscopy connectomics datasets. However, leveraging pretrained models from existing datasets could improve efficiency and performance under a constrained annotation budget. In this study, we investigate domain adaptation in connectomics by analyzing six major datasets spanning different organisms. We show that Maximum Mean Discrepancy (MMD) between neuron image distributions serves as a reliable indicator of transferability and identifies the optimal source domain for transfer learning. Building on this, we introduce NeuroADDA, a method that combines optimal domain selection with source-free active learning to effectively adapt pretrained backbones to a new dataset. NeuroADDA consistently outperforms training from scratch across diverse datasets and fine-tuning sample sizes, with the largest gain observed at $n=4$ samples: a 25-67\% reduction in Variation of Information. Finally, we show that our analysis of distributional differences among neuron images from multiple species in a learned feature space reveals that these domain "distances" correlate with phylogenetic distance among those species.
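The paper's source-domain selection rests on MMD between neuron image distributions in a learned feature space. As an illustration only (not the paper's implementation; the sample data, feature dimension, and kernel bandwidth below are arbitrary assumptions), a biased RBF-kernel estimator of squared MMD can be sketched as:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between sample sets X and Y
    under an RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via the expansion
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

# Hypothetical 2-D "features": a source domain plus a nearby and a distant one.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))
near = rng.normal(0.1, 1.0, size=(200, 2))
far = rng.normal(2.0, 1.0, size=(200, 2))
# The distant domain yields the larger MMD, mirroring the idea that
# a smaller MMD flags a better source domain for transfer.
```

In practice the bandwidth `sigma` is usually set by a heuristic such as the median pairwise distance; the fixed value here is only for the sketch.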
Related papers
- Towards contrast- and pathology-agnostic clinical fetal brain MRI segmentation using SynthSeg [3.379673965672007]
We introduce a novel data-driven train-time sampling strategy that seeks to fully exploit the diversity of a given training dataset.
Our networks achieved notable improvements in the segmentation quality on testing subjects with intense anatomical abnormalities.
arXiv Detail & Related papers (2025-04-14T14:08:26Z) - SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with Equivariant Neural Networks [37.69303106863453]
SIDDA is an out-of-the-box DA training algorithm built upon the Sinkhorn divergence.
We find that SIDDA enhances the generalization capabilities of NNs.
We also study the efficacy of SIDDA on ENNs with respect to the varying group orders of the dihedral group $D_N$.
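For readers unfamiliar with the Sinkhorn divergence underlying SIDDA, a generic sketch may help (this is not SIDDA's implementation; the squared-distance cost, regularization strength `eps`, and iteration count are assumptions for illustration):

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=1.0, n_iter=200):
    """Entropic-regularized OT cost between uniform empirical measures
    on X and Y, computed via Sinkhorn iterations on the Gibbs kernel."""
    C = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)  # squared-distance cost
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u = np.ones_like(a)
    for _ in range(n_iter):  # alternate marginal-matching scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return (P * C).sum()

def sinkhorn_divergence(X, Y, eps=1.0):
    """Debiased Sinkhorn divergence: S(a,b) = OT(a,b) - (OT(a,a)+OT(b,b))/2."""
    return sinkhorn_cost(X, Y, eps) - 0.5 * (
        sinkhorn_cost(X, X, eps) + sinkhorn_cost(Y, Y, eps))
```

The debiasing term makes the divergence vanish when the two distributions coincide, which is what makes it usable as a DA alignment loss.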
arXiv Detail & Related papers (2025-01-23T19:29:34Z) - SelectiveFinetuning: Enhancing Transfer Learning in Sleep Staging through Selective Domain Alignment [3.5833494449195293]
In practical sleep stage classification, a key challenge is the variability of EEG data across different subjects and environments.
Our method utilizes a pretrained Multi Resolution Convolutional Neural Network (MRCNN) to extract EEG features.
By finetuning the model with selective source data, SelectiveFinetuning enhances the model's performance on the target domain.
arXiv Detail & Related papers (2025-01-07T13:08:54Z) - Multi-Source EEG Emotion Recognition via Dynamic Contrastive Domain Adaptation [17.956642824289453]
We introduce a multi-source dynamic contrastive domain adaptation method based on differential entropy (DE) features.
Our model outperforms several alternative domain adaptation methods in recognition accuracy, inter-class margin, and intra-class compactness.
Our study also suggests greater emotional sensitivity in the frontal and parietal brain lobes.
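The differential entropy (DE) feature mentioned above has, under a Gaussian assumption on band-filtered EEG, a closed form in the signal variance. A hedged sketch (not the paper's code; the signal below is synthetic):

```python
import numpy as np

def differential_entropy(signal):
    """DE of a signal under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * sigma^2),
    a common per-band feature in EEG emotion recognition."""
    var = np.var(signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)
```

Doubling the signal amplitude quadruples the variance, so DE rises by exactly ln 2, which makes the feature a smooth, monotone summary of band power.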
arXiv Detail & Related papers (2024-08-04T03:51:35Z) - Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN).
CMMN consists of filtering the signals in order to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent of the neural network architecture.
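The CMMN idea described above can be roughly sketched as follows (an approximation, not the authors' implementation: PSD estimation here is a crude segment-averaged periodogram, and the barycenter of Gaussian spectra is taken as the squared mean of square-root PSDs):

```python
import numpy as np

def cmmn_filters(signals, nfft=256):
    """Per-domain mapping filters in the spirit of CMMN: estimate each
    domain's PSD, form a Wasserstein barycenter of the spectra (mean of
    square-root PSDs, squared), and return the frequency responses
    H_k = sqrt(p_bar / p_k) that map each domain onto the barycenter."""
    psds = []
    for x in signals:
        # Crude PSD: average periodogram over non-overlapping windows.
        segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
        psds.append(np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0))
    psds = np.array(psds)
    p_bar = np.mean(np.sqrt(psds), axis=0) ** 2  # barycenter spectrum
    return np.sqrt(p_bar / psds)                 # one filter per domain

def apply_filter(x, H, nfft=256):
    """Apply a frequency response H to x segment-wise (sketch only;
    a real implementation would filter by time-domain convolution)."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    return np.fft.irfft(np.fft.rfft(segs, axis=1) * H, n=nfft, axis=1).ravel()
```

After filtering, two domains with different spectra (here, white noise at different scales) end up with matching power, which is the normalization effect CMMN exploits.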
arXiv Detail & Related papers (2023-05-30T08:24:01Z) - Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning via extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z) - Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z) - Deep learning based domain adaptation for mitochondria segmentation on EM volumes [5.682594415267948]
We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain.
We propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain.
In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, optimal models.
arXiv Detail & Related papers (2022-02-22T09:49:25Z) - Adversarial Domain Adaptation with Paired Examples for Acoustic Scene Classification on Different Recording Devices [10.447270433913134]
We investigate several adversarial models for domain adaptation (DA) and their effect on the acoustic scene classification task.
The experiments are performed on the DCASE20 challenge task 1A dataset, in which we can leverage the paired examples of data recorded using different devices.
The results indicate that the best-performing domain adaptation is obtained using the cycle GAN, which achieves as much as a 66% relative improvement in accuracy for the target-domain device.
arXiv Detail & Related papers (2021-10-18T19:34:12Z) - Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z) - Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.