Probabilistic Deep Discriminant Analysis for Wind Blade Segmentation
- URL: http://arxiv.org/abs/2601.13852v1
- Date: Tue, 20 Jan 2026 11:03:51 GMT
- Title: Probabilistic Deep Discriminant Analysis for Wind Blade Segmentation
- Authors: Raül Pérez-Gonzalo, Andreas Espersen, Antonio Agudo
- Abstract summary: We introduce Deep Discriminant Analysis (DDA), which directly optimizes the Fisher criterion using deep networks. We present two stable DDA loss functions and augment them with a probability loss, resulting in Probabilistic DDA (PDDA). PDDA effectively minimizes class overlap in output distributions, producing highly confident predictions with reduced within-class variance.
- Score: 18.108693090007748
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Linear discriminant analysis improves class separability but struggles with non-linearly separable data. To overcome this, we introduce Deep Discriminant Analysis (DDA), which directly optimizes the Fisher criterion utilizing deep networks. To ensure stable training and avoid computational instabilities, we incorporate signed between-class variance, bound outputs with a sigmoid function, and convert multiplicative relationships into additive ones. We present two stable DDA loss functions and augment them with a probability loss, resulting in Probabilistic DDA (PDDA). PDDA effectively minimizes class overlap in output distributions, producing highly confident predictions with reduced within-class variance. When applied to wind blade segmentation, PDDA showcases notable advances in performance and consistency, critical for wind energy maintenance. To our knowledge, this is the first application of DDA to image segmentation.
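The abstract's recipe, maximizing between-class variance while minimizing within-class variance on sigmoid-bounded outputs, with the multiplicative Fisher ratio converted into an additive log difference for stability, can be sketched as follows. This is an illustrative reading of the abstract only, not the authors' implementation; the function name, the class weighting, and the epsilon stabilizer are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dda_style_loss(logits, labels):
    """Sketch of a Fisher-criterion loss on bounded 1-D outputs.

    Outputs are squashed with a sigmoid (bounding them as the paper
    describes), and the between/within ratio is turned into an
    additive log difference to avoid multiplicative instabilities.
    """
    y = sigmoid(logits)
    classes = np.unique(labels)
    mu = y.mean()
    # between-class variance: class-prior-weighted spread of class means
    between = sum(
        (labels == c).mean() * (y[labels == c].mean() - mu) ** 2
        for c in classes
    )
    # within-class variance: class-prior-weighted spread inside each class
    within = sum(
        (labels == c).mean() * y[labels == c].var()
        for c in classes
    )
    eps = 1e-8  # assumed stabilizer against empty/degenerate classes
    # maximize between/within  <=>  minimize -(log between - log within)
    return -(np.log(between + eps) - np.log(within + eps))
```

Well-separated outputs should score a lower loss than overlapping ones, mirroring the paper's goal of reduced within-class variance.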
Related papers
- Deep Linear Discriminant Analysis Revisited [3.569867801312133]
We show that for unconstrained Deep Linear Discriminant Analysis (LDA) classifiers, maximum-likelihood training admits pathological solutions. We introduce the *Discriminative Negative Log-Likelihood* (DNLL) loss, which augments the LDA log-likelihood with a simple penalty on the mixture density.
arXiv Detail & Related papers (2026-01-04T17:59:11Z) - Distributionally Robust Optimization with Adversarial Data Contamination [49.89480853499918]
We focus on optimizing Wasserstein-1 DRO objectives for generalized linear models with convex Lipschitz loss functions. Our primary contribution lies in a novel modeling framework that integrates robustness against training data contamination with robustness against distributional shifts. This work establishes the first rigorous guarantees, supported by efficient computation, for learning under the dual challenges of data contamination and distributional shifts.
arXiv Detail & Related papers (2025-07-14T18:34:10Z) - Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation [52.36436121884317]
We show that Source-Free Domain Adaptation (SFDA) generally outperforms Unsupervised Domain Adaptation (UDA) in real-world scenarios.
SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting.
We propose a novel weight estimation method that effectively integrates available source data into multi-SFDA approaches.
arXiv Detail & Related papers (2024-11-24T13:49:29Z) - CoSDA: Continual Source-Free Domain Adaptation [78.47274343972904]
Without access to the source data, source-free domain adaptation (SFDA) transfers knowledge from a source-domain trained model to target domains.
Recently, SFDA has gained popularity due to the need to protect the data privacy of the source domain, but it suffers from catastrophic forgetting on the source domain due to the lack of data.
We propose a continual source-free domain adaptation approach named CoSDA, which employs a dual-speed optimized teacher-student model pair and is equipped with consistency learning capability.
arXiv Detail & Related papers (2023-04-13T15:53:23Z) - Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z) - The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift [127.21287240963859]
We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data.
For a large class of linear regression instances, transfer learning with $O(N^2)$ source data is as effective as supervised learning with $N$ target data.
arXiv Detail & Related papers (2022-08-03T05:59:49Z) - Bures Joint Distribution Alignment with Dynamic Margin for Unsupervised Domain Adaptation [17.06364218327213]
Unsupervised domain adaptation (UDA) is one of the prominent tasks of transfer learning.
We propose Bures Joint Distribution Alignment (BJDA), a novel alignment loss term that minimizes the kernel Bures-Wasserstein distance between the joint distributions.
Experiments show that BJDA is very effective for UDA tasks, outperforming state-of-the-art algorithms in most experimental settings.
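For reference, the classical (non-kernel) Bures-Wasserstein distance between two zero-mean Gaussians has a closed form on their covariance matrices, $BW^2(\Sigma_1, \Sigma_2) = \mathrm{tr}\,\Sigma_1 + \mathrm{tr}\,\Sigma_2 - 2\,\mathrm{tr}\,(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}$. The NumPy sketch below computes only this classical quantity; it is not the kernelized joint-distribution version BJDA actually uses, and the helper names are my own.

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

def bures_wasserstein_sq(S1, S2):
    """Squared Bures-Wasserstein distance between covariance matrices."""
    root = psd_sqrt(S1)
    cross = psd_sqrt(root @ S2 @ root)
    return np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
```

For commuting (e.g. diagonal) covariances this reduces to the squared Euclidean distance between the matrix square roots, which makes it easy to sanity-check.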
arXiv Detail & Related papers (2022-03-14T03:20:01Z) - FRuDA: Framework for Distributed Adversarial Domain Adaptation [15.054387071537567]
Unsupervised domain adaptation (uDA) can help in adapting models from a label-rich source domain to unlabeled target domains.
We introduce FRuDA: an end-to-end framework for distributed adversarial uDA.
We show that FRuDA can boost target domain accuracy by up to 50% and improve the training efficiency of adversarial uDA by at least 11 times.
arXiv Detail & Related papers (2021-12-26T13:58:55Z) - BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
arXiv Detail & Related papers (2021-12-06T03:35:21Z) - Regularized Deep Linear Discriminant Analysis [26.08062442399418]
As a non-linear extension of the classic Linear Discriminant Analysis (LDA), Deep Linear Discriminant Analysis (DLDA) replaces the original Categorical Cross Entropy (CCE) loss function with an LDA-based eigenvalue objective.
Regularization method on within-class scatter matrix is proposed to strengthen the discriminative ability of each dimension.
arXiv Detail & Related papers (2021-05-15T03:54:32Z) - Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert domain of each sample by a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.